Karen Clark, who created the world’s first hurricane catastrophe model in 1983, explains that there is very little data for most peril regions
February 29, 2016: When Karen Clark created the world’s first hurricane catastrophe model in 1983, she helped revolutionise the insurance industry. The model allowed insurers to get a grip on their exposure to hurricanes and earthquakes and to price their premiums more accurately. An expert in catastrophe risk assessment, Karen went on to found AIR Worldwide, the first catastrophe modelling company. After selling AIR in 2002, she launched a new venture, Karen Clark & Company. Excerpts from an interview:
What was the ethos behind Karen Clark & Company’s formation?
Because Karen Clark & Company (KCC) is not in the business of selling the traditional models, we can openly and independently inform our clients about the models, the model assumptions and model usage.
We’ve been able to observe how insurers and reinsurers are using the models and, in particular, how they’re dealing with the model limitations and uncertainty. Because the traditional models are “black boxes”, companies have had to develop costly processes around the models to try to infer what’s inside the box and to evaluate the credibility of the model output.
These detailed observations led us to think about how the models and modelling processes could be improved and how the current state of practice could be advanced. Ultimately, we determined the next logical innovation in this space was an open loss-modelling platform.
What is different about the company’s approach to catastrophe modelling?
The traditional catastrophe models are proprietary to the model vendors, which means most of the model assumptions are “secret” and not visible to the users. This is why (re)insurers have to spend a lot of time and resources trying to understand and “validate” the models and model updates. But (re)insurers can never truly validate a closed third-party model or be sure what’s inside, even though they spend a lot of time and money trying to do so.
KCC developed the RiskInsight® open loss modelling platform, which makes it much more efficient for insurers and reinsurers to understand what’s inside the box and to take ownership of the risk.
Open loss modelling platforms are advanced tools because they provide more insight into a company’s large loss potential and they enable model users to better leverage their own internal expertise. RiskInsight makes it much more efficient for model users to customise the model assumptions to better reflect their own views of risk and their actual loss experience.
How is the company changing the way modelling is used and perceived in the insurance industry?
Over the past two decades, there has been a growing misconception that the models are becoming more accurate over time. The models will never be accurate because there is so little data for most peril regions. Scientists rely on observations and data for the model assumptions. The catastrophe models are no different from other types of statistical models — where there is little data, there is low confidence and high uncertainty.
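The point about sparse data can be made concrete with a toy simulation. Everything here – the Pareto distribution, its shape parameter, the 30-year record length – is invented purely for illustration, not drawn from any real model: estimate an extreme loss from only 30 “years” of heavy-tailed observations, repeat the experiment many times, and watch the estimates swing.

```python
import random

random.seed(42)

def sample_annual_losses(n_years):
    # Heavy-tailed "annual losses" from a Pareto distribution;
    # the shape parameter 1.5 is purely illustrative.
    return [random.paretovariate(1.5) for _ in range(n_years)]

def worst_observed(losses):
    # The naive "1-in-30 year loss": the largest value seen in the record.
    return max(losses)

# Repeat the experiment 1,000 times: each trial is a different
# hypothetical 30-year observation record.
estimates = [worst_observed(sample_annual_losses(30)) for _ in range(1000)]

# The spread across trials is enormous: with so little data, the
# estimate of the extreme loss is dominated by sampling noise.
print(f"smallest estimate: {min(estimates):.1f}")
print(f"largest estimate:  {max(estimates):.1f}")
```

The same short record can suggest a very tame or a very severe extreme-loss picture depending purely on which 30 years happened to be observed – which is the “low confidence, high uncertainty” point above.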
There was an increasing trend to use the models – wrongly – as if they provided definitive answers to questions such as “what is my 1-in-100 year Probable Maximum Loss (PML)?” (In some sense, model users didn’t have much choice, because all you can do with the traditional vendor models is put in your exposure data and generate loss estimates.)
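For context, a “1-in-100 year PML” is simply a high quantile of the modelled annual loss distribution: the loss level exceeded with 1% annual probability. A minimal sketch, with entirely synthetic losses standing in for real model output (the exponential distribution and dollar amounts are placeholders, not claims about actual hurricane losses):

```python
import random

def pml(annual_losses, return_period):
    """Empirical PML: the loss level exceeded with annual probability
    1/return_period, read off the sorted simulated annual losses."""
    losses = sorted(annual_losses)
    k = int(len(losses) * (1 - 1.0 / return_period))
    return losses[k]

random.seed(0)
# 10,000 synthetic annual loss totals ($m) from a placeholder distribution.
annual = [random.expovariate(1 / 100.0) for _ in range(10_000)]

loss_100yr = pml(annual, 100)
exceed_frac = sum(1 for x in annual if x >= loss_100yr) / len(annual)
print(f"1-in-100 year PML: ${loss_100yr:,.0f}m "
      f"(exceeded in {exceed_frac:.1%} of simulated years)")
```

The number itself is only as credible as the simulated loss distribution behind it, which is exactly why treating it as a definitive answer is a misuse of the models.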
KCC is advancing the models so they can be used more appropriately as tools for better understanding and managing catastrophe risk. With open platforms, you can see and customise the model assumptions so you can have more confidence in and control over your risk management decisions.
In the past, you have expressed concerns that insurers, rating agencies and regulators have become too reliant on the models. Is this still the case? If so, what needs to change to stop that over-reliance?
It became clear that the models had become the primary – and frequently the only – tools used by insurers and reinsurers to underwrite, price, and manage catastrophe risk. Obviously, this is not an optimal or desirable situation.
At KCC, we do believe the models provide the most robust structured approach to catastrophe loss estimation. So the challenge was how to make the models better (and the numbers more believable). We found the solution is not for insurers to use the models differently. The solution is a new type of model – an open model – and additional risk metrics for managing large loss potential.
Certainly, if you’re going to rely on a model, you need to know exactly what’s in the model and you need to understand and believe the numbers coming out of the models – not because the numbers are “right” but because you know the assumptions driving the loss estimates are credible and fit with your own risk appetite.
Hurricane Katrina caused a re-evaluation of the models. Would the industry be better prepared if the storm happened today?
The problem is not being prepared for the last major storm – but being prepared for the next big storm. The models can become backward-looking tools when they’re overly calibrated to the last event because we can be sure the next event will not be like the past one – every event is unique.
Another shortcoming of the traditional models is they provide one type of output, which shows the estimated probabilities of different loss amounts being exceeded. This information is very valuable for certain decisions, but it doesn’t answer all the questions CEOs and Boards have about potential catastrophe losses.
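That standard output is the exceedance-probability (EP) curve. Under the common assumption that catastrophe events arrive as independent Poisson processes, an EP curve can be sketched from an event catalog of annual rates and loss sizes; the catalog below is completely fictional and exists only to show the mechanics.

```python
import math

# Hypothetical event catalog: (annual occurrence rate, loss in $bn).
events = [
    (0.10, 5), (0.05, 15), (0.02, 40), (0.01, 70), (0.004, 120),
]

def occurrence_ep(threshold, events):
    """Probability that at least one event in a year causes a loss of at
    least `threshold`, assuming independent Poisson event occurrences."""
    rate = sum(r for r, loss in events if loss >= threshold)
    return 1 - math.exp(-rate)

for t in (10, 50, 100):
    print(f"P(annual loss >= ${t}bn) = {occurrence_ep(t, events):.4f}")
```

Reading across thresholds traces out the whole curve – useful for capital and reinsurance decisions, but, as noted above, it is only one view of the risk.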
In response to the demand for more information, KCC developed the Characteristic Event (CE) methodology. In the CE approach, the probabilities are defined by the hazard, and losses are estimated for selected return-period events. These events are scientifically derived and then systematically “floated” across each region to ensure full spatial coverage that does not over- or under-penalise specific locations.
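The “floating” idea can be sketched in a few lines. Everything below – the coastline geometry, exposure values, footprint width, and damage ratio – is invented for illustration; the point is only that the same fixed-severity event is evaluated at many landfall points, producing a loss per location rather than a probability per loss amount.

```python
# Hypothetical exposure: insured value ($bn) at points along a 500 km coastline.
exposure = {0: 2.0, 100: 8.0, 150: 1.5, 300: 12.0, 450: 3.0}

FOOTPRINT_KM = 80    # half-width of the event's damage swath (illustrative)
DAMAGE_RATIO = 0.15  # mean damage ratio inside the footprint (illustrative)

def ce_loss(landfall_km):
    """Portfolio loss if the characteristic event makes landfall at this point."""
    return sum(value * DAMAGE_RATIO
               for km, value in exposure.items()
               if abs(km - landfall_km) <= FOOTPRINT_KM)

# "Float" the same event along the coast in 50 km steps.
losses = {p: ce_loss(p) for p in range(0, 501, 50)}
for point, loss in losses.items():
    print(f"landfall at km {point:3d}: ${loss:.2f}bn")
```

Scanning the loss profile along the coast shows which exposure concentrations drive the result for that return-period event – the kind of question boards ask that a single exceedance curve does not answer.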
How did the first catastrophe models come about?
The idea for the model came a few years earlier, when I was working as a Research Associate at Commercial Union Insurance Company and was asked to figure out whether the company had too much coastal exposure. Before I could complete the model, my department was disbanded, but I was already hooked on the idea of a hurricane model as a decision-making tool. For some reason, this area of research really intrigued me.
On my own, I wrote a paper, “A Formal Approach to Catastrophe Risk Assessment and Management”, that was published by the Casualty Actuarial Society. An actuary at what was at that time the E.W. Blanch Company read the paper. Blanch became my first client, and the rest is history.
Did you realise at the time that these models would become vital to the industry? What role did Hurricane Andrew play in the process?
When I started AIR, I had no idea I was creating a tool and a whole new profession that would become so important to the insurance industry. I simply loved what I was doing.
For the first five years, before Hurricane Andrew, it was difficult to convince many companies that they needed a new tool to assess and manage catastrophe risk. Most companies thought their old methods were just fine and that the worst-case scenario was a $7 billion industry event. My model was saying it was more like $70 billion.
Hurricane Andrew hit well south of the most populated areas of Miami and caused $15 billion in losses. Anyone could calculate that had the storm made a direct hit on downtown Miami, the losses would have been closer to $60 billion. Hurricane Andrew was the wake-up call that made all companies believe in the models.