The problem with current catastrophe-risk models

Thanks, in part, to emerging technologies, the future should hold many positives in risk understanding and predictability.

Natural catastrophes are becoming more frequent and severe. This was the scene in Mexico Beach, Fla., in 2018 in the aftermath of Hurricane Michael. (Terry Kelly/Shutterstock)

2022 is on track to be one of the warmest years in recorded history, even as natural disasters become ever more severe.

In recent years, insurers have become reliant on various catastrophe models to measure the impact of these devastating events. However, these models aren’t always accurate, as natural catastrophes can be unpredictable, and scrutiny of current catastrophe models is beginning to grow.

How can insurers measure potential losses from a major event in a way that adequately prepares them for the future? Below are six common questions about catastrophe modeling and how this system can best evolve to serve insurers and policyholders.

What can we expect from natural catastrophes (CAT) this year as compared to previous years? 

I expect higher wildfire, flood, hurricane, tornado and hail activity. We’ve already seen wildfire activity in California and Arizona this year, and wildfire season typically doesn’t even start until the fall.

As for flooding, recent events at Yellowstone National Park were extreme enough to close that area to the public. While flooding is less predictable, we are still braced for increased activity. All major hurricane forecasters are also expecting an above-average year. Tornadic activity has started early, and we’ve witnessed the tornado belt expanding eastward.

What’s causing the expected growth in these risks?

Frankly, a variety of factors, including the fact that exposures continue to increase in CAT-prone areas such as Florida, Texas, the states located in ‘tornado alley’ and the wildfire-prone areas of the western United States. This exposure growth increases event severity. Frequency also seems to be increasing, as global warming and warmer sea surface temperatures are driving more activity.

How are these risks currently measured, and what’s making measurement of these risks more difficult or questionable?

Most CAT risks are measured with one of the standard industry models, such as those available from AIR or RMS. Alternatively, some firms use a blended view or create a proprietary model of their own. Many factors are driving the difficulty in measurement. Primarily, most models tend to look backward, reviewing old data to predict new activity. This works if the future resembles the past, but not as well when the environment is changing. Model builders are seeking to solve this by working toward quicker adaptation to encompass frequency changes in wind, wildfire and tornadic activity as well as severity changes tied to population moves, valuation issues and event-scale issues.
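To make the backward-looking point concrete, here is a minimal, hypothetical sketch (not any vendor’s model; the annual event counts are invented) of how a purely historical frequency estimate lags a changing environment, while a recency-weighted estimate adapts somewhat faster:

```python
import statistics

# Hypothetical annual event counts, invented for illustration only;
# a real model would draw on decades of curated catastrophe data.
historical_counts = [2, 1, 3, 2, 2, 4, 3, 5, 4, 6]  # gently trending upward

# Purely backward-looking view: the long-run average becomes the
# expected event frequency for next year.
long_run_rate = statistics.mean(historical_counts)

# A recency-weighted view adapts faster to a changing environment by
# giving more weight to recent years (linear weights, chosen arbitrarily).
weights = list(range(1, len(historical_counts) + 1))
recency_rate = sum(w * c for w, c in zip(weights, historical_counts)) / sum(weights)

print(f"Long-run average rate: {long_run_rate:.2f} events/year")
print(f"Recency-weighted rate: {recency_rate:.2f} events/year")
```

With these made-up counts the recency-weighted estimate sits noticeably above the long-run average, which is the gap a slow-to-adapt model leaves on the table.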

Given these challenges, how can insurers prepare themselves (and insureds) to responsibly navigate catastrophe risks?

Insurers and insureds can prepare themselves in a few ways. Insurers should use multiple views of risk rather than relying solely on one model, manage aggregations, spread risk and use simplified probable maximum loss (PML) approaches. Both insurers and insureds should ensure risks carry proper valuations so they can quantify what’s at risk.
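As a rough illustration of what a simplified PML estimate can look like (a toy sketch with invented frequency and severity assumptions, not any particular insurer’s method), the snippet below simulates many years of losses and reads the PML off a high percentile of the simulated distribution:

```python
import random

random.seed(42)

def simulate_annual_loss(event_rate=2.0, loss_scale=50e6, sigma=1.0):
    """One simulated year: Poisson-like event count, lognormal severities (toy numbers)."""
    # Draw the event count by accumulating exponential inter-arrival times.
    n_events, t = 0, random.expovariate(event_rate)
    while t < 1.0:
        n_events += 1
        t += random.expovariate(event_rate)
    return sum(random.lognormvariate(0, sigma) * loss_scale for _ in range(n_events))

# Simulate many years and take a high quantile of the loss distribution
# as a simplified PML figure.
losses = sorted(simulate_annual_loss() for _ in range(10_000))
pml_250yr = losses[int(0.996 * len(losses))]  # ~1-in-250-year loss level
print(f"Simulated 1-in-250-year PML: ${pml_250yr / 1e6:.0f}M")
```

The 99.6th percentile corresponds roughly to a 1-in-250-year loss; a real analysis would replace the made-up frequency and severity parameters with a calibrated event catalog and the insurer’s actual exposures.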

Looking ahead, what role does technology play in the balance of being flexible with natural-catastrophe risks while also quantifying them?

Technology will play a significant role in continuing to improve risk assessment, primarily through increased computing power that enables more simulations and ‘what if’ scenarios to be run, which will increase predictability. Machine learning will allow models to adapt to environmental changes more quickly. Technology will also enable new, more complex models that better capture climate changes such as rising sea surface temperatures and sea levels. Lastly, exposure databases will be more accessible and intertwined within models, allowing more accurate information (e.g., building valuations) to be captured in real time.
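To picture what cheaper compute and ‘what if’ scenarios buy, here is another toy sketch (same kind of invented frequency and severity assumptions as the PML example above, not a real catastrophe model): rerunning the whole simulation under hypothetical climate scenarios and comparing the resulting tail losses.

```python
import random

def annual_loss(event_rate, loss_scale=50e6, sigma=1.0, rng=random):
    """Toy annual loss: event count from exponential inter-arrivals, lognormal severities."""
    n_events, t = 0, rng.expovariate(event_rate)
    while t < 1.0:
        n_events += 1
        t += rng.expovariate(event_rate)
    return sum(rng.lognormvariate(0, sigma) * loss_scale for _ in range(n_events))

def tail_loss(event_rate, n_years=20_000, percentile=0.99):
    """Rerun the simulation and return the ~1-in-100-year annual loss."""
    rng = random.Random(7)  # fixed seed so scenarios differ only in their assumptions
    losses = sorted(annual_loss(event_rate, rng=rng) for _ in range(n_years))
    return losses[int(percentile * n_years)]

# Hypothetical 'what if' scenarios: bump event frequency to mimic the effect
# of warmer sea surface temperatures, then compare tail losses.
for label, rate in [("baseline", 2.0), ("+10% frequency", 2.2), ("+25% frequency", 2.5)]:
    print(f"{label:>15}: 1-in-100-year loss ${tail_loss(rate) / 1e6:.0f}M")
```

More computing power simply means more of these reruns, at finer geographic resolution and with richer scenario sets, than was practical a few years ago.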

What does the future hold for quantifying these risks? What would you expect future models to look like to ensure greater accuracy? Why don’t models adapt more quickly?

The future should hold many positives in risk understanding and predictability, including better capturing of policy terms (e.g., wind and surge are bundled together in some models) with more detailed data; a stronger ability to adapt quickly to environmental changes; and an increased ability to allow user input to adjust terms (individual adaptation) and views of risk.

I also expect current models will be updated more frequently, and machine learning could lead to immediate adaptation. Historically, models have typically only seen major adjustments after events occur, with those events effectively providing the stress test. This is driven by a tendency to focus on singular perils as well as the habit of looking to the past rather than the future. I expect the future to be better, with more resources to research perils continuously, more collaboration with users, and closer alignment and tie-in with regulatory changes.

Mark Bernacki is chief underwriting officer at Amwins. These opinions are his own.
