The insurance industry has been using statistical models to provide guidance towards data-driven decision-making for many years.
Life insurance companies have mortality tables going back a century or more, allowing actuaries to determine rates for today's life insurance customers that will maintain future profitability while fulfilling the promises made in the policies.
The explosion of computer technology in the last 50 years has created opportunities for data gathering and analysis that could only be dreamed of even a generation ago. Insurers now have the ability to sift through mountains of data to help derive useful insights and estimate the potential impact of a variety of possible future events.
|Earthquake activity
The invention of the seismograph in the late 19th century provided scientists, and later insurers, with hard data about earthquakes as they happened. The U.S. Geological Survey (USGS) was established in 1879, the bill authorizing its creation signed by President Rutherford B. Hayes. Its initial mandate was to improve topographic mapping of the United States. In the 138 years since its creation, the USGS has developed high-resolution maps of nearly the entire country and provided detailed information about landslide potential, liquefaction susceptibility and soil conditions (along with numerous other types of data).
This data, along with probabilistic estimates of the frequency and magnitude of future earthquakes, is fed into sophisticated algorithms that provide insurers and reinsurers with estimates of the losses they can expect from future earthquake events. Gone are the days when underwriters could keep track of the buildings they insured by sticking push-pins into a map. Today's fast-paced business climate requires a much more sophisticated approach to accumulation management and risk analysis.
|Predicting the flu
Catastrophe models don't stop with earthquakes. The National Oceanic and Atmospheric Administration (NOAA) uses computer models to help predict weather patterns — movement of the jet stream, hurricanes, tornadoes, severe thunderstorms and the like. Statistical models are also employed in the analysis of sea surface temperatures, sea level rise and ocean currents. The Centers for Disease Control and Prevention (CDC) uses complex epidemiological models to help predict the spread of diseases such as influenza, viral hemorrhagic fevers, HIV and many more. These models also help epidemiologists determine which strains of influenza to include in each year's flu vaccines.
|CAT model uses
Insurance underwriters and actuaries use catastrophe (CAT) models to help them convert immense amounts of individual data points into actionable information. The models provide a framework for ratemaking, capital allocation, decisions about which lines of business to target or avoid, and even how much reinsurance a company should purchase. It's important to bear in mind, however, that these models make no guarantee of accuracy.
There is no way to precisely and accurately predict when the next major earthquake will strike the San Francisco Bay Area, for example. What the models offer is a range of potential outcomes based on the best available science, which can be used in conjunction with each organization's risk appetite to make data-driven decisions about business practices.
Using these statistical CAT models is relatively straightforward (exactly how the models work is a topic for another article). For an earthquake model, the first step is geo-coding and hazard retrieval. The street address for each risk is entered into the model, which converts each location to latitude/longitude coordinates. The model then overlays the USGS soil, landslide and liquefaction maps on top of the geo-coded locations and notes the known conditions at each one. Distance to known faults is calculated and weighted by each fault's expected return period and the magnitude of earthquake each fault segment is expected to produce.
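As a rough illustration of that first step, here is a minimal sketch in Python. The geocoder and hazard-lookup functions are hypothetical stand-ins (no vendor's pipeline is this simple), but they show the shape of the hazard profile a model assembles for each insured location.

```python
# Illustrative sketch only: geocode_address() and lookup_hazards() are
# hypothetical stand-ins for a commercial geocoder and USGS hazard layers.
from dataclasses import dataclass

@dataclass
class HazardProfile:
    latitude: float
    longitude: float
    soil_class: str          # site/soil classification from the soil map
    liquefaction: str        # liquefaction susceptibility category
    landslide: str           # landslide potential category
    nearest_fault_km: float  # distance to the closest mapped fault

def geocode_address(street_address: str) -> tuple[float, float]:
    """Hypothetical geocoder: convert a street address to lat/lon."""
    raise NotImplementedError("replace with a real geocoding service")

def lookup_hazards(lat: float, lon: float) -> dict:
    """Hypothetical query against soil, liquefaction, landslide and fault layers."""
    raise NotImplementedError("replace with real hazard-map queries")

def build_hazard_profile(street_address: str) -> HazardProfile:
    lat, lon = geocode_address(street_address)
    layers = lookup_hazards(lat, lon)
    return HazardProfile(latitude=lat, longitude=lon,
                         soil_class=layers["soil_class"],
                         liquefaction=layers["liquefaction"],
                         landslide=layers["landslide"],
                         nearest_fault_km=layers["nearest_fault_km"])
```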
|How the magic happens
All of this data is coupled with expectations about ground motion attenuation (how ground motion is dampened or increased by various soil conditions and distance from the epicenter). Then the magic happens — the model developer's "secret sauce" involves the algorithms that convert all of these disparate pieces of data into expected loss information. Each model uses a different algorithm, the exact details of which are highly protected proprietary information.
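While each vendor's algorithm is proprietary, the general shape of the calculation can be sketched. The attenuation and damage functions below are purely illustrative assumptions, not any model's actual equations: shaking grows with magnitude, decays with distance, is adjusted for site conditions, and is then passed through a vulnerability curve to yield an expected loss on the insured value.

```python
import math

def ground_motion(magnitude: float, distance_km: float, site_factor: float) -> float:
    """Toy attenuation relation (illustrative only): shaking grows with magnitude,
    decays with distance, and is amplified or dampened by soil conditions."""
    base = math.exp(0.8 * magnitude) / (distance_km + 10.0) ** 1.3
    return base * site_factor

def damage_ratio(shaking: float) -> float:
    """Toy vulnerability curve: fraction of building value lost as shaking increases."""
    return min(1.0, shaking / (shaking + 50.0))

def expected_loss(insured_value: float, magnitude: float,
                  distance_km: float, site_factor: float) -> float:
    """Expected ground-up loss for one location in one scenario event."""
    return insured_value * damage_ratio(ground_motion(magnitude, distance_km, site_factor))

# Example: a $2M building on soft soil (site_factor 1.4), 25 km from a M7.0 rupture
print(round(expected_loss(2_000_000, 7.0, 25.0, 1.4)))
```

A real model runs a calculation like this across thousands of simulated events and every location in the portfolio, then aggregates the results.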
For end users, whether underwriters, actuaries or senior executives, the real trick is determining how to interpret the loss estimates, and how to craft guidelines and business practices that maximize profit while keeping overall risk within established tolerances. The models can't tell whether a major earthquake, hurricane, tornado or pandemic will occur in a given time period. All they can offer are estimates of the financial impact of these events should they transpire, along with educated guesses about the likelihood of such events occurring within the given time horizon.
Output from an earthquake analysis would include an estimate such as, "There is a 0.4 percent chance of an earthquake occurring within the policy year of such magnitude as to cause $X (or greater) losses to the company after deductibles and reinsurance are applied." This can also be thought of as the 250-year return period, or the one in 250-year event. With floods, we often hear reference to the 100-year flood, which really means flooding that has a 1 percent chance of occurring each year. (It does not mean that such a flood happens only once in a century.)
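The arithmetic linking those two framings is just the reciprocal, plus a compounding step for multi-year horizons; a quick sketch:

```python
def return_period_years(annual_exceedance_prob: float) -> float:
    """Return period is the reciprocal of the annual exceedance probability."""
    return 1.0 / annual_exceedance_prob

def prob_at_least_one(annual_prob: float, years: int) -> float:
    """Chance of seeing at least one such event over a multi-year horizon."""
    return 1.0 - (1.0 - annual_prob) ** years

print(return_period_years(0.004))             # 0.4% annual chance -> 250-year event
print(return_period_years(0.01))              # 1% annual chance   -> 100-year flood
print(round(prob_at_least_one(0.01, 30), 2))  # ~0.26 over a 30-year span
```

That last figure is why the "100-year" label misleads: over a typical 30-year mortgage, the odds of at least one such flood are roughly one in four.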
|Better understanding loss exposures
It's the "or greater" bit of the model output that causes sleepless nights for risk managers. Perhaps the most useful benefit of a CAT model is being pushed out of one's comfort zone and forced to consider the potential outcomes of very low probability, high severity events. Failing to consider the small possibility of such an event can expose an organization to the risk of ruin if actual losses come in significantly higher than expected.
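A toy simulation (with made-up frequency and severity assumptions, not output from any real model) illustrates why the tail, rather than the average, drives the worry: the mean annual loss can look manageable while the 1-in-250-year outcome is many times larger.

```python
import random

random.seed(42)

def simulate_annual_loss() -> float:
    """Toy catastrophe year: usually no event, occasionally a very large loss."""
    if random.random() < 0.02:                   # assume a 2% annual chance of a major event
        return random.lognormvariate(17.0, 1.0)  # heavy-tailed severity (illustrative)
    return 0.0

losses = sorted(simulate_annual_loss() for _ in range(100_000))
mean_loss = sum(losses) / len(losses)
loss_250yr = losses[int(len(losses) * (1 - 1 / 250))]  # 99.6th percentile of annual loss

print(f"Mean annual loss:   {mean_loss:,.0f}")
print(f"1-in-250-year loss: {loss_250yr:,.0f}")
```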
The ever-growing quantity of data available in the world today suggests that dependence on statistical analysis will only increase. As the insurance industry continues down this path of better understanding loss exposures and enterprise-wide risk management, expect the continuous evolution of CAT modeling and data analytics to help plot the course.
Michael Brown ([email protected]) is vice president and property department manager at Golden Bear Insurance Company.