Post-pandemic P&C insurance: How to avoid past playbooks

The COVID-19 pandemic catalyzed new economics and business models that will require fresh data insights. Here's what that means for insurance.


A lot has been written about how COVID-19 is testing the very fundamentals of our economies, societies and communities.

What has received relatively less attention is how different this disruption was from past financial catastrophes, and how unconventional the recovery is expected to be.

Several aspects of the current situation are different from recent recessions. First, the obvious: The cause of this recession is biological, not a fundamental loss of investor confidence in the markets. Second, the scale and speed of the downturn have been stunning; every market economy has plunged to record depths at record speed. Third, some sectors have been hit far harder than others; a number of studies have pointed to the rapid growth of the online grocery sector alongside the near-shutdown of airlines and movie theaters. Finally, and perhaps most consequentially, our actions during this crisis have greatly modified entire business ecosystems, and this will have a lasting impact even after a coronavirus vaccine is widely available.

Individuals and firms might become comfortable paying a premium for these ‘new ecosystem business models,’ which include strategies that were unheard of even a few months back, such as remote employees, drone deliveries, and online schooling and classes. Past playbooks won’t be able to predict which of these strategies will become a prevalent part of “the new normal.”

Dealing with the economic repercussions of this crisis will require:

  1. Heavy dependence on data and analytics, and
  2. Tracking a very different set of metrics than the ones traditionally used.

Today’s challenge can be formulated as one of filling information gaps: Which of the countless data artifacts are the right ones to use, and which algorithms are the most relevant for generating the right insights?

Here’s how this dynamic will impact the insurance industry.

The challenges currently faced by insurers

During the crisis, many general insurance companies are managing the following issues:

  1. Customer relief: Which businesses can be given an extension on premium payment terms?
  2. Increased fraud: Which customers are more likely than others to file fraudulent claims?
  3. Growth and risk assessment: Which geographic areas/segments should be the focus for growth and retention?
  4. Marketing and distribution strategy: When, how and where to start increasing spending on advertising and distribution?

To answer all of these questions, insurers need to determine the core business issues and then prioritize their resources wisely. The drivers behind the new risks have changed significantly, and so have the data and algorithms needed to discern them.

Insurer response framework

One of the most critical aspects of insurance is the assessment of risk. An insurer’s survival depends on how accurately it can determine key risk factors and then price them accordingly. Since small and medium enterprises are on the front lines of any economic slowdown, this framework has been created with them in mind. Its objective is to expand the scope of risk assessment (using data and analytics) and then use that assessment to formulate a response strategy for the risks an insurer already has on its books.

Step 1: Get the right data to develop a true 360-degree view of risk.

Insurers have traditionally relied on insights from brokers and regional managers for risk assessment. In fact, underwriting and pricing models often use nothing more than the industry or class-code classification to gauge risk and determine the price. This often understates the true risk of the entity.

Instead, determining true risk requires a 360-degree view:

  1. Include more factors that are likely to have an impact and that are readily available. For instance, territorial analytics can bring in factors that are published only at a coarse level of granularity (e.g., the unemployment rate at the country/state level) and are therefore rarely used in the analysis. Breaking a macro indicator down to more granular detail can be achieved by assumption-based statistical analysis: using “Exit Rate,” “Industry Composition” and “Workforce Composition,” insurers can estimate job-loss numbers for a given Local Government Area (LGA), as sketched in the first code example after this list.
  2. Include dynamic data elements that reflect the fast-changing nature of the risk. Risks are time-dependent. This is especially true during this pandemic, in which the medical risk within an LGA can improve significantly in a matter of days, leading to greater economic activity. Consequently, using “stale” sources of data will misrepresent the nature of the risk. The higher the publishing frequency and the more granular the data, the greater its predictive power.
  3. Unlock the true value trapped in in-house, unstructured data. Insurers are beginning to pay heed to the massive potential of unstructured data: comments, requests and feedback from customers, broker comments and adjuster annotations all serve as excellent sources of information. Claims notes, for instance, are often lengthy and neglected (until an investigation prompts their use). But as a source of information for predictive analytics, they offer a level of detail not typically captured by structured fields. Structured data might categorize the cause of loss as “Water Damage,” or, if one is lucky, as “Water Damage: Dishwasher leakage,” but it will not yield the make and year of the dishwasher. Should there be a systemic problem or design flaw with a particular machine, insurers will not know it and will keep walking into risks that could have been flagged earlier. (See the second code example after this list.)
  4. Look to specific providers for known specialized risks. New sources of data are increasingly being used to pinpoint specialized risks, and, if done well, early access to this data can provide a notable competitive edge. A good example is the “hinterland” effect. Imagine two otherwise similar restaurants located 250 miles apart. One is closer to a train station and, within its hinterland, might see significantly higher occupancy. Insurers need to use such information to ascertain whether they are covering a business for the right risks.
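As a rough illustration of the disaggregation described in point 1, here is a minimal Python sketch that allocates job losses to LGAs from an assumed per-industry exit rate and workforce composition. All column names, rates and figures are invented for illustration and are not an actual model.

```python
# Hypothetical sketch: disaggregating job losses to Local Government
# Areas (LGAs) from per-industry exit rates and workforce composition.
# All names and numbers below are invented for illustration.
import pandas as pd

# Assumed per-industry "exit rate": share of jobs lost in the downturn.
exit_rate = {"hospitality": 0.30, "retail": 0.15, "professional": 0.05}

# Assumed workforce composition by LGA and industry (headcounts).
workforce = pd.DataFrame(
    [
        ("LGA-A", "hospitality", 12_000), ("LGA-A", "retail", 8_000),
        ("LGA-A", "professional", 20_000),
        ("LGA-B", "hospitality", 3_000), ("LGA-B", "retail", 5_000),
        ("LGA-B", "professional", 2_000),
    ],
    columns=["lga", "industry", "employees"],
)

# Estimated job losses per LGA = sum over industries of
# employees * industry exit rate.
workforce["est_losses"] = workforce["employees"] * workforce["industry"].map(exit_rate)
print(workforce.groupby("lga")["est_losses"].sum())
```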
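And for point 3, a minimal sketch of pulling an appliance’s make and year out of free-text claims notes. A production system would use a trained entity-recognition model; the brand list, pattern and sample notes here are assumptions for illustration only.

```python
# Hypothetical sketch: extracting appliance make and year from
# free-text claims notes with simple pattern matching. The brand
# list and sample notes are invented for illustration.
import re

KNOWN_BRANDS = ["Bosch", "Miele", "Whirlpool"]
YEAR_PATTERN = re.compile(r"\b(?:19|20)\d{2}\b")

notes = [
    "Water damage to kitchen floor. Source: Bosch dishwasher (2016 model), hose failure.",
    "Leak traced to dishwasher under sink; unit approx 2012, brand unknown.",
]

for note in notes:
    make = next((b for b in KNOWN_BRANDS if b.lower() in note.lower()), None)
    year = YEAR_PATTERN.search(note)
    print({"make": make, "year": year.group(0) if year else None})
```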

Step 2: Get the most out of the data using the right analytical toolkit.

Many carriers have traditionally used either judgement-based approaches or simplistic analytical models to make decisions. Going forward, predictive information will need to be gleaned from new sources of fast-moving data, and simple models will not suffice. Insurers will need to take the following concrete steps:

First, improve feature engineering to get the right insights from the data. At times, the data suffers from quality or accessibility issues, and machine learning techniques such as automated feature engineering become crucial to ensure the analysis doesn’t discard relevant variables. Here’s how (a short sketch follows this list):

  1. Different variables can be combined to create more meaningful and predictive factors. For instance, the interaction between a restaurant’s occupancy rate and its number of employees can better predict demand and operating efficiency. Knowing that demand/footfall is higher helps insurers set the right coverage for the business; a higher sum insured would be appropriate for such an establishment. Business owners at times underplay demand/footfall to reduce premiums. Operating efficiency also indicates how many people work in the restaurant, a key factor in workers’ compensation, and this too is often under-reported to avoid higher premium costs.
  2. The unemployment rate (UR) on its own might not be a good predictor of GDP, but a two-period lag of UR can show a strong correlation with GDP.
  3. The probability of recovery (“Exit Rate”) of a business can be estimated by contrasting the decay rates of recently opened businesses with historical trends. Insurers can use this information to make tactical as well as strategic decisions.
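A minimal sketch of the first two feature types above, assuming a small pandas frame; the column names and values are invented for illustration.

```python
# Hypothetical sketch: an interaction feature and a lagged feature.
# All columns and values below are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "occupancy_rate": [0.8, 0.6, 0.9, 0.7],    # restaurant occupancy
    "n_employees":    [12, 5, 20, 9],
    "unemployment":   [5.1, 7.8, 9.2, 8.5],    # quarterly UR, %
    "gdp_growth":     [0.4, -2.1, -7.0, 1.2],  # quarterly GDP growth, %
})

# Interaction feature: a rough proxy for demand and throughput.
df["occupancy_x_staff"] = df["occupancy_rate"] * df["n_employees"]

# Lag feature: UR two periods back, to be tested against current GDP.
df["unemployment_lag2"] = df["unemployment"].shift(2)

print(df[["occupancy_x_staff", "unemployment_lag2", "gdp_growth"]])
```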

Second, determine which parts of the current methodology will yield the biggest impact if improved. Many insurers feel that the analytics behind every process has room for improvement. However, they are unsure how to act, because it is not clear where the biggest ‘bang for the buck’ lies. At times this also depends on the capability of the existing tech infrastructure, such as the ability to rapidly integrate machine learning solutions with visual dashboards. Without this clarity, the experience is frustrating for executives.

Here are a couple of areas where insurers can generate the greatest value:

Claims triage. Most insurers use rules-based models or handlers’ experience to manage their existing claims process. For instance, they have rules to triage a given claim into low, medium or high complexity using the information available at First Notice of Loss (FNOL). Such models are not very effective across different core coverages because of the issues below (a sketch of a learned alternative follows the list):

  1. Data standardization: The same variables are defined differently across coverages. For example, ‘burn on left arm’ for CTP coverage might be coded differently than for workers’ compensation.
  2. Lack of a re-triaging framework: Models seldom use data that becomes available after FNOL to reassess the initial triage results.
  3. Limited supporting framework: The right triage is only the first step in the claims analytics process; every insurer also needs a supporting framework to better manage its highly complex claims and make its operating model effective.
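As a hedged sketch of what a learned alternative could look like: a classifier fitted on FNOL-time fields that is simply re-scored once post-FNOL data updates the inputs. The features, labels and model choice are assumptions, not a production design.

```python
# Hypothetical sketch: a learned FNOL triage model that is re-scored
# as post-FNOL data arrives. Features, labels and the model choice
# are assumptions; random data stands in for real claims history.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X_fnol = rng.random((500, 3))  # e.g. [claimed_amount, injury_flag, prior_claims]
y = rng.integers(0, 3, 500)    # 0 = low, 1 = medium, 2 = high complexity

model = GradientBoostingClassifier().fit(X_fnol, y)

# Initial triage at FNOL...
new_claim = np.array([[0.7, 1.0, 0.2]])
print("FNOL triage:", model.predict(new_claim))

# ...then re-triage once post-FNOL information (adjuster notes,
# medical reports) updates the feature values.
updated_claim = np.array([[0.9, 1.0, 0.6]])
print("Re-triage:", model.predict(updated_claim))
```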

Claims leakage: Quantify claims leakage across different claim segments using data-driven analysis, and prioritize interventions not only by the size of the leakage but also by ease of implementation. Many insurers want interventions that integrate with their existing claims centers, and they want to set priorities accordingly. But other interventions do not require such integration and provide the insurer a better return on limited investment. For example, a machine-learning-based legal auditing tool can flag overcharging and does not require claims-center integration.
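As a minimal sketch of the rate-comparison core such an auditing tool might start from; the machine-learning layer is omitted here, and all roles, rates and invoice lines are invented for illustration.

```python
# Hypothetical sketch: flagging legal invoice lines billed above the
# agreed panel rate. A real tool might add an anomaly-detection model
# on top; all rates and fields below are invented for illustration.
AGREED_RATES = {"partner": 450.0, "associate": 250.0, "paralegal": 120.0}
TOLERANCE = 1.10  # flag anything more than 10% over the agreed rate

invoice_lines = [
    {"role": "partner", "hours": 3.0, "rate": 520.0},
    {"role": "associate", "hours": 5.5, "rate": 255.0},
    {"role": "paralegal", "hours": 2.0, "rate": 118.0},
]

for line in invoice_lines:
    if line["rate"] > AGREED_RATES[line["role"]] * TOLERANCE:
        overcharge = (line["rate"] - AGREED_RATES[line["role"]]) * line["hours"]
        print(f"FLAG {line['role']}: potential overcharge ${overcharge:.2f}")
```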

Insurers’ path to recovery

Using the right data and methodologies will be one of the key differentiators in shaping the response to the current economic slowdown. Insurers can leverage new sources of data and advanced algorithms to create customer-centric responses. Recovery has already started in certain sectors with the lifting of lockdowns and the easing of restrictions on movement. It remains to be seen how fast the economy will recover, but it is clear that certain sectors, and some companies within those sectors, will be the flag bearers of recovery. The path to recovery for insurers is relatively straightforward: identify these flag bearers before the competition does.

Rajnil Mallik (rajnil.mallik@exlservice.com) is head of Analytics Services, Australia & APAC, at EXL Service. Neeraj Sibal (Neeraj.sibal@exlservice.com) is EXL’s assistant vice president of Analytics Services, Australia & APAC. Devesh Nagar (Devesh.nagar2@exlservice.com) is a project manager with EXL Service.

These opinions are the authors’ own.
