The race for data granularity in P&C

Discover the five keys to enhancing risk evaluations by leveraging datasets that offer a more detailed view.


Data may be the Holy Grail for insurers, but there is more to a risk score or model these days than meets the eye. Understanding what information feeds that score or model requires digging a bit deeper: Is it the same data a company has been using for years? Or has that data evolved to include new inputs, fresh characteristics and emerging third-party data sources?

Today there is a race among insurers to be as granular as possible. Winning the race depends on enriching existing data with new and emerging third-party data sources. Leading insurers have embraced the exploration of third-party data, constantly testing and integrating new forms such as hazard, aerial imagery, satellite, building and Internet of Things (IoT) data. Many insurers, however, find themselves somewhere in the middle of the pack, not for lack of ambition, but because the process of finding, testing and integrating gets in the way of swiftly consuming third-party data and driving actionable insights.

One of my former colleagues would tell anecdotes about his early days as an electrician in new home construction. He recalled how he had wished that people could see behind the walls so they would appreciate the complexity within them. Think about new homes today versus 20, 10, or even five years ago — everything inside the walls has changed drastically, from materials to smart technology.

The same is true of a risk score or model. Insurers must “look behind the walls” to know what inputs are going into that score and whether those inputs are keeping pace. Is the score smart, or is it just builder grade? Builder grade is not going to compete with the complex scores and models being produced today.

Consider the evolution of territory ratings. Twenty years ago, territories were defined by counties and loss costs were calculated using traditional actuarial practices. Since then, we’ve seen territories shrink from counties to collections of ZIP codes to single ZIP codes, and even down to census block groups. The ability to create a smaller geographic footprint is made possible through advanced predictive models. These models are fed large volumes of risk characteristics, ranging from distance to fire stations, proximity to highways and changes in population density to detailed vehicle features and hundreds of other potential data sources that correlate with losses.
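As a purely illustrative sketch (the input file, column names and the choice of a gradient-boosted model are assumptions, not a description of any particular insurer's rating plan), a granular loss-cost model along these lines might look like this:

```python
# Illustrative sketch only: a loss-cost model trained on granular,
# census-block-group-level risk characteristics. File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

exposures = pd.read_csv("block_group_exposures.csv")   # hypothetical dataset
features = ["dist_to_fire_station_km", "dist_to_highway_km",
            "pop_density_change_5yr", "vehicle_curb_weight_lbs"]
target = "incurred_loss_per_exposure"

X_train, X_test, y_train, y_test = train_test_split(
    exposures[features], exposures[target], test_size=0.25, random_state=0)

model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)

# Relative influence of each granular characteristic on predicted loss cost
for name, importance in zip(features, model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```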

The same is true of geospatial and hazard data. To arrive at a “good” overall property risk score today, it’s necessary to know the structure’s wildfire, flood and hail scores, among others, and beyond that, to have a more granular understanding of what inputs are driving each individual peril rating. Consider, for example, how proximity to fire stations or hydrants impacts a wildfire score. The ability to rate on a by-peril basis has changed the way companies price properties today, leading to better risk selection, rapid premium growth and lower loss ratios.
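As a toy illustration of that by-peril structure, the snippet below assembles an overall property score from individual peril scores, with the wildfire score itself driven by granular inputs such as hydrant proximity. The perils, weights and formulas here are hypothetical, shown only to make the layering concrete.

```python
# Toy sketch of by-peril property scoring; weights and formulas are hypothetical.

def wildfire_score(dist_to_fire_station_km: float, dist_to_hydrant_m: float,
                   vegetation_density: float) -> float:
    """Higher score = higher wildfire risk (0-100 scale)."""
    response = min(dist_to_fire_station_km / 10, 1.0) * 40   # slower response raises risk
    suppression = min(dist_to_hydrant_m / 500, 1.0) * 20     # distant hydrants raise risk
    fuel = vegetation_density * 40                            # 0-1 density of surrounding fuel
    return response + suppression + fuel

def overall_property_score(peril_scores: dict, weights: dict) -> float:
    """Weighted blend of the individual peril scores."""
    return sum(peril_scores[p] * weights[p] for p in peril_scores)

perils = {"wildfire": wildfire_score(6.0, 300, 0.7), "flood": 22.0, "hail": 35.0}
weights = {"wildfire": 0.4, "flood": 0.35, "hail": 0.25}
print(round(overall_property_score(perils, weights), 1))
```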

As a benchmark for what the best are doing on the spectrum of data proficiency, McKinsey & Company states, “Leaders in pricing innovation invest in data infrastructure to better harness internal data and, perhaps more important, data from external sources. Sophisticated insurance carriers evaluate more than 30 new external data sources and then select two to four sources each year that they use to develop new features to embed in their pricing and rating models.”

Comparatively, Insurity’s 2021 Analytics Outlook Report found that most respondents said their underwriters have access to a minimal number of third-party data sources: 57% use five or fewer sources, 26% use between five and ten, and only 10% use more than ten sources.

Here are a few critical elements to keep in mind to achieve a higher level of granularity in risk evaluation and expand the use of third-party data:

1. Prioritize data and tech stack capabilities

Technology is the conduit to third-party data. External data, just like internal data, is only as good as the insight that can be extracted from it. To make data an asset, insurance professionals need to be able to rapidly collect, conform and operationalize it. This requires a modern data platform and API strategy that enables insurance organizations to consume external data, marry it to their own data and drive actionable insights with speed.
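As a rough sketch of what that consumption step can look like, the example below pulls hazard attributes from a hypothetical external API and joins them to internal policy records. The endpoint, authentication, fields and file names are assumptions for illustration, not a real provider's interface.

```python
# Sketch of an enrichment step: fetch hazard attributes from a (hypothetical)
# external API and join them to internal policy records.
import pandas as pd
import requests

policies = pd.read_csv("policies_in_force.csv")  # internal data; hypothetical file

def fetch_hazard_attributes(address: str) -> dict:
    resp = requests.get(
        "https://api.example-hazard-provider.com/v1/property",  # hypothetical provider
        params={"address": address},
        headers={"Authorization": "Bearer <API_KEY>"},           # placeholder credential
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"wildfire_score": 62, "flood_zone": "AE"}

external = pd.DataFrame(
    [{**fetch_hazard_attributes(a), "address": a} for a in policies["address"]]
)
enriched = policies.merge(external, on="address", how="left")  # internal + external view
```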

2. Have a dedicated resource or partner to hunt for new data

Consider dedicating an in-house resource to the constant search for emerging forms of data (even outside the insurance sector) that could provide a competitive advantage. Hunting, gathering and evaluating is extremely difficult as a side gig and requires constant focus and attention. Alternatively, or in conjunction with in-house resources, look for partners who can help supplement the search and continuously provide a pulse on where the market is going and what competitors are doing. Economies of scale are the most logical way to expand an organization’s data footprint. 

3. Refine the process for testing and evaluating new sources

The tools for statistical insight are expanding, but data is the lifeblood that keeps them running. Ensure dedicated data teams have a set process and the ability to constantly test new data sources that could complement or supplement existing data. Data teams should not only be evaluating incremental predictive power, but also identifying backup data sources for current data assets. Again, look to partners who can help evolve risk models quickly and incorporate fresh characteristics. Knowing about data is a great first step. Putting data into action is where it makes a difference.
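One common way to quantify incremental predictive power is to compare a model's cross-validated performance with and without the candidate source. The sketch below does this with a hypothetical dataset and feature names; the metric (AUC) and modeling choice are illustrative assumptions, not a prescribed methodology.

```python
# Sketch of measuring the lift from a candidate third-party data source:
# compare cross-validated AUC with and without the new features.
# Dataset and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

data = pd.read_csv("modeling_dataset.csv")  # hypothetical modeling dataset
baseline_features = ["construction_year", "dwelling_square_feet", "prior_claim_count"]
candidate_features = baseline_features + ["roof_condition_score", "aerial_veg_overhang"]
target = data["had_claim"]

def cv_auc(feature_cols):
    model = GradientBoostingClassifier(random_state=0)
    return cross_val_score(model, data[feature_cols], target,
                           cv=5, scoring="roc_auc").mean()

print("Baseline AUC:        ", round(cv_auc(baseline_features), 3))
print("With candidate source:", round(cv_auc(candidate_features), 3))
```

Even when a candidate source does not move the needle, it may still be worth cataloging as a backup for an existing data asset.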

4. Think proactive, not reactive

How will factors like climate change, remote workforces, political unrest, self-driving cars and electric vehicles impact your portfolio a few years from now? What data can be employed now to get ahead of the curve and more effectively price, mitigate and manage future risk? How can third-party data be used to provide forward-thinking insights into a company’s book of business to more effectively balance growth and profitability? As the landscape continues to evolve and new competitors emerge, keeping pace is vital.

5. ‘Stream’ access to third-party data

As with each of the above points, partnering is key to accelerating every aspect of third-party data integration. Tap into partner ecosystems to enable rapid ingestion of third-party data through providers who already have a variety of leading data partnerships established. This avoids having to build a separate integration for each data relationship in favor of one API that taps into many data providers, including emerging forms of data as they become available.
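Conceptually, the "one API, many providers" pattern can be pictured as a set of provider adapters behind a single interface, so new sources plug in without new integrations. The sketch below is a simplified illustration; the provider classes and returned fields are hypothetical.

```python
# Sketch of "one API, many providers": each data partner sits behind a common
# interface, so enrichment code has a single call path regardless of source.
from abc import ABC, abstractmethod

class HazardDataProvider(ABC):
    @abstractmethod
    def property_attributes(self, address: str) -> dict: ...

class WildfireProviderA(HazardDataProvider):
    def property_attributes(self, address: str) -> dict:
        return {"wildfire_score": 62}   # in practice, would call provider A's API

class FloodProviderB(HazardDataProvider):
    def property_attributes(self, address: str) -> dict:
        return {"flood_zone": "AE"}     # in practice, would call provider B's API

def enrich(address: str, providers: list) -> dict:
    attributes = {"address": address}
    for provider in providers:          # one call path, many data sources
        attributes.update(provider.property_attributes(address))
    return attributes

print(enrich("123 Main St", [WildfireProviderA(), FloodProviderB()]))
```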


I like to think of data today in terms of streaming. It’s become standard to watch any show you desire at any time, across multiple platforms (e.g., Netflix, Hulu, YouTube) and from any device. This same capability should be available to risk professionals, from underwriting to portfolio management, claims and event response. The ability to consume any data, any time, within any user workflow is where data is headed.

In the end, third-party data not only helps insurance professionals innovate and better match rate to risk, but it also helps provide the best possible experience to customers: faster quotes, better rates and coverage, improved disaster response, automated claims, and faster speed to market with new products and services. Data enrichment is a race for granularity. Remember to dig deeper and look behind the walls to uncover nuggets of competitive differentiation. Insurers who do this, who never stop digging and who continually refine their use of third-party data, will find themselves leaders in the race for granularity.

Kirstin Marr is the head of Insurity Analytics, leading the development and operations for Insurity’s portfolio of data and analytics solutions. Kirstin is a recognized thought leader, specializing in data and predictive analytics in the insurance market. Her vision and leadership have enabled insurers to improve performance and meet customer demands for transparency, responsiveness, predictability, and accuracy in risk management and business operations. Prior to this role, Kirstin served as president of Valen Analytics, which was acquired by Insurity in 2017, and previously served as the company’s head of marketing, leading market strategy, brand awareness and innovative thought leadership.

Opinions expressed here are the author’s own.
