How choosing rating accuracy wins the day in insurance
Companies that learn how to rate their exposures more accurately will succeed in the long term.
Data has been a focal point for the industry for some time, and that only seems to be accelerating. Whether we’re talking about new data sources, using third-party data to avoid asking insureds for information, looking at how we structure our data models and warehouses (or lakes, but hopefully not swamps!), implementing data governance or something else, data is as much a buzzword as embedded, AI, IoT or anything else.
Unlike many of those buzzwords, however, data is very much a reality, and has been since the beginning of the industry. After all, the entire premise of pricing insurance is born from data.
As much as we’ve focused on new sources and new ways of making decisions from data, we also often find ourselves leaning on the crutches of old data, even when we know it isn’t serving us, or can’t serve us as well as newer alternatives.
The time has come to up our data game.
The comfort of flawed data
I have had several conversations with carriers about the threat to their ability to use credit scores in their rating models. As some states, like Washington, restrict the use of credit due to its inherent bias, carriers increasingly find themselves backed against a wall.
On the one hand, the introduction of credit to rating models gave one of the biggest lifts to accuracy and profitability in lines like personal auto. On the other hand, credit has been proven to be a biased measure. When we know that something we rely on heavily is biased, we should seek to retire it, since bias leads to sub-optimal decision-making in every case.
Countering this concern, though, is the portfolio approach to underwriting in lines where credit is used: the population can be priced so that individual-level biases are smoothed out and offset across the book as a whole.
That’s fine at a portfolio level, but not at a risk level.
Individual risk accuracy beats portfolio underwriting every time
The rise of Progressive in the early 2000s taught us that pricing accuracy at the individual risk level always wins.
Progressive developed granular, individualized pricing in a space that was very much written on a portfolio basis. This meant they knew the specific rate to charge to take on a risk and, equally important, the rate at which they should walk away from one.
This strategy helped them grow nearly sevenfold over the first two decades of the 2000s while posting a combined ratio just over 94, as the personal auto space as a whole wrote at or above 100.
The lesson here is that, as much as we talk about smoothing losses and having a balanced portfolio so the overall performance makes up for highs and lows, those who price individual risk accurately are the actual winners.
You cannot do that with flawed data.
Navigating changes we can make today
While replacing credit may be a hurdle many carriers aren’t ready to face, and a viable alternative (or set of alternatives) has perhaps not yet presented itself, there are other areas where we continue to make avoidable sub-optimal decisions, choosing to depend on our crutches despite their inferiority.
One example is in the geospatial and mapping area. Today, we have access to powerful mapping capabilities that let us pinpoint the location of insured properties, and the buildings on them, better than ever before. That location is the foundation of key rating drivers like risk assessment and reconstruction cost estimates, and is therefore critical to get right.
This comes down to a technology called “geocoding”: the ability to enter an address and receive back a coordinate representing that address’s location, a deceptively complex challenge. Historically, a series of imprecise methods has been used to estimate the location (the “geocode”) for an address. The first was “street-segment geocoding,” which returns a point along the road; it worked well for navigation but did a poor job of identifying where the building sits on the property. The industry then evolved to “parcel-based geocoding,” which returns a point at the center of the legal land parcel; an improvement over street-segment geocoding, but often still not an adequate representation of where the building (or buildings) actually stand on a property.
This brings us to the gold standard, “building-based geocoding”: the ability to enter an address and receive back the precise location of each building on that property. The industry has come to agree that building-based geocoding is not only ideal, but essential to properly understanding exposure.
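To make the distinction concrete, here is a minimal sketch of how a quoting pipeline might gate on geocode precision. The field names and precision labels are hypothetical, not any particular vendor’s API:

```python
# Minimal sketch of gating a quote on geocode precision. The precision
# labels here are hypothetical, not a specific geocoding vendor's API.
from dataclasses import dataclass

@dataclass
class Geocode:
    lat: float
    lon: float
    precision: str  # "street_segment" | "parcel_centroid" | "building_rooftop"

def suitable_for_rating(geo: Geocode) -> bool:
    """Only a rooftop-level point reliably places the insured structure."""
    return geo.precision == "building_rooftop"

pin = Geocode(lat=47.6062, lon=-122.3321, precision="parcel_centroid")
print(suitable_for_rating(pin))  # False: a parcel centroid may miss the building
```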
A map of a property I reviewed recently makes it easy to understand why. It showed both a parcel-based geocode and a building-based geocode, including the building footprint. The parcel-based geocode sat a safe distance from a flood zone drawn on the map, but only because the carrier’s mapping solution had placed the “pin” near the front of the plot. Looking at the building footprint, you could see that the back portion of the property, and the building itself, overlapped significantly with the flood zone. Had we underwritten off the parcel-based geocode, we would have missed the flood exposure and offered an inadequate rate for that risk.
If, instead, the flood zone did not overlap the actual building footprint but a parcel-based geocode suggested it did, we would over-price the risk and perhaps lose it to a competitor who caught the nuance.
In either case, we would not be rating accurately. That may all even out at the portfolio level, but, as we saw with Progressive above, that is not the path to winning.
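A minimal geometry sketch, using the open-source shapely library with invented coordinates, shows how the two geocoding approaches diverge in exactly this situation:

```python
# Invented coordinates for illustration; the point is the logic, not the map.
from shapely.geometry import Point, Polygon

# A flood zone covering the back half of the plot
flood_zone = Polygon([(0, 60), (100, 60), (100, 120), (0, 120)])

# Parcel-based geocode: a single pin near the front of the plot
parcel_pin = Point(50, 20)

# Building-based geocode: the actual footprint, set back on the lot
building_footprint = Polygon([(30, 40), (70, 40), (70, 80), (30, 80)])

print(parcel_pin.within(flood_zone))              # False -> exposure missed
print(building_footprint.intersects(flood_zone))  # True  -> exposure caught
```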
New solutions solve old problems
Unfortunately, we’ve not had much of an option for dealing with this problem other than deploying underwriters or analysts to study maps in depth, which is both costly and time-consuming. And in a world of instant quoting, that manual approach is not a viable alternative to rating inaccurately.
Luckily, though, new tools are available now. In this specific context, Canadian geospatial technology company Ecopia recently created the only complete map of every building in the U.S. — laying the foundation for a true building-based geocoding engine, empowering carriers to solve this problem.
Even though the optimal solution now exists, not everyone sees this as a problem worth worrying about. Today, many carriers still use parcel-, street- or even ZIP code-based geocoding in their underwriting. They know the inaccuracy is there and have averaged out their pricing to try to compensate, just as we have done in similar situations with fraud we cannot identify and control, subrogation we don’t pursue, and more. Because we’ve been living with this inaccuracy, we presume it to be acceptably small, or we resign ourselves to it as par for the course and move on with our day.
We know it isn’t good enough, but we accept it, even though something better is within reach if we are willing to put in the effort to change.
How big is the problem, really?
Before accepting defeat, we really should be asking ourselves, “How much better could that ‘something better’ be?”
Comparing Ecopia’s maps to traditional solutions, we find that, for any given address, traditional geocoding methods return a point on top of the building only 58% of the time, whereas Ecopia returns a point on top of the building 97% of the time. Clearly, this is not a small issue. Applied to the U.S. homeowners insurance market, that accuracy difference equates to $47.2 billion of premium at risk. To make the scale more tangible: for a carrier with 100,000 homes insured, roughly $54.4 million of premium would be based on an inaccurate understanding of the exposure.
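The math behind those figures is straightforward. Here is a simplified sketch, with the average premium an assumed round figure backed out of the example rather than a quoted statistic:

```python
# A simplified illustration of the scale involved. The average premium
# below is an assumed figure, not a quoted statistic; actual books vary.

traditional_hit_rate = 0.58  # point lands on the building (traditional)
building_hit_rate = 0.97     # point lands on the building (building-based)
print(f"Accuracy gap: {building_hit_rate - traditional_hit_rate:.0%}")  # 39%

homes = 100_000
avg_premium = 1_295  # assumed average annual homeowners premium (USD)
miss_rate = 1 - traditional_hit_rate  # 42% of addresses mis-located

premium_at_risk = homes * miss_rate * avg_premium
print(f"Premium on mis-geocoded exposures: ${premium_at_risk:,.0f}")
# -> $54,390,000, roughly the $54.4 million cited above
```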
Premium based on inaccurate information is top-line revenue with a high risk of a negative bottom-line result. And given that the U.S. homeowners market has seen a combined ratio of 103.7 over the last five years, this is clearly not just a concern but a reality. Nor is it just a homeowners issue; it affects commercial property and other lines of business as well, meaning the scale of the problem is even larger.
Better mapping presents outsized ROI opportunity
Mapping, as with many other data solutions we depend on, can be quite ingrained in our systems and processes, making replacing sources or approaches seem like a Herculean task. While the task may not be small, the technological changes of recent years mean it is no longer the barrier we once thought it to be. APIs and cloud-based solutions mean we can change which pipes are connected where faster and more cheaply than in the past, and even deploy multiple technologies in a parallel, cascading fashion. They also mean we can test and learn quickly and with low risk, an approach solutions like Ecopia are designed for through their proof-of-concept and pilot offerings.
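As a sketch of what that cascading pattern can look like, with hypothetical geocoder stubs standing in for real API calls:

```python
# Sketch of a cascading geocoder: try the most precise source first and
# fall back. The geocoder functions are hypothetical stubs, not real APIs.
from typing import Optional

Coordinate = tuple  # (lat, lon)

def building_geocoder(address: str) -> Optional[Coordinate]:
    return None  # stub: simulate a miss so the cascade falls through

def parcel_geocoder(address: str) -> Optional[Coordinate]:
    return (42.3601, -71.0589)  # stub: fallback parcel-centroid point

def cascade(address: str, geocoders: list) -> Optional[Coordinate]:
    """Return the first successful geocode in priority order."""
    for geocode in geocoders:
        point = geocode(address)
        if point is not None:
            return point
    return None

print(cascade("12 Main St", [building_geocoder, parcel_geocoder]))
```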
And, unlike some solutions where the impact may be muted, the pervasive nature of the problem here (traditional geocoding putting the point on the building only 58% of the time) creates the kind of ROI dynamics we rarely come across in insurance.
We can look at Progressive’s experience as a proxy to size the potential bottom-line benefit of this one improvement in mapping accuracy. Progressive has run a roughly 8-point combined ratio advantage over the industry so far this century through superior rating accuracy. Applying that advantage to the roughly $54.4 million of mis-rated premium at our 100,000-home insurer represents a $4.4 million annual underwriting profit lift.
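As a simplified sketch, applying that advantage to the mis-rated premium from the earlier example:

```python
# Simplified illustration: Progressive's roughly 8-point combined-ratio
# advantage applied to the ~$54.4M of mis-rated premium estimated above.
combined_ratio_advantage = 0.08
mis_rated_premium = 54_400_000  # from the earlier sketch (USD)

profit_lift = combined_ratio_advantage * mis_rated_premium
print(f"Annual underwriting profit lift: ${profit_lift:,.0f}")
# -> $4,352,000, roughly the $4.4 million cited above
```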
But the savings don’t stop there. Once you have a truer picture of the actual exposure, you can start to apply other advanced data solutions to it, increasing your accuracy further. This also opens carriers up to engaging with insureds on a risk prevention and management basis, improving book performance and driving retention. Rather than a crutch, we now have a flywheel of returns.
And, while you could deploy these tools without accurate underlying maps, much of that investment would be wasted and the benefits would be muted because they would be applied on top of an inaccurate foundation.
Mapping a path ahead
Mapping is just one example of the impact of choosing accuracy over the more typical “we’ve always done it this way, and we’ve been fine” approach. There are many other examples around us already, and more will come (including, I’m sure, for replacing credit). It is imperative that we not fall back on the comfortable way we have used portfolio averages to mute the negative impacts of flawed data, or focus only on the cost and effort of changing how we work when the upside of doing so is so high.
After all, this isn’t about pockets of savings here or there. It’s about the core insight the industry should take from Progressive’s experience.
The company that rates more accurately will win.
Bryan Falchuk is the managing partner of Insurance Evolution Partners.