How to more accurately manage risk with geolocation data
Having accurate, up-to-date geolocation data is more critical than ever as property risk shifts to reflect a changing climate.
Real estate professionals know the value of “location, location, location,” and so do property insurers.
That’s because calculating property risk depends on location information — geolocation data. With high-quality geolocation data, insurers can more easily underwrite the optimal policy.
But how can insurers manage their risk assessments using geolocation data? And, perhaps more importantly, how do they effectively determine which locations are susceptible to specific risks as climate change and other shifts accelerate?
Defining geolocation and its importance
Now more than ever, property and casualty (P&C) underwriters must prioritize geolocation and, more specifically, geocoding. Geocoding converts an address into latitude and longitude coordinates, which can be as precise as rooftop-level coordinates for an individual address, as broad as a ZIP+4 (typically 10 to 20 addresses), or as coarse as the center of an entire ZIP Code.
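To make those precision tiers concrete, here is a minimal sketch (in Python, with hypothetical names and coordinates) of how a geocode result might carry both its coordinates and its precision level, since a rooftop match and a ZIP Code centroid should never be treated interchangeably when scoring risk:

```python
from dataclasses import dataclass
from enum import Enum

class GeocodePrecision(Enum):
    ROOFTOP = "rooftop"        # exact coordinates of the individual address
    ZIP_PLUS_4 = "zip+4"       # centroid of roughly 10-20 addresses
    ZIP_CENTROID = "zip"       # center of an entire ZIP Code area

@dataclass
class GeocodeResult:
    address: str
    latitude: float
    longitude: float
    precision: GeocodePrecision

# Hypothetical example: the same street address resolved at two precision levels
rooftop = GeocodeResult("123 Main St, Anytown, US", 29.7604, -95.3698, GeocodePrecision.ROOFTOP)
zip_only = GeocodeResult("123 Main St, Anytown, US", 29.7500, -95.3600, GeocodePrecision.ZIP_CENTROID)
```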
For over a decade, geocoding has aided insurers’ risk assessment processes by providing latitude and longitude coordinates for an address. It’s a great starting point for determining a property’s proximity to hazards such as flooding, which causes some of the costliest damage. Geocoding allows insurers to see whether the property they’re writing a policy for falls within a floodplain. Precision is essential here, as flood risk can vary dramatically between neighboring properties; a difference of a few feet can mean a big difference in the price of a specific policy. Geocoding can also surface historical risk information for a given location, and by gathering more geolocation data, insurers can identify trends that inform future underwriting.
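As an illustration of why rooftop precision matters, the sketch below (Python, using the open-source shapely library, with a simplified polygon and invented coordinates) checks whether two nearby geocodes fall inside a floodplain boundary; a geocode that is off by even a small amount can flip the answer:

```python
from shapely.geometry import Point, Polygon

# Hypothetical floodplain boundary expressed as (longitude, latitude) pairs.
# A real boundary would come from FEMA flood maps or a similar hazard data set.
floodplain = Polygon([
    (-95.370, 29.758),
    (-95.365, 29.758),
    (-95.365, 29.762),
    (-95.370, 29.762),
])

# Two rooftop-level geocodes only a short distance apart (made-up coordinates).
property_a = Point(-95.368, 29.760)   # inside the floodplain polygon
property_b = Point(-95.372, 29.760)   # just outside it

print(floodplain.contains(property_a))  # True  -> elevated flood risk
print(floodplain.contains(property_b))  # False -> lower flood risk
```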
Additionally, it’s critical to remember that geolocation data is dynamic. Territories are redrawn, new housing developments are built, and administrative borders, parcels, postal codes, street names, and topography evolve. These factors mean the data changes constantly and requires updating annually or even quarterly. Even hazard zones evolve. For example, the number of properties with a 1% or greater annual chance of being affected by wildfire is expected to increase sixfold over the next 30 years, according to data from First Street Foundation. This change will undoubtedly have a significant policy impact. Further, new location data sources emerge every year.
Not utilizing this critical information can be a big problem for insurers and their customers. Consider this hypothetical situation: unbeknownst to the insurer, a fleet of public utility trucks and vans is stored in a lot that sits below sea level and within a floodplain. Severe, unpredictable weather destroys the entire fleet, resulting in millions of dollars in damages. A similar real-world situation occurred when a school district parked its buses in a known floodplain. Upon learning of an incoming hurricane, the district moved the buses to another property, which was, unknowingly, also in a floodplain. Despite the district’s best efforts, the entire bus fleet was lost.
Use the right data, the right way
Because geolocation data is so dynamic, harnessing it can be difficult. Many insurers still determine risk by relying on statistics and legacy geocoding solutions. These legacy data sources and tools often provide only general estimates of a property’s physical location and lack the precision to detect new hazards in close proximity. Even for insurers using advanced intelligence platforms, acting on the data can be challenging. One major complication is having large amounts of location data coming from different sources, all of which must be verified as fit for insight. Otherwise, incomplete or poor-quality data can result in an underpriced policy (creating undue risk) or an overpriced policy (risking lost business). This creates a lose-lose scenario for both the insurer and the insured. It’s garbage in, garbage out: inaccurate risk assessment driving today’s estimated $50 billion value-at-risk gap.
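One practical way to verify location data against multiple sources (shown here only as a rough sketch with hypothetical inputs, not any particular vendor’s method) is to geocode the same address through several providers and flag any record where the results disagree by more than a set tolerance:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_meters(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinate pairs, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def sources_agree(candidates, tolerance_m=50):
    """Return True if every geocode candidate falls within tolerance of the first one."""
    (lat0, lon0), *rest = candidates
    return all(haversine_meters(lat0, lon0, lat, lon) <= tolerance_m for lat, lon in rest)

# Hypothetical results for one address from three different sources; the third
# source returned a coarse ZIP-level location roughly a kilometer away.
candidates = [(29.7604, -95.3698), (29.7605, -95.3697), (29.7530, -95.3600)]
if not sources_agree(candidates):
    print("Flag for review: sources disagree on this property's location")
```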
To prevent this, an insurer’s primary tactic is to ensure it uses the highest-quality geolocation data available while also verifying it against a wide variety of sources. Ideally, insurance professionals can tap third-party vendors for data quality, address management, and multi-sourced hazard risk assessment data sets covering any U.S. property. This can include data from postal authorities, census data, utilities, direct marketers, cataloguers, and location technology companies, all of which serve as excellent data sources. The process gives insurers the confidence to make the best possible business decisions, ultimately underwriting policies based on the best data.
Having verified, actionable, up-to-date geolocation data benefits both insurers and the insured, and it reduces the value-at-risk disparity. Insurers are empowered to better assess risk and properly underwrite customer policies, while the insured can better understand their risk and more easily select the right coverage.
Bud Walker (bud.walker@melissa.com) is vice president enterprise strategy at Melissa. Bud manages the strategic vision and next-generation capabilities of Melissa’s data quality tools and services.