Navigating the IoT data flood

With the volume of data available, insurers must distill it into usable nuggets of information.


The Internet of Things (IoT) is changing the nature of risk assessment in real time. As more and more devices begin transmitting information about their locations, movements and patterns of use, a wealth of data is accumulating about the environmental and human factors that contribute to the likelihood of a loss event taking place. It’s an embarrassment of riches from an actuarial standpoint, but the data is proliferating so rapidly, and from so many sources, that one could be forgiven for feeling overwhelmed by the challenge of putting it all to good use.

Perhaps that is why, according to a 2016 EY survey of C-level executives across multiple industries, only 36% of respondents from the insurance industry claimed that their companies could use insights from new data sources to boost customer value. Of the seven industries EY surveyed, the insurance sector ranked dead last on this question. How can that be? Surely the ability to use real information rather than calculated probability to determine premiums, prevent fraud and settle claims faster must be an advantage.


Using data to manage risks

So why doesn’t the insurance industry see this as an opportunity? One of the biggest reasons is the sheer variety of data sources. IoT data comes to the world of insurance in myriad forms – from driving habits revealed by onboard diagnostic systems to utility consumption patterns gathered by “smart” thermostats and meters. All of it is potentially valuable, but it comes in disparate formats and from different providers.

Making sense of it and extracting real value requires a solution that can standardize and rationalize the data, leading EY to conclude that “legacy system limitations and the variety and volume of new data requires an overall ecosystem approach” rather than an attempt to make sense of it all with one devilishly overcomplicated system.
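To make the standardization step concrete, here is a minimal sketch of that "ecosystem" idea: mapping records from two hypothetical providers into one common schema before any downstream system sees them. The field names, provider labels and unit conversions are illustrative assumptions, not any real vendor's API.

```python
# Illustrative sketch: normalizing IoT records from two hypothetical
# providers into a single common schema. Field names ("vin", "meter_no",
# etc.) are assumptions for illustration only.

def normalize_telematics(record: dict) -> dict:
    """Map a hypothetical onboard-diagnostics record to the common schema."""
    return {
        "source": "telematics",
        "device_id": record["vin"],
        "timestamp": record["ts"],
        "metric": "speed_kph",
        "value": record["speed_mph"] * 1.60934,  # standardize on one unit
    }

def normalize_smart_meter(record: dict) -> dict:
    """Map a hypothetical smart-meter reading to the same schema."""
    return {
        "source": "smart_meter",
        "device_id": record["meter_no"],
        "timestamp": record["reading_time"],
        "metric": "energy_kwh",
        "value": record["kwh"],
    }

# Registry of per-provider normalizers: adding a new data source means
# adding one mapping function, not reworking downstream systems.
NORMALIZERS = {
    "telematics": normalize_telematics,
    "smart_meter": normalize_smart_meter,
}

def ingest(provider: str, record: dict) -> dict:
    """Route a raw record through the normalizer for its provider."""
    return NORMALIZERS[provider](record)
```

Whatever the source, every record leaves `ingest` with the same five fields, which is what lets a core system treat driving data and utility data uniformly.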


For the successful insurer of tomorrow, this will demand the ability to distill that high-volume raw data into meaningful nuggets, such as scores, factors and indicators. Core systems can then integrate data from an ever-broadening variety of sources into a centralized system capable of translating it all into product development, underwriting and claims value.
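A toy sketch of that distillation step: collapsing a stream of raw driving events into a single indicator a core system can consume. The event types, weights and scoring band are invented for illustration; a real actuarial model would be far more involved.

```python
# Illustrative sketch: distilling raw telematics events into one
# risk indicator. Event names and weights are invented assumptions,
# not an actuarial model.

# Heavier weights for events more predictive of loss (assumed).
EVENT_WEIGHTS = {
    "hard_brake": 2.0,
    "speeding": 1.5,
    "night_trip": 1.0,
}

def driving_score(events: list[dict], miles: float) -> float:
    """Return a 0-100 score; higher means lower observed risk."""
    if miles <= 0:
        return 0.0
    penalty = sum(EVENT_WEIGHTS.get(e["type"], 0.0) for e in events)
    # Normalize penalties per 100 miles driven, then clamp to the band.
    return max(0.0, 100.0 - penalty * 100.0 / miles)
```

The point is the shape of the pipeline, not the numbers: thousands of raw events in, one comparable score out, ready for underwriting or pricing systems.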

Legacy systems built on outdated models of historical data and risk probability simply can't keep up. The world of "the new insurance" will be supported by flexible, modern platforms that can take data from as many sources, and in as many formats, as required, and allow carriers to put the valuable information they extract from the bigger picture into action.

The carriers that don’t see this as an opportunity to plan for the future now won’t just find themselves drowning in data — they’ll be washed away completely.

Jeff Wargin is vice president product management – policy and platform at Duck Creek Technologies. Contact him at jeffrey.m.wargin@duckcreek.com.