Taking artificial intelligence out of the black box

Artificial intelligence is coming, but insurers need to do more than just plug in a computer.


For property & casualty (P&C) insurers, and especially for chief claims officers, artificial intelligence (AI) and machine learning are quickly becoming powerful new tools for reducing losses from fraudulent claims.

Between 10% and 40% of P&C claims have some element of fraud, depending on the country, and those losses total some $40 billion in the U.S. alone. Human detection catches only a tiny share, and computerized rules-based approaches do better. Machine learning models, however, are destined to launch an entirely new era.

Machine learning is more than just using powerful computers to run millions of “if-then” rules-based calculations per second. Instead, machine learning systems train themselves on historical claims and incorporate feedback from human experts, producing artificial intelligence with a better chance of staying current with the latest fraud schemes.
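To make that distinction concrete, here is a minimal sketch in Python. It is purely illustrative: the feature names, thresholds and synthetic data are assumptions, not Metromile’s model. A rules engine flags claims that match hand-written conditions, while a machine-learning model is fitted to historical claims that experts have already labeled and scores new claims from the patterns it finds.

    # Hypothetical contrast between a hand-written rule and a learned model.
    # Feature columns, thresholds and data are illustrative, not a real fraud model.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)

    # Synthetic historical claims: [claim_amount, days_since_policy_start, prior_claims]
    X = rng.uniform([100, 1, 0], [50_000, 3_650, 10], size=(5_000, 3))
    y = rng.integers(0, 2, size=5_000)   # stand-in "expert labels": 1 = fraud, 0 = legitimate

    def rules_engine(claim):
        """Static 'if-then' rule: flag large claims filed soon after the policy starts."""
        amount, days_since_start, prior_claims = claim
        return amount > 20_000 and days_since_start < 30

    model = GradientBoostingClassifier().fit(X, y)   # learns patterns from the labeled history

    new_claim = np.array([[25_000, 14, 2]])
    print("rule says flag:", rules_engine(new_claim[0]))
    print("model fraud score:", model.predict_proba(new_claim)[0, 1])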

That said, most insurance companies haven’t grasped the human and organizational challenges of integrating AI into the real world. These systems impact people — employees as well as customers — and create new issues that chief claims officers and senior management ignore at their peril.

Because machine learning can instantly analyze every single incoming claim for “signals” of possible fraud, its sheer speed can trigger a considerable increase in labor-intensive investigations. As a result, a plug-and-play deployment of a fraud detection system could lead to an increase in investigation costs and undermine any benefits.

Human expertise remains essential

Insurance companies need to think hard about those trade-offs. How much is a company willing to spend on additional investigations to stop 60% of total fraud? How about 90%?

At Metromile, we built a return-on-investment calculator into our fraud detection system that estimates the trade-offs and allows insurers to adjust their thresholds for fraud alerts accordingly. But this isn’t simply a financial decision. There are also human factors to consider. A spike in the number of false alerts, for example, could alienate customers and even prompt front-line managers to dismiss recommendations — reducing the system’s efficacy.
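A back-of-envelope version of that trade-off, sketched in Python below, shows how moving the alert threshold shifts the balance between fraud caught and investigations triggered. The function, figures and costs are hypothetical assumptions, not Metromile’s actual ROI calculator.

    # Hypothetical back-of-envelope version of a fraud-alert ROI calculation.
    # All figures (fraud loss, investigation cost, scores, labels) are illustrative.

    def alert_roi(scores, is_fraud, threshold, avg_fraud_loss=8_000, cost_per_investigation=400):
        """Estimate net savings of investigating every claim scored above `threshold`.

        scores   -- model fraud scores for a batch of historical claims (0..1)
        is_fraud -- ground-truth labels for those same claims (True/False)
        """
        flagged = [s >= threshold for s in scores]
        investigations = sum(flagged)
        fraud_caught = sum(1 for f, y in zip(flagged, is_fraud) if f and y)

        savings = fraud_caught * avg_fraud_loss
        cost = investigations * cost_per_investigation
        return {"investigations": investigations,
                "fraud_caught": fraud_caught,
                "net_savings": savings - cost}

    # Lowering the threshold catches more fraud but triggers more investigations.
    scores   = [0.05, 0.20, 0.35, 0.55, 0.70, 0.92]
    is_fraud = [False, False, True, False, True, True]
    for t in (0.9, 0.5, 0.3):
        print(t, alert_roi(scores, is_fraud, t))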

AI systems must also earn employee trust. Humans need some intuitive understanding of how a system makes its predictions. Unfortunately, the more sophisticated a model is, the harder it is to understand. Machine learning systems don’t search for intuitive or common-sense strategies; they optimize for accurate predictions, which emerge from the interplay among hundreds of variables.

If these systems are to gain acceptance in the real world, humans need to understand and even challenge these recommendations. An analogy here is to the credit scores compiled by agencies like Experian or TransUnion. Credit bureaus offer explanations of how they set a score, so people can find out how a card balance or a missed payment may have affected their ratings. Insurers need to offer something similar, or they risk infuriating customers and perhaps human adjusters with opaque decisions. To provide that kind of clarity, however, insurers need to invest in data analysis. We made that investment, but it’s one that’s easy to overlook.
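One simple way to produce that kind of “reason code,” sketched below under the assumption of a linear model such as logistic regression, is to report each feature’s contribution to a single prediction: its coefficient times the feature’s value. The feature names and data are hypothetical; this is not a description of Metromile’s system.

    # Hypothetical "reason codes" for one prediction from a logistic-regression model.
    # Features and data are illustrative only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    feature_names = ["claim_amount_z", "days_to_report_z", "prior_claims_z"]

    rng = np.random.default_rng(1)
    X = rng.normal(size=(1_000, 3))                      # standardized features
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=1_000) > 1).astype(int)

    model = LogisticRegression().fit(X, y)

    claim = np.array([2.1, -0.3, 1.4])                   # one incoming claim
    contributions = model.coef_[0] * claim               # per-feature effect on the log-odds

    for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
        print(f"{name:>18}: {c:+.2f}")
    print("fraud score:", model.predict_proba(claim.reshape(1, -1))[0, 1].round(3))

More complex models need more elaborate attribution techniques, but the goal is the same: a short, ranked list of the factors that pushed a claim’s score up or down, which an adjuster or a customer can actually read.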

Finally, insurance executives need to remember that human expertise remains essential. Even the best AI system will encounter baffling “edge cases” that require the textured experience of human experts. Besides, any AI-based fraud detection model will degrade over time unless it keeps up with new situations. At Metromile, for example, our Detect system asks for human guidance whenever it runs into a case where it has low confidence in reaching the right answer. A human expert may well recognize a fraudster’s new tactics, which is crucial to keeping an AI-based system current.
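The routing logic behind that kind of human-in-the-loop setup can be as simple as the sketch below. The thresholds and queue names are hypothetical, not the Detect system’s actual logic.

    # Hypothetical routing rule for sending low-confidence cases to a human expert,
    # in the spirit of the human-in-the-loop review described above.

    def route_claim(fraud_score, low=0.30, high=0.85):
        """Auto-handle confident predictions; queue ambiguous ones for human review."""
        if fraud_score >= high:
            return "refer to special investigations unit"
        if fraud_score <= low:
            return "pay through normal claims workflow"
        return "queue for human expert review"

    for score in (0.05, 0.55, 0.95):
        print(score, "->", route_claim(score))

The expert’s decision on each queued case then becomes a fresh training label, which is one way to keep the model from degrading as fraud tactics change.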

There is no question artificial intelligence offers breakthroughs in fraud reduction. Insurers need to understand, however, that it still needs to fit in and evolve with the messy and constantly changing world of people.

Amrish Singh (asingh@metromile.com) is the vice president, enterprise product at Metromile. The views expressed here are the author’s own. 
