Artificial intelligence and the future of insurance

By incorporating AI, insurers are investing in ways to automate claims operations without sacrificing accuracy.

The insurance industry should think of AI as a set of exciting tools to learn about and use to improve business processes. (Photo: Shutterstock)

Artificial intelligence (AI) is changing the world. From game shows, where IBM Watson won at Jeopardy!, to medical advances and business decisions, the implications are staggering. Because insurance company operations are so highly quantifiable, AI will have a transformative impact on the industry. While claims handling has traditionally taken a back seat in technology advances compared to other insurance functions like actuarial, marketing and underwriting, this is starting to change.

Imagine the following scenario: A Category 4 hurricane hits Florida, damaging or destroying property throughout the state. With so many claims to handle, how do overwhelmed adjusters scrutinize all the data, evaluate the claims and make accurate assessments? Sure, many of them have years of experience and can rely on skill and instinct, but is that enough?

Companies know the consequences of an incorrect claim decision can be disastrous for their bottom line. By incorporating AI into their operations, insurers are investing in ways to automate components of claims operations without sacrificing accuracy. These insurers are either buying software or developing their own, allowing adjusters to spend less time evaluating claims. Most of this efficiency will come from automating routine manual tasks, but cognitive tasks are becoming more efficient as well. AI and automation can remove unnecessary human involvement, quickly reporting the claim, capturing the damage, updating the system and notifying the customer. They also reduce fraudulent claims and human error by identifying patterns in claims data.

Claims professionals recognize that AI adds value, helping them better manage their time and use their skills more creatively. They see AI models doing something useful: helping them, not creating more work. A formal AI model helps adjusters quickly identify the most important information in a case and provides recommendations that serve as a valuable starting point. In some cases, it purposely slows down the process, prompting adjusters to think about why the AI made a particular assessment. That said, some of the AI's conclusions need to be refined. There can be important conflicts in which a claims adjuster will back off their position or justify it. If the model and the adjuster disagree on a high case reserve, that conflict draws attention to the specific claim.

The future of predictive analytics 

Companies are also gaining greater insight into the claims process through predictive analytics, which uses advanced statistical modeling, data mining and machine learning to comb through data and forecast future events.
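
To make that concrete, the sketch below shows what a simple claims-severity forecasting model might look like in Python using scikit-learn. The claim attributes (insured age, vehicle age, prior claims, reporting lag) and the data are entirely synthetic and illustrative; they are not drawn from any insurer's actual portfolio.

```python
# Minimal sketch of a claims-severity model using scikit-learn.
# Feature names and data are synthetic and illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical claim attributes: insured age, vehicle age, prior claims, report lag (days)
X = np.column_stack([
    rng.integers(18, 80, n),   # insured_age
    rng.integers(0, 20, n),    # vehicle_age
    rng.poisson(0.3, n),       # prior_claims
    rng.integers(0, 60, n),    # report_lag_days
])
# Synthetic severity, loosely tied to the features plus noise
y = 2_000 + 40 * X[:, 1] + 900 * X[:, 2] + 15 * X[:, 3] + rng.normal(0, 500, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

print("Mean absolute error:", mean_absolute_error(y_test, model.predict(X_test)))
```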

Insurers see huge, unexplored potential for advanced analytics in the claims area, according to a 2017/2018 Willis Towers Watson survey, which found fraud prevention and triage to identify complex claims to be the top applications for development over the next two years.

According to a 2018 McKinsey report, 90% of personal and small business insurance claims processing will be completely automated by 2030. As AI research evolves, the technology will likely mimic more of the cognitive functions associated with the human mind, such as online learning, perceiving context, problem-solving and reasoning, and take on more complex functions, such as setting a case reserve. Someday, AI may even handle first notice of loss, case reserve estimates and initial triage entirely, without the need for claims professional oversight.

Interpretation is key to predictive model applications in claims operations

Predictive models can be applied across the claims management process, including allocation of resources, reserving, settlement values and identification of fraudulent claims. And while prediction is the central goal, interpreting those models can provide key insights into a company's operations and help the organization identify models destined to fail.

Complex decisions require complex models. One machine learning technique suited to very complex tasks is the artificial neural network (ANN). ANNs are the current state of the science for tasks that include speech recognition, natural language processing and computer vision (both image evaluation and video review). Unlike many other machine learning algorithms, ANNs not only learn to make decisions but also learn how best to process information in order to make those decisions. As a result, these models perform well but are very complex and, therefore, very difficult to understand.
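
For readers curious what an ANN looks like in practice, here is a minimal sketch in PyTorch of the sort of small network that might score photos of vehicle damage. The architecture and the three damage classes are assumptions made for illustration, not a production design.

```python
# A small convolutional ANN, sketched in PyTorch, of the kind that might
# classify photos of vehicle damage (e.g., minor / moderate / severe).
# Architecture and class labels are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learn low-level image features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # learn higher-level features
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 3),                             # scores for three damage classes
)

# Forward pass on a random stand-in for a 128x128 RGB photo
image = torch.randn(1, 3, 128, 128)
scores = model(image)
print(scores.softmax(dim=1))   # predicted class probabilities
```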

While it is tempting to rely solely on a model's predictive accuracy, it is often just as important to understand the relationships the model has captured. For example, a modeler might want to know how the AI processes an image of a damaged vehicle to gain insight into what the network relies on to decide a case. In a simple case, maybe the AI is identifying the presence of a specific object in an image, or detecting something about the lighting conditions in the area where an accident occurred. Without characterizing the ANN, we don't really know what it is focusing on.
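
One simple way to begin characterizing such a network is a gradient-based saliency check: measure how sensitive the winning score is to each input pixel, and the largest values point to the regions the model is relying on. The sketch below illustrates the idea on an untrained stand-in model and a random image; in practice, the insurer's trained network and a real claim photo would be used.

```python
# Sketch: a gradient-based saliency check to see which pixels an image
# model relies on. The model here is an untrained stand-in; in practice
# you would load the trained network.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 3),
)
model.eval()

image = torch.randn(1, 3, 128, 128, requires_grad=True)  # stand-in damage photo
scores = model(image)
top_class = scores.argmax(dim=1).item()

# Gradient of the winning class score with respect to the input pixels:
# large magnitudes mark the regions the model is "focusing on".
scores[0, top_class].backward()
saliency = image.grad.abs().max(dim=1).values  # collapse the RGB channels
print(saliency.shape)  # torch.Size([1, 128, 128])
```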

The need to understand predictive models is driven by a variety of internal and external factors. External factors include regulatory requirements, liability issues and customer satisfaction. For example, if the model arrives at a claims decision a client feels is unfair, the insurer needs to be able to explain how it reached that decision. From an internal perspective, management might want to know whether the decision algorithm is consistent with its view of the business.

Ultimately, we want to know how each of the facts about a claim contributed to the final AI decision. This is known as attribution. Attribution requires special techniques for complex models, and for technologies like ANNs, researchers are only beginning to develop practical approaches.
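
For tabular claim facts, one widely used attribution approach is SHAP values, which assign each recorded fact a share of the model's prediction for a single claim. The sketch below is illustrative only: it applies the open-source shap library to a synthetic severity model, and the feature names are assumptions rather than any insurer's actual fields.

```python
# Sketch of per-claim attribution with the SHAP library: how much each
# recorded fact pushed this claim's predicted severity up or down.
# Data and feature names are synthetic/illustrative.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
feature_names = ["insured_age", "vehicle_age", "prior_claims", "report_lag_days"]
X = rng.random((1_000, 4))
y = 3 * X[:, 2] + X[:, 3] + rng.normal(0, 0.1, 1_000)   # synthetic severity

model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])   # attribution for a single claim

for name, value in zip(feature_names, contributions[0]):
    print(f"{name:>16}: {value:+.3f}")
```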

AI or humans?

It is tempting to focus on the limitations of AI and pass judgment on the technology. It is true that researchers are still establishing best practices in AI and troubleshooting along the way, and it may still be too early to let AI make important, life-changing decisions without proper monitoring and governance. But humans have limitations, too: they carry unconscious biases that can adversely affect their decisions, and their beliefs shape expectations, which in turn shape perceptions and conclusions. Humans also tend to make mistakes that AI does not, such as overestimating the likelihood of recent rare events and exhibiting an optimism bias that leads them to believe good outcomes are more likely than bad ones.

Ultimately, AI is far more tangible than human decision-making. An AI model is just computer code and equations, which can be examined and repeated exactly. A human decision-maker is much harder to understand and is probably not consistent over time. If sensitive information, such as race, is being illegally factored into a decision, we can identify the problem in the algorithm and systematically remove that component from the AI decision model.
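
As a simple illustration of that point, the sketch below excludes a sensitive field from a model's inputs before training. Everything in it, including the column names, the outcome variable and the data, is synthetic and hypothetical.

```python
# Sketch: because an AI model is just code, a sensitive field can be
# excluded outright before training and the change verified.
# The DataFrame, column names and outcome are synthetic and illustrative.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
claims = pd.DataFrame({
    "claim_amount": rng.gamma(2.0, 1_500, 2_000),
    "prior_claims": rng.poisson(0.3, 2_000),
    "report_lag_days": rng.integers(0, 60, 2_000),
    "race": rng.integers(0, 4, 2_000),        # sensitive field (encoded)
    "approved": rng.integers(0, 2, 2_000),    # synthetic outcome
})

SENSITIVE = ["race"]
X = claims.drop(columns=SENSITIVE + ["approved"])   # sensitive field never enters the model
y = claims["approved"]

model = GradientBoostingClassifier(random_state=0)
print("Cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```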

Our professional community should think of AI as a set of exciting tools (not a source of competition) to learn about and use to improve business processes.

Change is always difficult, but claims professionals are adjusting. They realize AI will make their jobs more efficient and, ultimately, help them make better decisions, allowing them not only to do their jobs better but also to better serve their customers.

Jason Rodriguez (jason.rodriguez@willistowerswatson.com) is the data science lead in the Insurance Consulting Technology practice at Willis Towers Watson. Opinions expressed here are the author’s own. 
