How insurers can build confidence in their AI
Increased transparency and control allow insurers to become more sophisticated about how they use artificial intelligence.
Data and analytics have been central to the insurance industry for decades.
In fact, insurers paved the way for many aspects of data-based decision-making.
The increasing maturity of artificial intelligence techniques and the explosion of data from new sources, such as wearables and IoT devices, are now turbo-charging AI opportunities.
However, at many insurers, enthusiastic experimentation with AI has not yet translated into large-scale adoption and impact. While there are multiple reasons for this, including data availability and legacy systems, a key challenge has been the difficulty of convincing stakeholders about the accuracy, trustworthiness and relevance of AI model outputs.
Fortunately, technology solutions are emerging to help address that challenge and help insurers capture value at scale from AI.
AI holds great promise for insurers
Underwriting and pricing risk in insurance has always been based on analyzing historical data, such as mortality tables or P&C loss records. In this sense, actuaries can perhaps be seen as precursors to modern-day data scientists.
Two major changes are turbo-charging the current AI opportunity for insurers:
- First, there is an explosion in the data available to support decision-making. Some of this, such as the detailed information filed with regulators by publicly listed companies, was always available as unstructured documents, but recent advances in extracting structured data from such documents have made them much more accessible. Other types of data, such as satellite imagery, have simply become more widely available. Finally, altogether new categories of data have emerged, such as personal health data from wearable devices or machine health data from Internet of Things (IoT) networks.
- Second, the maturity of AI techniques has increased dramatically in the last few years, and computing power and data storage have become substantially cheaper. All of this has enabled insurers to build meaningful predictive models using the massive amounts of data they can now access.
As a result, a rich set of opportunities has opened up to use data and advanced modeling techniques more effectively. AI models can now supplement traditional models for underwriting and pricing risk, automate several aspects of claims management and the associated fraud assessment, and automate parts of customer service and back-office operations. Personal lines carriers such as auto and health insurers have been at the forefront of early adoption, but even more specialized segments, such as large corporate risk and specialty insurers, have been experimenting with AI.
The business impact of AI
A critical stumbling block inhibiting widespread AI adoption among insurers has been the difficulty of convincing stakeholders about the accuracy, trustworthiness and relevance of AI model outputs. Two factors are driving this lack of trust:
- The workings of many AI models are far more opaque than those of traditional models. The most common types of AI algorithms (machine learning) create models based on the data used to train them. As a result, the data scientist’s understanding of how the model actually arrives at its conclusions can be limited. This poses a challenge in convincing stakeholders (business line owners, risk and compliance teams, auditors, regulators and customers) of the models’ suitability for large-scale use.
- AI models’ dependence on the training data can make them prone to particular weaknesses. Compared to traditional models, AI models are more likely to ‘overfit’, or exaggerate, historical trends. They may lose their predictive accuracy more easily in the face of changes in input data, such as those triggered by the pandemic. Finally, they can exacerbate existing biases present in the training data, such as biases regarding gender or race. (A simple illustration of such performance decay follows this list.)
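To make the overfitting and drift concern concrete, here is a minimal sketch, using synthetic claims data and the open-source scikit-learn library, of the kind of check a data science team might run: compare the model’s discriminatory power on its training window with its power on a later, shifted window. The feature names and the simulated shift are hypothetical; a material gap between the two numbers is the classic warning sign.

```python
# Minimal sketch: checking for overfitting / performance decay on out-of-time data.
# Synthetic data and hypothetical feature names; not a production workflow.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

def make_claims(n, shift=0.0):
    """Generate toy claims; `shift` simulates how claim behavior changed after an event
    such as a pandemic (larger amounts, longer reporting delays, weaker old patterns)."""
    claim_amount = rng.gamma(2.0, 1500, n) * (1 + shift)
    days_to_report = rng.poisson(5 + 10 * shift, n)
    prior_claims = rng.poisson(0.4, n)
    # The link between reporting delay and fraud weakens in the shifted period (concept drift)
    logit = (0.0004 * claim_amount + (0.08 - 0.1 * shift) * days_to_report
             + 0.5 * prior_claims - 2.5)
    fraud = rng.random(n) < 1 / (1 + np.exp(-logit))
    X = np.column_stack([claim_amount, days_to_report, prior_claims])
    return X, fraud.astype(int)

X_train, y_train = make_claims(20_000)              # historical training window
X_recent, y_recent = make_claims(5_000, shift=0.4)  # later window with shifted behavior

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

auc_train = roc_auc_score(y_train, model.predict_proba(X_train)[:, 1])
auc_recent = roc_auc_score(y_recent, model.predict_proba(X_recent)[:, 1])
print(f"AUC on training window: {auc_train:.3f}")
print(f"AUC on recent (shifted) window: {auc_recent:.3f}")
# A material drop on the recent window signals overfitting and/or drift in the input data,
# and warrants investigation before the model's outputs are relied upon.
```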
Insurance regulators have recognized these risks and have set expectations for insurers to implement AI responsibly. Examples include the principles on AI governance published by the U.S. National Association of Insurance Commissioners in 2020 and by the European Insurance and Occupational Pensions Authority in 2021.
Overcoming obstacles
How can insurers overcome these obstacles and realize the full value of AI?
While these concerns about the risks of AI are valid, they should not become a reason to stall greater adoption of AI. The last couple of years have seen a tremendous increase in the awareness of such risks in the technology, actuarial and data science communities and among risk and compliance teams. Internal policies and standards have been defined for responsible use of AI.
Most importantly, the technology to analyze AI models, accurately explain the underlying drivers of their outputs, and monitor and troubleshoot their performance on an ongoing basis has made rapid strides. For example, it allows insurers to do the following (the first and third capabilities are sketched in code after this list):
- Create transparency around the key drivers of the model’s predictions and decisions (“Why did this claim get flagged as potentially fraudulent?”)
- Assess any potential biases in model predictions, and their root causes (“Do female drivers get better car insurance rates? If so, is this legal in this jurisdiction?”)
- Monitor model and data stability over time, trigger alerts when they breach pre-defined thresholds and identify the root causes of such instability (“Is our triage model flagging fewer claims for manual review this month? If so, what is driving that change?”)
- Determine parts of the population for which the model may be unreliable (“Are the model’s predictions for over-60 white-collar workers based on too few data points?”)
- Identify potential changes in data quality that might impact the predictive accuracy of the model (“Are there sudden changes in the flood risk data provided periodically by our third-party partner using satellite images?”)
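The sketch below illustrates the first and third capabilities on a toy fraud-triage model, using synthetic data and scikit-learn. The per-claim explanation uses a crude perturbation-based attribution (production tooling typically relies on more rigorous methods such as Shapley values), and drift is tracked with a population stability index on the model’s scores. All feature names, the simulated shift and the alert threshold are hypothetical.

```python
# Minimal sketch of two monitoring capabilities, using scikit-learn on synthetic data.
# All feature names, thresholds and the fraud-triage model itself are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# --- A toy fraud-triage model --------------------------------------------------
n = 10_000
X = pd.DataFrame({
    "claim_amount": rng.gamma(2.0, 1500, n),
    "days_to_report": rng.poisson(5, n),
    "prior_claims": rng.poisson(0.4, n),
})
logit = 0.0004 * X["claim_amount"] + 0.08 * X["days_to_report"] + 0.5 * X["prior_claims"] - 2.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# --- 1. "Why did this claim get flagged as potentially fraudulent?" ------------
# Crude perturbation-based attribution: how much does the fraud score fall when each
# feature is replaced by a typical (median) value?
def explain_claim(model, claim, baseline):
    score = model.predict_proba(claim)[0, 1]
    contributions = {}
    for col in claim.columns:
        perturbed = claim.copy()
        perturbed[col] = baseline[col]
        contributions[col] = score - model.predict_proba(perturbed)[0, 1]
    return score, contributions

baseline = X.median()
claim = X.iloc[[0]]                      # one incoming claim
score, contribs = explain_claim(model, claim, baseline)
print(f"Fraud score: {score:.2f}")
for name, delta in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:>15}: {delta:+.3f}")

# --- 2. Monitor score stability over time and alert on drift -------------------
def psi(expected, actual, bins=10):
    """Population stability index between two probability-score distributions."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    e = np.histogram(expected, edges)[0] / len(expected) + 1e-6
    a = np.histogram(actual, edges)[0] / len(actual) + 1e-6
    return float(np.sum((a - e) * np.log(a / e)))

scores_reference = model.predict_proba(X)[:, 1]
X_this_month = X.assign(claim_amount=X["claim_amount"] * 1.3)  # simulate shifted intake
scores_this_month = model.predict_proba(X_this_month)[:, 1]
drift = psi(scores_reference, scores_this_month)
print(f"Score PSI vs. reference period: {drift:.3f}")
if drift > 0.2:  # a common rule-of-thumb threshold for material drift
    print("ALERT: score distribution has shifted; investigate data and model root causes.")
```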
This increased transparency and control over AI are allowing insurers to become more sophisticated about how they use it. For example, instead of attempting to replace existing underwriting and pricing models, they are using AI to augment such expert processes by suggesting potential new risk factors for actuaries to consider. Similarly, AI is being used to gather and process new sources of data, converting unstructured inputs such as free text from application forms or satellite images into usable, structured data points (a simple sketch of such extraction appears below). Those insurers that take appropriate action to ensure their AI models are both useful and trustworthy will gain a meaningful competitive advantage over those who shy away from these approaches.
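As a rough illustration of the unstructured-to-structured idea, the sketch below uses an off-the-shelf extractive question-answering model from the open-source Hugging Face transformers library to pull a few structured fields out of free-text application wording. The fields, example text and confidence threshold are hypothetical, and real pipelines would add validation and human review.

```python
# Minimal sketch: turning unstructured application text into structured data points
# with an off-the-shelf extractive question-answering model (Hugging Face `transformers`).
# The fields, wording and confidence threshold are hypothetical.
from transformers import pipeline

extractor = pipeline("question-answering")   # downloads a default extractive QA model

application_text = (
    "The insured property is a two-storey timber-framed warehouse built in 1987, "
    "with a total floor area of roughly 4,200 square metres and a sprinkler system "
    "installed throughout."
)

fields = {
    "construction_year": "In what year was the property built?",
    "floor_area": "What is the total floor area of the property?",
    "construction_material": "What material is the property constructed from?",
}

record = {}
for field, question in fields.items():
    result = extractor(question=question, context=application_text)
    # Keep the answer only if the model is reasonably confident; otherwise leave it blank
    record[field] = result["answer"] if result["score"] > 0.3 else None

print(record)  # e.g. {'construction_year': '1987', 'floor_area': '4,200 square metres', ...}
```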
David Marock (david_marock@yahoo.com) is a senior advisor and Shameek Kundu (shameek@truera.com) is chief strategy officer and head of financial services at TruEra.
These opinions are the authors’ own.