Why insurers must prioritize responsible AI use

To be successful with AI while also maintaining regulatory compliance, insurers must ensure the models they deploy are effective, trustworthy and valuable.

Adoption of AI has exploded in recent years as organizations recognize the power of machine learning (ML) models in delivering greater business insights and fueling competitive advantage. When applied and monitored properly, AI has proven to drive real, tangible value — particularly in the insurance industry, where AI is being used to support risk assessment and fraud detection. In fact, PwC’s survey of U.S. insurers found that nearly half said AI has improved decision-making. Additionally, almost two-thirds of respondents using AI reported that it helped create better customer experiences.

Unfortunately, some insurers are still facing obstacles when attempting to leverage AI models that will enhance their business operations and bottom lines. The same PwC survey found that potential new cybersecurity and privacy threats topped the list of AI worries, cited by 42% and 36% of survey respondents, respectively. As a result, risk and regulatory teams often hit the brakes on new initiatives.

Insurers can’t afford to slow down their AI investments, especially given estimates that the technology could deliver up to $1.1 trillion in value to businesses annually. To be successful with AI while also maintaining regulatory compliance, insurers must ensure the models they deploy are effective, trustworthy and valuable. That is, every insurer must prioritize responsible AI.

Mission critical

Responsible AI is a governance framework that documents how an organization addresses the ethical and legal challenges around AI. Creating an enterprise-wide focus on responsible AI enables the design, development and deployment of ethical models. When AI is developed responsibly, users can govern and audit models to understand how and why a decision is made. The result is greater visibility into AI pre- and post-deployment, AI models that continuously perform as expected in production, and outcomes that are fairer and more reliable. For insurers, this means being able to confidently leverage AI to assess applicant risk, detect claim fraud and reduce human error in underwriting processes.
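The article stays at the governance level, but a minimal sketch can show what "understanding how and why a decision is made" looks like for a single prediction. The sketch below uses a linear underwriting model, where each feature's contribution to the decision is directly computable; the feature names, data and model are hypothetical illustrations, not anything the article or any vendor prescribes.

```python
# Hypothetical sketch: attributing one underwriting decision to its inputs.
# With a linear model, each feature's contribution to the decision's
# log-odds is simply coefficient * (value - background mean).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["prior_claims", "annual_mileage", "vehicle_age", "years_licensed"]
X = rng.normal(size=(500, len(features)))          # illustrative data
y = (1.5 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = X[0]
baseline = X.mean(axis=0)
# Contribution of each feature relative to an "average" applicant.
contributions = model.coef_[0] * (applicant - baseline)

print(f"P(high risk) = {model.predict_proba([applicant])[0, 1]:.2f}")
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name}: {c:+.3f}")
```

Explainability tooling such as the open-source SHAP library generalizes this kind of per-decision attribution to arbitrary models.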

The challenge in bringing responsible AI practices to life is that many organizations lack the technical skills and capabilities to effectively monitor and manage their AI models. Data scientists and MLOps engineers may attempt to build their own tools, but only tools that incorporate model monitoring, explainability and analytics can provide full transparency throughout the AI lifecycle. Without them, models remain opaque, causing major headaches for highly regulated insurers, who need to be able to explain how and why their AI decided to, for example, deny a claim.

This level of transparency is especially critical for AI applications that determine risk. Insurers need to know with certainty that models aren’t turning prospective policyholders away based on non-relevant socioeconomic or demographic factors. Additionally, as cybersecurity and privacy threats grow, insurers need models that keep data secure and pinpoint vulnerabilities faster. Solutions that support responsible AI practices, such as Model Performance Management (MPM), will enhance fraud prevention models, giving insurers greater visibility into suspicious activity, reducing false positives, and enabling faster resolution times that reduce costs and improve customer trust.
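As one hedged illustration of the fairness concern above: a basic check compares a model's approval rates across a protected group that the model is not supposed to use. The data, group labels and 5% review threshold below are hypothetical, and real fairness reviews consider several metrics.

```python
# Hypothetical sketch: demographic parity check on model decisions.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
approved = rng.random(n) < 0.7        # model decisions (True = approve)
group = rng.integers(0, 2, size=n)    # protected attribute, excluded from the model

rate_a = approved[group == 0].mean()
rate_b = approved[group == 1].mean()
parity_gap = abs(rate_a - rate_b)     # demographic parity difference

print(f"approval rate, group A: {rate_a:.3f}")
print(f"approval rate, group B: {rate_b:.3f}")
print(f"parity gap: {parity_gap:.3f}")
if parity_gap > 0.05:                 # illustrative review threshold
    print("Flag model for fairness review.")
```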

MPM tracks and monitors the performance of ML models through all stages of the model lifecycle, supporting AI explainability and transparency. From a single view, organizations can record their ML models and training data; conduct automated assessments of feature quality, bias and fairness; require human approval of models prior to launch; continuously stress-test models; and gain actionable insights to improve models as data changes. The technology makes it easier for insurers to study model outcomes, track performance, and identify potential issues like model drift and bias.

The most sophisticated solutions offer easy-to-understand visualizations and reports that highlight key metrics, making it simpler for even non-technical users to safely leverage AI. In an industry like insurance, where it’s not always clear how to quantify bias or ensure fairness, MPM provides transparency and objective insight, alleviating the concerns of risk and regulatory teams, who can now see and explain why decisions are being made and confirm that models adhere to both company policies and broader compliance mandates.
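The article doesn't specify how MPM tools detect drift, but a minimal sketch of one common check, comparing a feature's live distribution in production against its training distribution, might look like the following. The claim-amount feature, distributions and alert threshold are hypothetical choices, not any specific vendor's method.

```python
# Hypothetical sketch: flagging feature drift with a two-sample
# Kolmogorov-Smirnov test, which asks whether two samples plausibly
# come from the same distribution.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
# Claim amounts seen at training time vs. recent production traffic.
training_claims = rng.normal(loc=1_200, scale=300, size=5_000)
live_claims = rng.normal(loc=1_450, scale=320, size=1_000)

statistic, p_value = ks_2samp(training_claims, live_claims)
print(f"KS statistic = {statistic:.3f}, p = {p_value:.4g}")

if p_value < 0.01:                    # illustrative alert level
    print("Drift alert: claim-amount feature has shifted; review the model.")
```

In production, an MPM platform would run checks like this continuously across all model inputs and surface the results in dashboards rather than print statements.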

With a cultural shift toward responsible AI and the power of MPM to bring those practices to life, insurers can feel confident that every model they deploy is trustworthy, accurate and compliant.

Kirti Dewan is the vice president of Marketing at Fiddler AI. The company’s platform offers a unified environment that provides a common language, centralized controls and actionable insights to operationalize machine learning and artificial intelligence. Dewan has over 20 years of experience in the technology sector. These opinions are the author’s own.
