Setting guardrails for AI use in P&C insurance

Balancing innovation with regulation: Carriers considering AI for complex workflows must be aware of evolving governance.

The insurance industry must have this key question answered: How will AI regulation bite insurers in the coming years? (Credit: john/Adobe Stock)

The promise and potential of AI for the insurance industry go well beyond the prosaic, such as automating repetitive tasks in customer service. Insurers are extending AI to more complex processes that financially impact consumers, including underwriting and claims.


Despite AI’s potential, many insurance executives are exercising caution as there is a great deal of uncertainty about insurance regulators’ receptivity to AI being a part of those processes. To that end, any insurer considering AI for more complex workflows must be aware of how the regulatory landscape around AI is evolving. Most importantly, they must have this key question answered: How will AI regulation bite insurers in the coming years?

Insurers’ hesitation about AI stems, in some part, from the lack of enforceable AI regulation and the absence of a clear view of what is and is not permissible. At this point, few laws address the use of AI in insurance. In fact, most of the buzz around AI regulation focuses on “frameworks,” which merely lay the groundwork for potential legislation or regulation.

Most of the frameworks that have emerged in recent years follow the path blazed by the National Association of Insurance Commissioners (NAIC), which has monitored AI since at least 2017 through working group and committee discussions channeled through its Innovation and Technology Task Force. These discussions centered on issues that could present unique risks to consumers, such as the potential for inaccuracies, unfair discrimination, data vulnerability, and the lack of transparency and explainability.

NAIC released numerous viewpoints and positions on these risks on a piecemeal basis over the last few years before finally codifying them in the “Model Bulletin on the Use of AI by Insurers,” which was adopted by its membership in December 2023. This bulletin advocated for responsible governance, risk management policies, and procedures to ensure fair and accurate outcomes for consumers. Further, the bulletin points out that any decisions impacting consumers that are made or supported by advanced analytical and computational technologies, including AI, must comply with all applicable insurance laws and regulations, including those governing unfair trade practices. The NAIC cannot pass laws or regulations, but the bulletin can serve as a benchmark for evaluating AI regulatory efforts in process across the U.S. (The accompanying timeline summarizes key state and federal efforts to regulate the use of AI in insurance.)

News from the top

Let’s start this evaluation at the federal level. In November of 2023, a bipartisan group of senators, ranging from Amy Klobuchar (D-MN) to Roger Wicker (R-MS), introduced a bill to establish a “framework to bolster innovation while bringing greater transparency, accountability, and security to the development and operation of the highest-impact applications of AI.” As of this writing, this bill has not emerged from its committee.

To complement the legislative effort, the Biden Administration issued an executive order in October 2023 that set a foundation for future AI regulation by requiring federal departments and agencies to act on several mandates for responsible AI use. Taken together, the bill and the executive order signal that the U.S. government intends to monitor AI developments closely.

As of mid-summer 2024, the federal government’s positions on this topic appear aligned with the NAIC’s, so any federal influence would likely not cut against the grain set by the NAIC.

State of the union

Clearly, insurers’ ability to incorporate AI into insurance decisions will largely rest with state regulators allowing it. State regulators, as is their purview under federalism, have been locked in on these AI issues for a few years and have forged ahead with enforceable laws and decrees.

Some states, such as Connecticut, began to move well in advance of NAIC’s bulletin. In April 2022, Connecticut’s Insurance Department (CID) issued its own bulletin addressing discriminatory practices in insurance resulting from the use of big data. In March 2024, the CID issued another bulletin that essentially endorsed the December 2023 NAIC bulletin and went further by requiring a yearly AI certification to ensure insurers do not use AI to synthesize data in a discriminatory manner.

To enforce compliance, CID envisions compelling insurers to complete an annual AI certification attesting that any AI system used to make or support decisions related to regulated insurance products complies with applicable anti-discrimination laws. Eleven other states took the same path as Connecticut and effectively signed on to the latest NAIC bulletin, using its principles as the basis for their own bulletins and guidance with some minor modifications.

Some states, however, chose their own path. For example, in 2021, Colorado passed a law designed to protect consumers from unfair discrimination in insurance practices. The law specified that insurers must be accountable for testing their “big data” systems, which include external consumer data and information sources, algorithms, and predictive models, to ensure fairness. While Colorado’s efforts have largely focused on life insurance, there is no reason to suspect that P&C insurance will avoid similar scrutiny.

Other states, such as California and New York, chose to deviate from the NAIC and issued their own AI guidance. California’s bulletin focused on allegations of racial bias and unfair discrimination in marketing, rating, and claims practices whereas New York’s regulators focused on underwriting and pricing and ensuring that using AI did not yield any “unfair adverse effects.”

To be sure, while there has been some deviation, the spirit of these states’ bulletins is the same. The common theme among all these states’ approaches is that they seem inclined to allow AI to play a role in the P&C insurance ecosystem, provided consumers are not harmed and there is a full and transparent understanding of how decisions are reached.

Of course, “harm” is always open to interpretation, but at this point, it would seem that every stakeholder (regulator, insurer, insured, prospect) has a vested interest in ensuring fair and transparent insurance decisions.

The bottom line…

So, the high-level answer to the question posed above is that while there may be a lot of noise around regulation, AI is not at risk of being banished; any regulatory bite may be more of a friendly nip to keep insurers on the straight and narrow.

The bottom line is that insurers should feel confident about deploying AI in their more complex operations as long as transparency is paramount and they continue to monitor regulatory developments.

Jay Sarzen

Jay Sarzen is a director in insurance research at Conning, an insurance asset management company. Prior to Conning, he was an executive at Swiss Re, Aite Group, The Hartford and MassMutual.

This piece is published with permission from Conning and may not be reproduced.
