The future of artificial intelligence and insurance sales

AI use may result in significant additional regulatory burdens for insurance producers.

Artificial intelligence has transformed the way that we interact with data and technology, revolutionizing how many industries leverage data to drive sales and growth.

The insurance industry relies heavily on data to market, underwrite and administer insurance products.

Today, insurance agencies and producers are looking to AI as a powerful tool for streamlining business operations and boosting sales productivity.

However, the use of AI raises a host of regulatory concerns, which, in an industry that already has a well-established regulatory framework, may result in significant additional regulatory burdens for insurance producers in the near future.

A look at AI in insurance sales

The use of AI in insurance sales offers insurance producers many of the same advantages seen in other sales-driven industries.

For example, AI can automate mundane and repetitive tasks such as insurance license and continuing education monitoring, data entry and claims tracking, to increase productivity and decrease the time and costs associated with these functions.

Further, chatbots can provide quick and accessible 24/7 customer service, and AI-driven tools can assist in risk management and fraud assessment. Generative AI can even help create marketing materials and blog posts. Overall, AI, when used properly, can help insurance producers be more efficient, decrease costs, and avoid unnecessary human error.

The pitfalls of AI

Despite the numerous benefits, regulators have flagged that the use of AI can result in discrimination, reduce transparency and compromise data security, all of which are hot-button issues in the insurance industry.

Although AI can assist insurance producers in processing large amounts of data at a fast pace, it can also result in disparities in access to, and pricing of, insurance products. Further, although AI can greatly reduce human error, AI is not without error itself.

For example, a study by Vectara suggests that chatbots “hallucinate” false information between 3% and 27% of the time. When processes are automated by AI, or decisions are made based on AI-driven interpretation of data, transparency in the data processing and decision-making process can be difficult to maintain, resulting in decisions and interpretations that are inaccurate or discriminatory. Further, the more data fed to a chatbot or generative AI system, the greater the risk of data leakage (i.e., unintentional exposure of sensitive data or information).

Insurance AI regulation

To date, insurance industry regulation of AI has primarily focused on insurance carriers, and there is little regulation actually restricting how insurance producers are allowed to use AI in insurance marketing and sales.

However, the existing insurance company regulation around AI provides some guidance for insurance producers as they consider implementing AI into their business operations.

NAIC Model Bulletin

In December 2023, the National Association of Insurance Commissioners (NAIC) adopted a Model Bulletin on the use of AI systems by insurers. About 11 states have adopted a version of the Model Bulletin, and more states are expected to do so in the future.

Although the Model Bulletin and its state adoptions do not carry the same weight as a statute or regulation, they are helpful in understanding how regulators are thinking about AI and the direction that the AI regulatory framework may take in the insurance industry.

The Model Bulletin primarily requires that insurance companies maintain a written program that ensures that AI systems are used responsibly. Specifically, the program should be centered on transparency, accountability, and fairness, and should hold insurance companies accountable for the use of AI systems developed by third parties through the establishment of documented risk management and internal controls.

Under the Model Bulletin, regulators are also granted authority to investigate insurance companies’ development and use of AI, including the creation and implementation of an insurance company’s written program for the use of AI.

Case study: Colorado

Colorado is the first state to pass legislation regulating the use of AI in the insurance industry.

Senate Bill 21-169 was signed into law on July 6, 2021. It prohibits insurance companies from using external consumer data and information sources, algorithms and predictive models unless the insurance company controls for, or demonstrates that such use does not result in, unfair discrimination.

Specifically, certain insurance companies are required to: provide information concerning their use of external consumer data and information sources, and their development and implementation of algorithms and predictive models; explain the manner in which the data, information, algorithms, and models are used; establish and maintain a risk management framework; and provide an assessment of the results of that framework for minimizing unfair discrimination.

However, before adopting any such regulations, the Colorado law requires that the Colorado insurance commissioner first hold meetings with stakeholders (e.g., insurance companies, insurance producers, and consumer representatives) to establish rules that properly incorporate the factors and processes relevant to particular types of insurance (such as life and private passenger auto).

The Colorado Department of Regulatory Agencies, Division of Insurance (the Division) announced that it would first focus on the following areas: life insurance underwriting, private passenger auto insurance underwriting, and health insurance.

In respect of life insurance, the Division has promulgated regulations establishing governance and risk management requirements for life insurers using external consumer data and information sources, and algorithms and predictive models.

The Division also is holding stakeholder meetings regarding a draft proposed regulation on algorithm and predictive model quantitative testing in life insurance.

In respect of private passenger auto and health insurance, the Division has recently held stakeholder meetings focused on how the life insurance regulation can be extended to cover these additional types of insurance.

What’s happening in other states?

Although Colorado is so far the only state to adopt formal regulation specifically targeting the use of AI in insurance, other states, including California, New York and Pennsylvania, have recently considered bills addressing the use of AI in insurance. As more states begin to adopt and implement the Model Bulletin, and Colorado continues to push ahead with more expansive regulation, we expect additional states to jump into the fray once regulators have a chance to examine the impact of the Model Bulletin and the Colorado regulation on the use of AI in the insurance industry.

What should insurance agencies do?

Even though the current regulatory focus is on insurance companies, insurance producers are likely to feel the effects of new AI compliance requirements imposed on their carrier partners.

For example, if an insurer is required to create and follow a written AI program focused on mitigating adverse consumer outcomes, insurance producers working with that carrier will likely be required to comply with that same written AI program under the insurer’s supervision. Such programs will likely result in more stringent risk management, data retention, and data security policies for insurance producers. Similarly, although the Colorado regulation is aimed at insurers, it restricts how insurers can use external information and AI, and such restrictions will likely trickle down to the insurer’s insurance producers.

Accordingly, to stay on top of (and even get ahead of) the regulation of AI in the insurance industry, specifically insurance sales, insurance producers should ensure that any AI systems used are trained on unbiased data, establish human checks and balances against inevitable AI failures, maintain strong data security policies and systems, and practice good recordkeeping and transparency in respect of AI use.

JillAllison Opell, a partner with Foley & Lardner in the firm’s New York office, represents insurers and insurance-related entities in all lines of business, including accident, life and health, property and casualty (including pet), surplus lines, travel and reinsurance.

Margaret Brzakala is an associate in the Milwaukee office of the firm. She helps insurers, producers, and other insurance-related entities achieve their business goals while maintaining compliance with insurance regulatory laws. The authors also wish to acknowledge the assistance of summer associate Deajah Scott, J.D. Candidate, University of Chicago Law School, Class of 2026.
