AI will raise novel issues in insurance litigation, attorneys say
Carriers should ensure AI tools are accurate and produce reliable results to avoid potential lawsuits.
Citing an instance in which more than 300,000 health insurance claims were denied with little human review, a Pennsylvania lawmaker is developing a bill that would force insurance companies to disclose when artificial intelligence is used in the claims process.
The bill, which is set to be introduced by Rep. Arvind Venkat, D-Allegheny, aims to get ahead of concerns about what many expect to become the pervasive use of AI throughout the insurance industry.
Although insurance litigators in Pennsylvania said they haven’t yet seen any litigation focusing on a carrier’s use of AI in the claims process, they said it seems only a matter of time before AI-related insurance disputes begin to arise, raising novel civil procedure and discovery issues.
“This is obviously a new frontier for insurance companies,” Post & Schell’s Bryan Shay, who focuses on insurance litigation and bad faith, said, adding that the technology has the potential to create great efficiencies for both carriers and insureds but will also lead to novel disputes. “If you’re relying on this algorithm, how much do you have to consider the reasonableness of the data it’s using?”
Since the high-profile release of generative AI tools such as ChatGPT, awareness of AI has skyrocketed. Industries across the board have begun launching tools that help with gathering, processing and evaluating information. While applications such as chatbots have become ubiquitous on websites in various public-facing industries, the use of AI for more sophisticated tasks, such as researching, analyzing and writing, is still viewed largely with skepticism.
When it comes to the insurance industry, litigators say the use of AI in more rote tasks, such as during the information-gathering intake process, will likely not lead to many novel issues that carriers and counsel should be aware of. However, using AI to process and evaluate claims will likely lead to new types of disputes that attorneys and courts will need to grapple with.
According to Shay, the two things carriers and attorneys should be most aware of are the accuracy and reliability of the AI tools they’re using.
AI is only as good as its dataset
“You’re going to end up having litigation over how good the AI really is,” Shay said. “It’s incumbent on carriers and counsel for carriers to become familiar with these technologies and to be familiar with the ways that reliance is good, but overreliance can cause problems.”
As more AI tools begin hitting the market, it is becoming increasingly clear that a particular program is only as good as the datasets it is trained on, and so training bias is one issue carriers, insureds and attorneys will likely wade into when litigation arises.
Both Shay and First Law Strategy Group’s David Senoff gave the example of Microsoft’s AI chatbot Tay, which had to be shut down shortly after launching because it began spewing racist and hateful tweets. The bot had been programmed to learn from its interactions on the site, so after users began targeting it with hate speech, the AI’s output became hateful as well.
A similar phenomenon could happen with a carrier’s algorithm, which could be trained, or adjusted, to produce certain outcomes.
“You could turn it on and let every claim go through, or make it like a trickle,” Senoff said.
Much the way plaintiffs attorneys once sought the flow charts and computer programs that underpinned insurance adjusters’ decision-making when that work was done by hand, they will likely now begin seeking access to the underlying algorithms and training materials used to develop the AI, lawyers said.
Senoff said obtaining that information will likely face its own hurdles, with carriers arguing it is proprietary. But even with access, complications are bound to arise, he said.
“You’re going to get this crazy algorithm and then you’re going to have to figure out, what does it mean?” Senoff said. “Now in addition to medical experts, you’re going to need computer experts to decipher the algorithm.”
Shay offered similar sentiments, saying that, since juries and judges may need to begin wading into the reasonableness of the decisions underlying claims denials, the evaluation processes themselves will eventually need to be evaluated.
“It almost could turn bad faith into a battle of the tech and algorithm experts,” he said, giving this hypothetical argument: “Here’s what [the AI] didn’t consider, so was it reasonable to rely on AI if you knew this particular thing wasn’t being considered?”
Bad-faith litigator Wes Payne of White and Williams said the carriers he works with understand that there needs to be human involvement in the evaluation process, and that failing to provide it could expose a carrier to bad faith claims.
“Before you make a decision, there’s got to be some human with experience going either ‘yea’ or ‘nay,’” Payne said. “Most aren’t really trying to use it that way at this point in time. As it develops, maybe they will, but right now nobody’s really willing to bet the company on AI.”
Pennsylvania’s AI legislation
In Pennsylvania, Venkat is expected to introduce legislation that would require carriers to disclose when AI is used to evaluate health insurance claims. It would also require carriers to define the algorithms being used so they can be subjected to current laws and regulations, and would require specialized health care professionals to review all claims initially reviewed by AI.
In a news release announcing the proposal, Venkat cited a ProPublica article reporting that Cigna denied more than 300,000 claims, with reviewers spending an average of about 1.2 seconds on each case.
“With professionals spending approximately 1.2 seconds on a case and subsequently issuing rapid denials based on algorithmic decision-making, individuals may receive unexpected bills for medically necessary treatments. It is time to regulate AI in health insurance claims processes that may only accelerate such dangerous abdication of claims review responsibilities,” Venkat said.
In response to the ProPublica report, Cigna said the publication’s characterization was “incomplete,” and that the review system was created to “accelerate payment of claims for certain routine screenings,” which allows the company to “automatically approve claims when they are submitted with correct diagnosis codes.”
A spokesperson for Venkat’s office confirmed that the proposed legislation would cover only the health insurance sector.
A spokeswoman for the Pennsylvania Department of Insurance said state regulators are continuing to study the use of AI in the life and health insurance markets.
“Insurance regulators are in the beginning stages of developing guidance that is specific to AI through the National Association of Insurance Commissioners,” Lindsay Bracale, communications director for the department, said. “Insurers, whether they use AI or not, are required to comply with all applicable insurance laws and regulations in Pennsylvania.”