AI in insurance: New opportunities come with new worries
Artificial intelligence is cutting-edge technology that may prove to be a two-edged sword.
Artificial intelligence (AI) business solutions — or “cognitive systems” such as IBM’s Watson — absorb enormous volumes of data, extract meaning from that data in the form of correlations, inferences, and predictions, and then “recommend” decisions on that basis.
AI innovations that leverage “big data” are increasingly found in every aspect of the insurance business — claims processing, fraud detection, risk management, marketing, underwriting, rate-setting and pricing.
The cutting-edge technology, however, may prove to be a two-edged sword. Its foundation is the harvesting of personal information from many millions of people, and the use of that information to make decisions affecting many millions more. While that foundation offers tremendous new efficiencies and other improvements on existing business practices, it also entails new concerns and risks, and ultimately may increase the number and kind of legal issues with which insurers must grapple.
Unfair discrimination
The predictive modeling capacities of AI systems are a natural fit for the risk assessment that lies at the heart of insurance rate-making and pricing. Specifically, increasingly sophisticated data-mining and modeling techniques have allowed insurers to use more objective and detailed quantitative information in assessing risk.
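To make the idea concrete, the following Python sketch fits a toy claim-frequency model to hypothetical rating factors and translates its prediction into an indicated premium. The rating factors, simulated data, severity figure and loading are all assumptions invented for illustration; they do not represent any insurer's actual rating methodology.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical quantitative rating factors for each policyholder.
driver_age = rng.uniform(18, 80, n)
vehicle_age = rng.uniform(0, 20, n)
annual_miles = rng.uniform(2_000, 30_000, n)
X = np.column_stack([driver_age, vehicle_age, annual_miles])

# Simulated claim counts: frequency falls with driver age and rises with mileage.
true_rate = np.exp(-1.5 - 0.02 * (driver_age - 40) + 0.00003 * annual_miles)
claims = rng.poisson(true_rate)

# A Poisson GLM is a common frequency-modeling choice; scaling keeps the solver stable.
model = make_pipeline(StandardScaler(), PoissonRegressor(alpha=1e-3, max_iter=300))
model.fit(X, claims)

# Price a new risk: expected claim frequency times an assumed average severity,
# with an assumed loading for expenses and profit.
expected_frequency = model.predict([[30, 5, 15_000]])[0]
premium = expected_frequency * 6_000 * 1.25
print(f"expected claims/year: {expected_frequency:.3f}, indicated premium: ${premium:,.0f}")
```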
AI systems also offer significant enhancement of existing insurer capabilities in the detection of fraud. Advanced predictive modeling can generate “red flags” during the claim intake process, which enables suspect claims to be routed for investigation while proper claims proceed to payment.
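A minimal sketch of that triage step, assuming a fraud score is produced upstream by a predictive model, might look like the following. The score range, referral threshold and queue names are hypothetical, chosen only to show the routing logic.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    fraud_score: float  # assumed to come from an upstream predictive model, 0.0 to 1.0

# Assumed cut-off for referral to special investigation; a real threshold would be
# calibrated against investigation capacity and false-positive costs.
REFERRAL_THRESHOLD = 0.8

def route_claim(claim: Claim) -> str:
    """Send high-scoring claims to investigation; let the rest proceed to payment."""
    if claim.fraud_score >= REFERRAL_THRESHOLD:
        return "special_investigation_queue"
    return "straight_through_payment"

for claim in (Claim("C-1001", 0.12), Claim("C-1002", 0.91)):
    print(claim.claim_id, "->", route_claim(claim))
```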
While these and other AI innovations represent exciting and economically efficient new business tools, some observers warn that they may result in disparate impacts on groups legally protected from discrimination. Moreover, a charge of unfair discrimination in insurance might also be leveled against practices that affect people based on characteristics such as income level, place of residence, occupation, education, marital or family status, and the like.
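One common way to screen for such effects is to compare outcome rates across groups. The sketch below applies a simple adverse-impact ratio check (the four-fifths rule of thumb borrowed from employment-law guidance, used here only as an example metric) to hypothetical decision data; the groups, outcomes and threshold are assumptions made purely for illustration.

```python
from collections import defaultdict

decisions = [  # (group label, adverse outcome?): hypothetical data
    ("group_a", False), ("group_a", False), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False), ("group_b", True),
]

totals, adverse = defaultdict(int), defaultdict(int)
for group, was_adverse in decisions:
    totals[group] += 1
    adverse[group] += was_adverse

# Favorable-outcome rate per group, and each group's ratio to the best-treated group.
favorable_rate = {g: 1 - adverse[g] / totals[g] for g in totals}
best = max(favorable_rate.values())
for group, rate in favorable_rate.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: favorable rate {rate:.2f}, ratio to best {ratio:.2f} -> {flag}")
```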
Lack of transparency, questionable reliability
The consequences to insurance customers of AI-based decision-making — e.g., higher premiums, denied claims, questioned veracity — may be immediately evident to those customers. Considerably less clear is how insurance professionals will be able to respond fully to customer requests for an explanation of the reasoning underlying those determinations, given the mystery that cloaks the algorithms by which cognitive systems produce their results.
Questions may also target the validity of the results of AI-based decision-making. The familiar aphorism that correlation is not causation retains its indisputable truth no matter how many times it is uttered. Thus, a given characteristic that correlates with increased risk or suspicion of fraud might be challenged as discriminatory to the extent there is no demonstrable causative connection between the characteristic and the risk or suspicion.
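A small simulation illustrates the point. In the hypothetical below, a characteristic correlates with claim cost only because both are driven by a third factor; once that confounder is held roughly constant, the apparent relationship largely disappears. The variable names and effect sizes are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

miles = rng.normal(12_000, 3_000, n)               # hidden common cause
density = 0.5 * miles + rng.normal(0, 2_000, n)    # characteristic used by the model
claim_cost = 0.1 * miles + rng.normal(0, 400, n)   # outcome; density has no direct effect

print("raw correlation(density, cost):", round(np.corrcoef(density, claim_cost)[0, 1], 2))

# Stratify on the confounder: within a narrow band of miles driven,
# the density/cost correlation collapses toward zero.
band = (miles > 11_500) & (miles < 12_500)
print("within-stratum correlation:   ",
      round(np.corrcoef(density[band], claim_cost[band])[0, 1], 2))
```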
Privacy
The “big data” upon which cognitive systems rely are harvested from numerous sources, many or all of which the average consumer might be disconcerted to learn were being mined and used by unknown third parties for unknown purposes. The recent, highly publicized controversy over use of data gathered by Facebook on millions of its members, including alleged improper use by third-party Cambridge Analytica, starkly illustrates both the substantial privacy liability exposures and vast potential for misuse of data in the age of “big data” analysis.
The worries…
Legislation and regulation
Insurers should prepare for increased legislation and regulation of the use of data fueling AI in decision-making. In 2017, at least two states — Maryland and Delaware — enacted new legislation prohibiting or restricting the use of factors such as credit history in insurance underwriting. Meanwhile, industry-specific consumer protection laws such as the Fair Credit Reporting Act (FCRA) and the Fair Housing Act (FHA) have long been enforced against insurance underwriting practices deemed to violate those laws.
With respect to the privacy concerns raised by data-driven technology, many federal and state laws and regulations address the use and disclosure of personal information in specific contexts, e.g., cybersecurity, medical information, protection of minors, and credit reporting. To date, however, no United States regulator has taken on the larger question of whether, and to what extent, an individual has a right to control the use or disclosure of personal information, nor has any imposed general constraints on the practices of harvesting, selling, or using personal information.
The European Union, by contrast, has taken a major step in that direction. The EU General Data Protection Regulation (GDPR), which was enacted in 2016 and becomes enforceable on May 25, 2018, expands the rights of “data subjects” (i.e., individuals) in numerous respects.
Lawsuits
Insurers’ uses of data-driven technology in the claims process have been challenged in the courts. Many such cases have involved first party medical payments coverage under auto insurance policies, and some have achieved notable successes for plaintiffs. For example, in Strawn v. Farmers Insurance Company of Oregon, the Oregon Supreme Court upheld a jury award of $8,750,000 in compensatory and punitive damages to a class of policyholders who challenged their insurer’s use of a licensed “cost containment software program” to process medical claims.
In the context of unlawful discrimination, class action plaintiffs apply the long-recognized theory of disparate impact to challenge underwriting practices that disproportionately target legally protected groups. Classes are also being certified in cases where harvesting of personal information is alleged to violate existing privacy laws. Going forward, almost any use of predictive algorithms that results in disadvantage to a definable group of consumers could theoretically inspire plaintiffs’ counsel to initiate a class action lawsuit.
In the real world of litigation, of course, there are hurdles to overcome before the theoretical evolves to the feasible. Nowhere is this more true than in the subworld of class actions, which can only proceed as such after the court certifies that the requirements of Fed. R. Civ. P. 23 (or its state law analog) have been met: i.e., the Rule 23(a) requirements of numerosity, commonality, typicality, and adequacy of representation, and at least one of the additional requirements under Rule 23(b).
The most frequently litigated 23(a) issue is commonality, which the Supreme Court re-examined in the 2011 case Wal-Mart Stores, Inc. v. Dukes. Noting that “any competently crafted class complaint literally raises common questions”, the Court emphasized that certifiable class claims “must depend upon a common contention” which “must be of such a nature that it is capable of classwide resolution”.
The commonality requirement, as “tightened” by Wal-Mart, raises the bar for putative class plaintiffs seeking to challenge insurers’ use of AI applications. For example, in Byorth v. USAA Casualty Insurance Company, the Montana Supreme Court held that an insurer’s mere act of sending medical claims to a third-party contractor which allegedly “applied computer algorithms to review the files for any possible means to deny the claims” was “precisely the type of superficial question that fails to demonstrate a common injury.” Rather, to satisfy commonality, the plaintiffs needed to “identify the allegedly unlawful, systematic program … that causes the denials [of claims].”
In addition to commonality, a putative class seeking damages must satisfy the more stringent requirement of Rule 23(b)(3) that “the questions of law or fact common to class members predominate over any questions affecting only individual members.” That provision, as well, has been “tightened” by the Supreme Court, and requires the plaintiffs to show that individual injury resulting from the challenged conduct can be proved by evidence common to the class. That requirement affords a powerful tool to opponents of class certification in lawsuits predicated on AI applications, because the causes of action available in such cases frequently do not lend themselves to class-wide calculation of damages.
Moving forward
The transformations that AI offers the insurance industry are understandably greeted with interest and enthusiasm. Insurance, however, is unique among industries in that constant vigilance toward emerging risks is the very essence of the business. For that reason, insurance companies are uniquely well positioned to understand and appreciate the potential liabilities that may follow the implementation of new technologies. Insurers should evaluate the potential for legislation, regulation and litigation targeting AI-based decision-making, and proceed with appropriate foresight and caution.
Laura Foggan is a partner and Elaine Panagakos is a counsel in Crowell & Moring’s Washington, D.C. office, where they are members of the firm’s Insurance/Reinsurance Group. To reach these contributors, send email to lfoggan@crowell.com or epanagakos@crowell.com.