NY Dept. of Financial Services proposes AI guidance for insurers

In a proposed circular letter, the DFS recommended actions for insurers to prevent discrimination when using AI and external consumer data and information sources (ECDIS).


The State of New York’s Superintendent of Financial Services, Adrienne A. Harris, issued a proposed circular letter for public comment on January 17 that aims to address and prevent discrimination arising from insurers’ use of AI and consumer data.

The letter — available in full on the Department of Financial Services’ website — acknowledges the benefits that external consumer data and information sources (ECDIS) and artificial intelligence present to both insurers and customers by creating simplified, efficient processes. However, it also warns: “At the same time, ECDIS may reflect systemic biases and its use can reinforce and exacerbate inequality. This raises significant concerns about the potential for unfair adverse effects or discriminatory decision-making. ECDIS may also have variable accuracy and reliability and may come from entities that are not subject to regulatory oversight and consumer protections.”

The letter also states that the self-learning nature of AI systems “increases the risks of inaccurate, arbitrary, capricious, or unfairly discriminatory outcomes that may disproportionately affect vulnerable communities and individuals or otherwise undermine the insurance marketplace in New York.”

The New York DFS laid out expectations within the letter for how insurers can develop and manage this technology in a way that mitigates potential harm to consumers.

“Technological advances that allow for greater efficiency in underwriting and pricing should never come at the expense of consumer protection,” Superintendent Harris said in a release. “DFS has a responsibility to ensure that the use of AI in insurance will be conducted in a way that does not replicate or expand existing systemic biases that have historically led to unlawful or unfair discrimination.”
