Ethical AI use helps insurers stay ahead of regulations
For now, the onus is on insurers and insurtech companies to ensure AI does not cross ethical boundaries.
Businesses of any type have an ethical responsibility to conduct their affairs with care, accountability, honesty and respect for the law. Ethics is especially essential in the insurance industry, where moral obligations collide with government regulations. The rise of artificial intelligence (AI) in insurance has brought business ethics to the forefront, with many insurers weighing the benefits of AI against the potential for liability when regulations catch up to technological advancements.
No universal AI regulations
As of May 2, 2024, 10 U.S. states have adopted the recommended AI regulations from the National Association of Insurance Commissioners (NAIC), with Oklahoma being the latest. These laws have various requirements, which include submitting AI-based algorithms and training data sets and disclosing if AI-based algorithms are used in the review process. Other countries, including members of the European Union, China, Canada and Australia, have enacted similar regulations. The Organization for Economic Co-operation and Development and the United Nations are working on global guidelines for AI.
More AI-specific regulations are coming down the pike. In states without oversight, the onus is on insurers and insurtech companies to ensure AI in insurance does not cross ethical boundaries.
Ethical conundrum
Generative AI uses data and patterns to create unique content and is constantly learning through interactions with human users. Insurers use AI in underwriting, claims, customer service, marketing and other departments. Property and casualty insurance carriers often implement AI in automating claims processing and detecting fraud, as well as in risk assessments, such as determining roof age through virtual inspections.
The applications for AI seem endless, but insurers must be mindful of the regulatory environment, or lack thereof, and consider the ethics. U.S. lawmakers and policyholders have valid concerns about AI in insurance, with worries about biased data leading to higher premiums for certain groups of people or denial of coverage.
A conservative approach
Many insurers are wondering what the regulatory landscape will look like next month, let alone next year. Insurtechs are recommending a conservative approach that follows the strictest regulations to date.
Ethical AI starts with the data sets, and insurers must use unbiased data to avoid discriminatory decisions. For example, CoreLogic develops its programs with input from legal experts to weed out compliance concerns and data biases.
Another critical component is transparency, which is the focus of many U.S. regulations. Insurers should be open about their AI algorithms, how they are used to make decisions and how they interact with users, according to Zendesk. Departments using AI should always retain a human element, ensuring that an agent, broker or other insurance professional maintains oversight.
A conservative AI approach helps ensure fair premium pricing and claims processing, and it safeguards insurers from future liability as regulations reshape the Wild West of AI in insurance.