Keeping the robots trustworthy: The ethics of artificial intelligence
As the use of AI expands into the insurance space, it must reduce bias while still providing accurate outcomes.
Insurance is a business of decisions. This is especially true across the policy lifecycle, from underwriting to renewal and throughout the claims process. Every day, insurance professionals make critical choices that influence both the insurer and the insured and can have lasting impacts on the business. They answer questions such as: How much risk is associated with a new policy? Is that risk worth underwriting, and how should the policy be written to best mitigate it?
When a policyholder does make a claim, a claims professional must decide whether it is legitimate or shows indications of fraud. Does the policyholder prefer to manage the claim online without intervention by the insurer? If a claims professional is involved, is the claim appropriate for straight-through processing and rapid payment to save time and effort?
Insurers make hundreds of macro- and micro-decisions in the process of underwriting policies and settling claims. Over the past few years, insurance company executives have been exploring how artificial intelligence (AI) can help automate or optimize decision-making across the policy lifecycle and the claims process. With a better understanding of the possibilities AI offers, insurance market players are beginning to invest more significantly in these technologies.
AI is a matter of trust
Even though the industry’s comfort level with AI is rapidly increasing, one important question remains: “Can I trust the robot’s decision?” The answer to this question is critical, especially in situations where AI is required to make accurate decisions quickly and efficiently. Because many view AI as a “black box” technology, there is a concern that the decisions it produces may not necessarily be the best. For many, that concern centers on bias.
How can we know the algorithms we’re trusting to make important decisions are not being unduly influenced? Can we trust that factors such as gender, ethnicity, the neighborhood in which a customer lives, or other potentially problematic data are not influencing decisions such as “is this claim suspicious or not?”
Minimizing bias
The good news is that the industry has taken notice and is taking steps to minimize the risk of bias. For example, in 2019, a group of 52 experts assembled by the European Commission published its “ethical rules for trusted AI.” The Commission’s goal is to support the development and adoption of trustworthy AI in all economic sectors, provided it is “ethical, sustainable, human-oriented and respectful of fundamental values and rights.” This closely resembles guidelines recently adopted by the National Association of Insurance Commissioners (NAIC), which state that “AI systems must not be designed to harm or deceive people and should be implemented in a manner that minimizes negative outcomes.”
Building an unbiased algorithm
We are already in a position to reduce the impact of bias, especially when using AI to support better decision-making in the claims and underwriting process. The industry must rethink the problem it is trying to solve: it must be deconstructed into multiple “sub-problems,” each treated individually to derive the optimum outcome.
An example of this concept is using AI to help identify potentially fraudulent claims. It’s crucial to understand that AI should not be trying to determine whether a claim is fraudulent. Rather, AI should be making decisions about various aspects of the claim to determine whether they are suspicious. As such, algorithms must be designed to identify questionable behaviors and determine the degree of suspiciousness. Fundamentally, a claim is not suspect simply because of a policyholder’s ethnicity or gender, or the neighborhood in which they live. Algorithms must reflect this fact.
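As a minimal sketch of what this decomposition might look like in practice (the field names, checks, and thresholds below are hypothetical and purely illustrative, not a description of any vendor’s actual system), each sub-problem scores an observable behavior, protected attributes are never inputs, and the output is a set of reasons rather than a verdict:

# Illustrative Python sketch only: hypothetical field names and thresholds.
# The point is that each "sub-problem" is scored on behavior, and protected
# attributes (ethnicity, gender, neighborhood) are never used as inputs.

PROTECTED_ATTRIBUTES = {"ethnicity", "gender", "neighborhood"}

def suspicion_signals(claim: dict) -> dict:
    """Score individual aspects of a claim, not the policyholder."""
    # Defensive check: refuse to run if protected data slipped into the features.
    leaked = PROTECTED_ATTRIBUTES & claim.keys()
    if leaked:
        raise ValueError(f"Protected attributes must not be used: {leaked}")

    return {
        # Each signal is a separate sub-problem with its own rationale.
        "claim_soon_after_policy_start": claim["days_since_policy_start"] < 30,
        "amount_far_above_typical": claim["amount"] > 5 * claim["typical_amount_for_loss_type"],
        "documents_inconsistent": claim["document_dates_conflict"],
    }

def review_recommendation(claim: dict) -> dict:
    """Combine the signals into a recommendation with reasons, not a final verdict."""
    signals = suspicion_signals(claim)
    triggered = [name for name, fired in signals.items() if fired]
    return {
        "refer_to_investigator": len(triggered) >= 2,  # a human makes the final call
        "reasons": triggered,
    }

The design choice worth noting is that protected attributes are excluded at the input level rather than being “corrected for” afterward, so bias cannot enter the scoring in the first place.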
The human element of AI
It is also essential to point out that AI is only ever as good as the humans behind it. In the example of using AI for claims fraud detection, the goal should not be an algorithm that simply declares “the robot says fraud.” Instead, the goal should be “the robot thinks fraud, and here is why,” to ensure we’re arriving at the right outcome. A human must then look at each “robot thinks fraud” result to determine whether the robot’s decision is right and how it came to that decision. Letting the AI know when it gets it right, and when it gets it wrong, is crucial. Fundamentally, building a better, unbiased AI is a process, one in which humans and robots work side by side, learning from each other and adjusting accordingly.
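A rough sketch of how that human-in-the-loop feedback might be recorded (again with hypothetical names and purely illustrative logic): an investigator reviews each referral, notes whether the robot was right, and that labeled feedback becomes the raw material for the next round of tuning.

# Hypothetical sketch of the human feedback step: investigators confirm or
# reject each "robot thinks fraud" referral, and the log of their judgments
# is what tells the AI when it got it right and when it got it wrong.

from dataclasses import dataclass, field

@dataclass
class ReferralReview:
    claim_id: str
    robot_reasons: list           # the "why" behind the robot's suspicion
    investigator_confirmed: bool  # the human's final judgment

@dataclass
class FeedbackLog:
    reviews: list = field(default_factory=list)

    def record(self, review: ReferralReview) -> None:
        self.reviews.append(review)

    def precision(self) -> float:
        """Share of referrals the human confirmed: a signal for retraining or retuning."""
        if not self.reviews:
            return 0.0
        confirmed = sum(r.investigator_confirmed for r in self.reviews)
        return confirmed / len(self.reviews)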
Keeping the robots trustworthy
AI has the potential to drive incredible business value for insurers by enabling accurate and efficient decision-making at scale. To do so effectively, the industry must be diligent about building and deploying AI designed to make the best decisions, which fundamentally means decisions free of bias. By keeping the robots focused on the right things, reinforcing when they make the right decisions, and fine-tuning their algorithms when they don’t, we can keep the robots trustworthy.
Eric Sibony is co-founder and chief science officer of Shift Technology, a provider of AI-native fraud detection and claims automation solutions for the global insurance industry. He has supervised the design of the solution and its evolution, as well as the R&D on the algorithms it uses. Contact him at eric.sibony@shift-technology.com.