Make lemonade out of Lemonade: Takeaways from recent events

The new lawsuit against Lemonade raises the question of whether insurers are moving thoughtfully, safely and cautiously with new technologies.

Being a disruptor is hard. It requires taking disproportionate risks, pushing the status quo, and — more often than not — hitting speed bumps.

Recently, Lemonade hit a speed bump in its journey as a visible disruptor and innovator in the insurance industry when a privacy class-action lawsuit over its alleged collection and use of biometric data was filed on August 20. I am not privy to the details of the case or to what Lemonade is or isn’t doing, but the Twitter event and the public dialogue that built up to this moment bring forward some reflections and opportunities every carrier should pause to consider.

Let’s take a moment to make lemonade out of Lemonade events.

We should be talking about and demonstrating how we’re moving thoughtfully, safely and cautiously with these new technologies. That’s how we’ll build confidence among the general public, regulators, legislators, and other vital stakeholders.

Fear and scrutiny are mounting

Pay attention, AI innovators: if we don’t engage more intentionally with the public and regulators about the risks of algorithmic systems and our intended uses of consumer data, we are going to hit a massive innovation speed bump. If all we do is talk about “black boxes,” facial recognition, phrenology, and complex neural networks without also clearly investing in, and celebrating, our efforts in AI governance and risk management, the public and regulators will push pause.

Media coverage and dialogue about AI’s risks are getting louder. Consumers are concerned, and in the absence of more proactive industry messaging about responsible AI efforts and consumer-friendly visibility into how data is being used, regulators are reacting to protect individuals.

In July, Colorado passed SB-169. A fast follow-up to last year’s NAIC AI principles, Colorado’s law applies the most direct scrutiny yet to algorithmic fairness in insurance, to the management of disparate impact against protected classes, and to expectations for evidence of broad risk management across algorithmic systems. We will see how many states follow this lead, but insurers should not watch only state legislation and DOI activity. The FTC and U.S. Congress are also actively developing policies and laws aimed at creating greater oversight of AI and data.

Responsible is not perfect — that’s ok

Regulators are trying to find the balance between enabling innovation and protecting consumers from harm. Their goal is not a perfect and fault-free AI world but establishing standards and methods of enforcement that reduce the likelihood or scope of incidents when they happen. And they will happen.

Regulators across the U.S. are realistic. They know they will never be able to afford or attract the level of data science or engineering talent needed to deeply and technically interrogate an AI system, so they will need to lean on controls-based processes and corporate evidence of sound governance. They are hungry for the industry to proactively demonstrate stronger organizational and cross-functional risk management.

I find a lot of regulatory inspiration from two other U.S. agencies. The Food and Drug Administration (FDA) has put forward the concept of Good Machine Learning Practices (GMLP). The Office of the Comptroller of the Currency (OCC) recently updated its model risk management handbook and emphasized a life cycle approach to mitigating the risks of models and AI. Both recognize that minimizing AI risk is not simply about the models or the data but, much more broadly, about the organization, people and processes involved.

Slow down the ‘black box’ talk

Talking about “black boxes” everywhere is not only inaccurate but also counter-productive.

I’ve talked to and collaborated with hundreds of executives and innovation leaders across major regulated industries, and I’m challenged to identify a single example of an ungovernable AI system making consequential decisions about their customers’ health, finances, employment or safety. The risk would simply be too great.

The most common form of the broad technologies we colloquially call “AI” today is machine learning. These systems can be built with documentation of governance controls and business decisions made through the development process. Companies can evidence the work performed to evaluate data, test models, and verify actual performance of systems. Models can be instrumented to be recorded, versioned, reproduced, audited, monitored and continuously validated. Objective verification can be performed by internal or external parties.
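
To ground that claim, here is a minimal, illustrative sketch in Python (not any vendor's product or API) of one piece of that instrumentation: each prediction is appended to an audit log along with the model version and a hash of its inputs, so the decision can later be reproduced, reviewed and monitored. The model, field names and log file are hypothetical stand-ins.

```python
import hashlib
import json
import time

MODEL_VERSION = "risk-model-1.3.0"  # pinned version of the deployed model artifact


def score_applicant(features: dict) -> float:
    """Stand-in for a trained model; returns a toy risk score."""
    return round(min(1.0, 0.2 + 0.01 * features.get("claims_last_5y", 0)), 3)


def recorded_decision(features: dict, log_path: str = "decision_audit.jsonl") -> float:
    """Score an applicant and append an auditable record of the decision."""
    score = score_applicant(features)
    record = {
        "timestamp": time.time(),
        "model_version": MODEL_VERSION,
        # Hash the inputs so the exact decision can be matched later,
        # even if raw personal data is stored elsewhere or redacted.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "output": score,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return score


if __name__ == "__main__":
    print(recorded_decision({"claims_last_5y": 2, "years_insured": 7}))
```

A log like this is a small thing, but it is exactly the kind of evidence, versioned, reproducible and reviewable by an internal or external party, that turns "trust us" into something a regulator or auditor can verify.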

These machine learning systems are not impossibly opaque black boxes, and they are absolutely driving a positive impact on our lives. They are creating vaccines for COVID-19, new insurance products, new medical devices, better financial instruments, safer transportation, and greater equity in compensation and hiring.

We are doing great things without black boxes, and in time, we will also turn black boxes into more governable and transparent systems, so those, too, will have a great impact.

Risk management, not risk elimination

Risk management starts from a foundation of building controls that minimize the likelihood or severity of an understood risk. Risk management accepts that issues will arise.

AI will have issues. Humans build AI. We have biases and make mistakes, so our systems will have biases and make mistakes. Models are often deployed into situations that are not ideal fits. We are relatively early in understanding how to build and operationalize ML systems. But we are learning fast.

We need more companies to acknowledge these risks, own them, and then proactively and proudly show their employees, customers and investors that they are committed to managing them. Is there a simple fix for these challenges? No, but humans and markets are generally forgiving of unintentional mistakes. We do not forgive willful ignorance, lack of disclosures, or lack of effort.

Let’s make lemonade out of Lemonade

Returning to where we started, this Lemonade event has provided an object lesson about the challenges of balancing demonstrations of innovation with public fears about how companies are using AI.

Companies building high-stakes AI systems should establish assurances by bringing together people, processes, data and technology into a life cycle governance approach. Incorporate AI governance into your ESG initiatives. Prepare for the opportunity to talk publicly with your internal and external stakeholders about your efforts. Celebrate your efforts to build better and more responsible technology, not just the technology.

We have not done enough to help the broader public understand that AI can be fair, safe, responsible and accountable, perhaps even more so than the traditional human processes they often replace. If companies do not implement assurances and fundamental governance around their systems — which are not nearly as complex as many regulators and members of the public believe they are — we’re going to have a slowdown in the rate of AI innovation.

Anthony Habayeb is co-founder and CEO of Monitaur, an AI governance software company. From his earliest days as a strategy consultant at Accenture, Anthony envisioned founding his own company and has intentionally directed his career to develop the experience to do so. He is a proven business leader, managing over $200M of global P&L over the course of his career at companies like Yahoo!, Monster, and Gatehouse Media. He has launched and scaled new products and business units but also led the transformation and improvement of broken ones. Most recently, he served as an executive guiding Propel Marketing’s growth from $4M to $60M in four years. His work advising three AI-driven startups as well as research and discovery in the ML/AI community convinced him that a new kind of assurance was required to unlock the vast potential of intelligent systems in the future. Monitaur is the product of that realization. 

The opinions expressed here are the author’s own. This article is published here with permission from Monitaur. 
