Technology is changing the underwriting of health care risks

Tricky liability concerns: What happens if a patient under a doctor's care is injured and artificial intelligence (AI) is involved?

Globally, artificial intelligence applications in health care are raising profound questions about medical responsibility. Here, an associate professor at the University of Tokyo inspects 3-D digital images of a brain tumor, cranial nerves and blood vessels created from a magnetic resonance imaging scan. (Photo: Kiyoshi Ota/Bloomberg)

Digital innovation has already begun to transform the face of health care worldwide. Ground-breaking new ideas and tools have shown their potential to offer patients faster, more accessible and more efficient care.

While the future might look bright for the health care industry, there are profound obstacles to overcome for health care providers and insurers alike.

The health care industry is one of the last to undergo a “digital revolution,” and with good reason: Industries like retail and banking face little to no high-risk downside from the widespread, rapid adoption of technology. In health care, however, whether it’s a computer glitch, a cyberattack or just an ill-informed piece of artificial intelligence (AI), the consequences of a mistake can escalate into life-or-death situations.

Artificial intelligence applications

In 2018, Rock Health, a venture fund that specializes in digital health, reported that nearly $3 billion was pumped into digital health companies leveraging AI and machine learning (ML) — making it one of the hottest areas for funding and drawing huge media attention.

Many providers have subsequently begun to question the speed at which AI has made inroads into medical practices, especially when it relies on machine learning. This questioning came shortly after the approvals of several AI-powered technology solutions for magnetic resonance imaging (MRI) and computed tomography (CT) image analysis tools by the Food and Drug Administration (FDA).

In more recent months, increased pressure and scrutiny from the medical community encouraged the FDA to release a new framework on AI/ML software to provide the industry with legal and regulatory clarity.

But this still doesn’t address the issue of liability: What happens when a patient is injured under a doctor’s care and there is AI involved?

Profound responsibility questions

Globally, AI is raising profound questions about medical responsibility. Normally, when something goes wrong, the source of the blame can easily be traced. For example, a misdiagnosis would usually be the responsibility of the presiding doctor. Or a faulty medical device that gives an incorrect read, and ultimately harms a patient, would likely see the manufacturer held to account.

But what does such precedent mean for AI?

This has become such a widespread concern that in June 2019, the American Medical Association (AMA) adopted a new policy, “Augmented Intelligence in Health Care.” It provides the basic framework for the evolution of AI in health care, and it helps identify the appropriate steps to educate the industry around how AI technology works and how to evaluate its applicability, appropriateness and effectiveness in caring for patients.

The AMA also released some guidance on liability. Its stance: “If a doctor subsequently fails to properly warn a patient and adequately disclose the risks and benefits associated with the product (artificial intelligence), it is the doctor who will face liability.”

However, it added, “If a patient becomes injured by use of an AI technology, current legal models are insufficient to address the realities of these innovations.”

This will no doubt be met with horror from the medical community, which cringes at the thought of having to swallow a liability pill for emerging medical errors that might be completely outside of its direction, control or knowledge base, all at a time when “no fault, no blame” approaches have been successfully enforced.

Many will be keeping an eye on the development of case law involving AI. In health care, this is almost non-existent at this juncture. But a trio of lawyers believe there will be one critical focus: whether or not the software simply repeats clinical guidelines. If the software does, and its recommendations are followed, the doctor will probably be legally OK, they write. If the software doesn’t, and its recommendations are poor, there’ll likely be legal liability for the doctor.

Despite this uncertainty around safety, quality and liability, many are keen to press ahead with AI developments in health care. Providers note that “misdiagnoses are the leading cause of malpractice claims globally and machine learning could greatly diminish health care and legal costs by improving diagnostic accuracy.”

With the large differences in opinion over this topic and absent any case law to set a precedent, many stakeholders must brace for a decade-long debate over who makes the ultimate decision on patient care: the technology or the traditional health care provider?

Brave new underwriting world

For the insurance industry, it’s important to understand that as the health care and technology sectors continue to intertwine, practitioners and the companies operating in this space will start to experience a wider range of risks. This will force underwriters to rethink historical solutions.

It would be reasonable to assume that the basic “health care” or “medical” incident liability triggers have become outdated and are no longer fit for modern practice. These triggers and their definitions have remained eye-wateringly static despite the global rise of technology in health care over the past decade, and the knock-on effect will undoubtedly be finger-pointing between insurers or, at worst, denied claims.

A recent report by The Doctors Company (TDC) highlighted the broad range of risks health care providers now face. From 2010 through 2017, the industry saw a 400% rise in “electronic-related” claims against providers, a direct result of the adoption of electronic medical records.

Lloyd’s of London also recently called for clarity on cyber coverage in insurance policies. Many will be watching with interest to see how the health care liability market addresses this mandate. Will it lead to a willingness to affirmatively cover bodily injury arising from cyber events, or to the application of absolute exclusions?

Much like the medical community, insurers will need to approach underwriting these emerging exposures with caution. Many tried-and-tested predictive models will no longer be enough, and explicit weighting will need to be applied for these emerging risks, which are present, real and getting harder to predict.
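To make the idea of weighting concrete, here is a purely illustrative sketch, not drawn from any actual rating model, of how an underwriter might layer loadings for emerging digital-health exposures onto a traditional base malpractice rate. All exposure names and figures below are invented for illustration, not market data.

```python
# Illustrative only: apply multiplicative loadings for emerging
# digital-health exposures on top of a traditional base rate.
# All factor names and values are hypothetical.

BASE_RATE = 0.012  # hypothetical base malpractice rate per $1 of revenue

# Hypothetical loadings for exposures a purely historical model would miss
EMERGING_LOADINGS = {
    "ai_diagnostics": 0.15,   # provider relies on AI/ML image analysis
    "telemedicine": 0.10,     # remote consultations
    "ehr_dependency": 0.05,   # heavy reliance on electronic medical records
}

def loaded_rate(base_rate: float, exposures: list) -> float:
    """Apply a multiplicative loading for each emerging exposure present."""
    rate = base_rate
    for exposure in exposures:
        rate *= 1 + EMERGING_LOADINGS.get(exposure, 0.0)
    return rate

# A practice using AI diagnostics and electronic records:
premium_rate = loaded_rate(BASE_RATE, ["ai_diagnostics", "ehr_dependency"])
print(premium_rate)  # base rate * AI loading * EHR loading
```

The multiplicative structure is just one possible design; an actuary might equally use additive loadings or a full credibility-weighted model once claims data on these exposures accumulates.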

In many respects, this is an incredibly difficult task when underwriters are also juggling sky-rocketing claims inflation in medical malpractice. But the key will be for underwriters to combine the skills learned in underwriting technology and medical malpractice risks to mitigate the risks of the digital health revolution.

While health care liability insurers will have a choice whether or not to adapt to this revolution, it is going to be impossible for them to ignore it.

Timothy Boyce (tboyce@cfcunderwriting.com) is the health care practice leader at CFC Underwriting, Ltd.
