Developing insurance issues as deepfakes continue improving

Though deepfakes are not new, their realism is improving rapidly. As the fakes become more convincing, viewers become less able to distinguish fact from fiction.

As the line between reality and fiction grows blurrier by the day, it is only a matter of time before viewers act on the information provided through a deepfake to their detriment. And, where corporations are involved — even if they were the subject of the misinformation — viewers may look to the corporation to make them whole for any financial injuries. Thus, for corporations seeking to reduce cyber risk and protect their bottom line, the question of whether insurance will respond to such fact patterns is not something that should be deferred to the future.

The artificial synthesis of video has the potential to wreak great havoc on individuals, businesses and society. Businesses need to be prepared for this disruption and consider whether insurance assets will respond.

Risk of deepfakes from pornography and politics to the boardroom

Deepfakes gained notoriety in the same space that many new technologies have: pornography. They then moved from pornography to politics. For example, in 2019, a 2016 sketch in which Jimmy Fallon portrayed then-candidate Donald Trump bragging about a recent primary win to Barack Obama, played by actor Dion Flynn, was manipulated with machine learning to appear as though it depicted the real Donald Trump and Barack Obama.

Also, in 2019, a digitally altered video of Nancy Pelosi appearing to slur drunkenly through a speech was widely shared on social media. Though the video was quickly debunked as a fake, former President Trump posted the clip of his political rival on Twitter with the caption: “PELOSI STAMMERS THROUGH NEWS CONFERENCE.”

Deepfakes have spread to the business world as well. In one well-known example that may be a harbinger of things to come, cybercriminals impersonated a business executive’s voice using AI software and demanded a fraudulent transfer of $243,000.

Deepfake videos have also been used to make public statements on behalf of notable executives, such as when a deepfake of Mark Zuckerberg showed him purportedly admitting that Facebook’s true goal was to manipulate and exploit users.

Now that deepfakes can be created with much more ease than in the past (and will only be easier and cheaper to make as technology improves), almost anyone can wreak havoc with a few clicks. Thus, it is only a matter of time until we see deepfakes of executives making statements that negatively affect a company’s stock price and deepfakes being used to infiltrate a company’s network to steal money or intellectual property.

Deepfakes in the insurance context

Insurance coverage for deepfakes is in its nascent stages. According to a 2020 report prepared by Marsh, deepfakes “are outpacing the law,” and insurers are assessing the potential risks to businesses.

Commercial risks posed by deepfakes span reputational damage, lost revenues, business interruption, extra expenses and drops in share prices. For example, cybercriminals can infiltrate a company’s network to conduct reconnaissance and then leverage that information with a voice deepfake of an executive to fraudulently authorize a large transfer of money from the company to the criminals’ accounts. From an insurance standpoint, this type of loss may be covered by a number of insurance policies, including cyber or crime, depending on the policy language.

As the Mark Zuckerberg deepfake demonstrates, hackers, competitors or other bad actors will be able to publish convincing deepfakes of corporate executives making statements or engaging in activities that are potentially injurious to the company's reputation and may cause a decline in revenues, share prices, or both.

The technology already exists to broadcast a deepfake of Elon Musk stating that he has surreptitiously programmed the cameras in Tesla’s cars to record images and send the images to the U.S. government, when, in fact, the opposite is true. Such a statement would undoubtedly damage Tesla’s reputation and share price and could conceivably lead to a shareholder derivative lawsuit and other types of lawsuits. In this instance, various insurance coverages, including cyber, D&O and E&O, could be responsive to such losses and liabilities. Further, crisis management, a coverage provided in some policies, could also assist in containing the reputational fallout from such an event.

A key issue with insurance coverage for deepfakes will be the trigger of coverage. Of course, the unprecedented and evolving nature of deepfakes complicates hypothetical coverage queries, as the unique facts and policy language will undoubtedly vary for every loss scenario.

Nevertheless, one thing is certain: Any company may be a target of deepfakes, and the financial impact could be significant. As such, it is critically important to consider how insurance assets might be brought to bear in the event of losses and liabilities resulting from a deepfake.

Implications of ‘G&G Oil’

In G&G Oil Co. of Indiana v. Continental Western Insurance Co. (2021), the Indiana Supreme Court weighed in on a crime policy's coverage of a ransomware attack and bitcoin ransom payment. The court held that an Indiana trial court improperly granted an insurer's motion for summary judgment because fact issues existed as to whether the cybercriminals' intrusion into the insured's computer system was accomplished through trickery, whether the loss was direct, and other questions about the application of the policy language to the facts.

While G&G Oil did not deal with deepfakes, it does illustrate that insurance coverage for emerging cyber threats and losses may be found beyond the bounds of a standalone cyber policy. Accordingly, insureds will want to consider all potential coverages in assessing the role that insurance might play in mitigating losses and liabilities arising from deepfakes.

As is the case with all technology-related risks, the best protection is an up-to-date security program that includes regular training to educate employees about cutting-edge schemes. However, as deepfake technology progresses, even the most security-conscious employee may fall prey to a deepfake. Moreover, even with the best internal training measures, those external to a company — customers, vendors and the general public — remain vulnerable to deepfakes.

Consequently, business executives, risk managers, in-house counsel, and the insurance industry must anticipate that losses and liabilities arising from deepfakes will increase in the coming years. It is prudent to be prepared to mitigate and respond to these new risks with preventative measures and insurance assets.

Peter A. Halprin is a partner, Jacquelyn M. Mohr is a senior managing associate, and Nicolas A. Pappas is an associate at Pasich LLP.

Opinions expressed here are the authors’ own. 
