Deepfake AI can hoodwink savvy business leaders and employees

AI has given threat actors a leg up in deepfake technology, costing one company $25.6 million.

“It’s possible that there are current losses from deepfakes that companies haven’t even uncovered yet,” said David Ledet, counsel in Reed Smith’s Insurance Recovery Group. “I think we still don’t know how widespread deepfakes can grow to be or how severe the losses might be.” (Credit: TensorSpark/Adobe Stock)

Deepfakes are becoming more sophisticated as artificial intelligence (AI) enhances fraudsters’ abilities to hoodwink employees. These virtual recreations can mimic voices and personas on voicemails and video calls, convincing employees – even those with cybersecurity training – to hand over data, VPN access or funds.

“It’s a rapidly evolving area in risk management with changing coverages and undeveloped law, and it’s definitely something that I think companies should keep top of mind,” said David Ledet, counsel in Reed Smith’s Insurance Recovery Group. According to Ledet, the two most common forms of deepfake fraud are face swapping and voice mimicking. “It used to take a large set of voice samples to convincingly imitate [someone], but it’s advancing, and now you only need a couple words or sentences to imitate someone’s voice.”

Real-world deepfakes 

Earlier this year, Taylor Swift was the victim of a deepfake AI scandal in which bad actors distributed AI-generated pornographic images of the U.S. pop star. The deepfakes swiftly spread across social media and were viewed by tens of millions of internet users before the images were scrubbed. Other stars have come under fire for using deepfake technology to imitate deceased celebrities in service of their own careers.

“Drake used AI to record a verse by Tupac in a diss track aimed at Kendrick Lamar,” said Ledet. “Those were convincing enough reproductions that Tupac’s estate’s lawyers got involved and sent a cease-and-desist letter to Drake’s camp… That same technology can be used to imitate a colleague or a CEO at a company in order to manipulate company employees.”

One of the most extensive and notorious deepfake frauds occurred in Hong Kong, where cybercriminals fooled a finance worker at a multinational firm into transferring $25.6 million. The employee received an email containing a suspicious request for a secret transaction from the company’s chief financial officer (CFO) in the U.K. A video call with the supposed CFO and several other staff members eased the worker’s doubts because the participants looked and sounded like his colleagues. In fact, every other attendee on the call was a threat actor using deepfake technology, according to Hong Kong police.

“Frankly, that’s one of the first ones,” said Ledet of the Hong Kong deepfake debacle. “Whereas ransomware has been around now for a while… People probably know someone who’s been affected by it or had a competitor or themselves deal with it. I think deepfake just isn’t that prevalent quite yet… You can see interplay between those types of fraud. I think deepfakes are just another tool that once they’re more common, companies will be more aware of their risk.”

Identifying and stopping deepfakes

As with phishing, business leaders and employees must communicate with each other and stay vigilant about recognizing deepfake fraud attempts, said Ledet. Companies can use software designed to detect AI-manipulated photos and videos, but these programs can only do so much, as cybercriminals constantly evolve their techniques to avoid detection.

The first step to stopping deepfake threat actors is understanding the risk; the next is exploring the coverage available through cyber insurance brokers to ensure the company has a policy that encompasses its assets and needs, said Ledet. Most businesses do not have a cybersecurity provision that calls out deepfakes specifically, though some policies may cover certain types of losses. However, phishing and email-scam policies and protocols, such as training and awareness campaigns, can also apply to deepfake AI. Business leaders can direct employees to contact the IT department when they encounter suspicious activity and can require additional forms of authentication where appropriate.

“If someone called me purporting to be someone else from a number I didn’t recognize, but it sounded like them, then simply saying ‘Hey, I’ll call you back on your company number or on your cell phone,’ could be one way to get a level of confidence that you’re speaking to who the person purports to be,” said Ledet, adding that separate approvals for certain actions can lower the risk.
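To make those two controls concrete, here is a minimal, hypothetical sketch in Python of how a finance workflow might enforce them: an out-of-band callback on a number pulled from the company directory (never the inbound caller’s number), plus a requirement for two separate approvers before a transfer is released. Every name here (TransferRequest, verify_via_callback and so on) is an illustrative assumption, not any real company’s system or vendor API.

```python
# Hypothetical sketch of the controls described above: out-of-band callback
# verification plus a separate second approval before funds move.
from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    requester: str                  # who asked for the transfer, e.g. "CFO" on a video call
    amount: float
    known_callback_number: str      # number from the company directory, NOT the inbound call
    callback_verified: bool = False
    approvals: set = field(default_factory=set)

def verify_via_callback(req: TransferRequest) -> None:
    """Simulate hanging up and re-dialing the requester on a directory number."""
    # In practice a human places this call; here we only record that it happened.
    print(f"Calling {req.requester} back on {req.known_callback_number}...")
    req.callback_verified = True

def approve(req: TransferRequest, approver: str) -> None:
    req.approvals.add(approver)

def execute_transfer(req: TransferRequest) -> bool:
    # Two independent controls must both pass before money moves.
    if not req.callback_verified:
        print("Blocked: identity not confirmed on a known number.")
        return False
    if len(req.approvals) < 2:
        print("Blocked: requires two separate approvers.")
        return False
    print(f"Transfer of ${req.amount:,.2f} released.")
    return True

if __name__ == "__main__":
    req = TransferRequest("CFO", 25_600_000.00, "+44 20 7946 0000")
    approve(req, "finance_manager")
    execute_transfer(req)       # blocked: no callback yet, only one approver
    verify_via_callback(req)
    approve(req, "controller")
    execute_transfer(req)       # both controls now pass
```

The design point is that neither control depends on how convincing the caller looks or sounds: a deepfake on the original video call cannot answer a callback to the real person’s directory number, and a single duped employee cannot release funds alone.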

As with all cyber threats, there is a learning curve with deepfakes. Adapting the responses companies already use for other cyber threats, such as prompt identification and proactive measures, can help them avoid fraud and financial losses from deepfake AI.
