The threat of deepfakes and their security implications
Advanced editing technology is making it easier for anyone to change online avatars or create synthetic personas.
Technology is making possible what we previously thought was unimaginable. Photos and audio of the deceased can now be brought to life. Advanced editing technology, once the exclusive domain of the movie industry, is now available to the average internet Joe. Anyone can download a mobile phone app, pose as a celebrity, de-age themselves, or add realistic visual effects that can spruce up their online avatars and virtual identities. All this and more can be made possible through deepfake technology — basically a form of artificial intelligence (AI) capable of creating synthetic audio, video, images and virtual personas.
Common deepfake techniques
Early deepfakes were low-quality audio, video or images falsified by amateurs who superimposed ordinary faces onto movie clips or made celebrities say absurd things. This digital manipulation has since matured to produce media that is often indistinguishable from the real thing to the naked eye.
According to Europol, common ways to create deepfakes include:
- Face Swap: Superimposing the face of one person onto another.
- Attribute Editing: Altering characteristics of a person in the video such as speech, style, hair color, etc.
- Face Re-enactment: Transferring facial expressions from a source video onto the person in a target video.
- Fully Synthetic Material: Models trained on real material learn what people look like, then generate pictures that are completely made up, for example, thispersondoesnotexist.com or www.generated.photos.
The implications of deepfakes for organizations
Although deepfake applications might seem like an innocent form of entertainment on the surface, they carry serious risks for businesses, governments and society as a whole. Threat actors can easily manipulate videos, swap faces, change expressions or synthesize speech to defraud and misinform individuals and companies. What's more, people are being bombarded with information, and it's becoming increasingly difficult to distinguish between what's real and what's fake.
Attackers can combine social engineering and deepfakes to win the trust of victims, exploit their weaknesses and lead them toward a desired action. For several industries, deepfakes can have terrifying implications.
- Financial Services:
In 2020, fraudsters used AI voice cloning technology to scam a bank manager into initiating wire transfers worth $35 million. Deepfakes also play a part in other scams, such as ghost fraud, where the persona of a deceased person is used to access online services, apply for credit cards, open new accounts, claim benefits and take out loans that are never repaid. Deepfakes are particularly concerning for property insurance, which is experiencing a rapid rise in touchless claims processing. Scammers can upload altered, manipulated or synthetic photos and media to self-service platforms, which significantly amplifies the risk of fraud.
- Politics:
Disinformation in politics is a long-established practice. Deepfakes can be leveraged as a strategic tool for spreading disinformation, manipulating public opinion, stirring civil unrest and deepening political polarization. As a recent example, a deepfake video of Ukrainian president Volodymyr Zelensky urging Ukrainians to lay down their arms was broadcast on Ukrainian TV.
- Stock Markets:
Consider a threat actor who wants to make a quick profit through stock manipulation. They create deepfake profiles of leading influencers and share them on social media and stock market forums. As the stock price jumps, the threat actor cashes out before the price corrects. In one real case, a deepfake video of Elon Musk promoting a fake trading platform went viral on social media.
- Justice Systems:
Fake evidence created with deepfakes can be submitted in a court of law, proceedings can be delayed or manipulated, and cross-examination becomes fraught when one party testifies to the authenticity of a deepfake video while the opposing party denies its contents. For example, in a custody battle in the UK, doctored audio files and footage were submitted to the court as evidence.
- Other Scenarios:
According to the FBI, deepfakes can lead to Business Identity Compromise (BIC) attacks that can result in significant financial and reputational damages. Additionally, deepfakes can facilitate a variety of criminal activities such as online harassment and bullying, fraud and extortion, non-consensual pornography and online child exploitation. The FBI noted an emerging trend where malicious actors use deepfakes to pose as job interviewees to gain access to company systems.
How can organizations mitigate the risks of deepfakes?
According to our own studies, individuals and organizations can lower the risk of deepfakes by following cyber hygiene best practices and implementing security controls, such as phishing-resistant multi-factor authentication and zero trust, that help reduce the risk of identity fraud. Security awareness training will need revamping, with special attention paid to this highly believable threat. Train people to look for visual indicators such as distortions, warping or inconsistencies in images and video, strange head and body movements, and syncing issues between face, lip movement and any associated audio.
- Encourage employees to flag suspicious activity to security teams.
- Review authorization processes for financial transactions in the context of deepfakes.
- Validate facts through multiple and independent sources of information.
- Exercise extreme caution with potential propaganda, especially on topics that are politically divisive or inflammatory.
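One way to act on the advice above about reviewing authorization processes and validating requests through independent channels is to require a verification code, derived from a pre-shared secret and the transaction details, before approving any high-value transfer requested by voice or video. A convincing voice clone cannot produce the code because it never held the secret. The sketch below is purely illustrative: the function names, the shared secret, and the eight-character code length are assumptions, not a description of any real banking system.

```python
import hmac
import hashlib

def transaction_code(secret: bytes, account: str, amount_cents: int) -> str:
    """Derive a short approval code bound to this exact transaction.

    Both parties compute it independently from a secret exchanged
    out of band; a caller who cannot produce it fails verification,
    no matter how convincing their voice or video appears.
    """
    message = f"{account}:{amount_cents}".encode("utf-8")
    digest = hmac.new(secret, message, hashlib.sha256).hexdigest()
    return digest[:8]  # illustrative length; real systems vary

def verify_request(secret: bytes, account: str,
                   amount_cents: int, presented_code: str) -> bool:
    """Check a presented code in constant time to avoid timing leaks."""
    expected = transaction_code(secret, account, amount_cents)
    return hmac.compare_digest(expected, presented_code)
```

Because the code is bound to the account and the amount, a fraudster cannot reuse a code overheard from one transaction to authorize a different one; changing either detail invalidates it.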
Within the next year, an estimated 20% of all account takeover attacks will use deepfake technology. Only when organizations recognize this threat, spread awareness of it, develop regulations and frameworks that mandate transparency, and deploy the widest available range of human-centered and technological defenses will institutions have a fighting chance to mitigate it.
Steve Durbin is chief executive of the Information Security Forum, an independent, not-for-profit association dedicated to investigating, clarifying, and resolving key issues in information security and risk management by developing best practice methodologies, processes, and solutions that meet the business needs of its members. ISF membership comprises the Fortune 500 and Forbes 2000. Find out more at www.securityforum.org.