Summoning the demon: The new risks of artificial intelligence
In the first reported incident of AI voice fraud, criminals mimicked a chief executive’s voice to steal approximately $243,000.
“We should be very careful about artificial intelligence… With artificial intelligence, we’re summoning the demon.” — Elon Musk at MIT’s AeroAstro Centennial Symposium
As technology progresses, new risks and opportunities for fraud escalate with it. We’ve seen “deepfake” videos, in which images of celebrities or politicians are manipulated to say or do things they never did. A deepfake is a technique for human image synthesis based on artificial intelligence (AI). Initially considered an “image” application, deepfakes have now been used for voice mimicry. This year marked the first reported incident of an AI-generated voice deepfake used to commit a major fraud.
According to an August 2019 report in The Wall Street Journal, a United Kingdom firm’s CEO was conned into transferring approximately $243,000 to cyber thieves using a voice deepfake. The CEO of the unidentified company believed he was on the phone with his boss, the chief executive of the firm’s German parent company. After speaking with “the voice,” he followed instructions to immediately transfer €220,000 (approx. $243,000) to a bogus bank account of a Hungarian supplier.
The voice belonged to a criminal using AI voice technology to mimic the German chief executive. The loss from this new form of cyber fraud was covered by the firm’s insurer, Euler Hermes Group SA, a subsidiary of Munich-based financial services company Allianz SE. According to Rüdiger Kirsch of Euler Hermes, the victim recognized his boss’s German accent, and the voice even carried his specific vocal “melody.”
The cyberthief called three times: first to initiate the transfer, then to falsely claim it had been reimbursed, and finally to request a follow-up payment. At that point the target grew suspicious. The promised reimbursement had not appeared, and he noticed the call came from an Austrian phone number.
The second payment was halted, but the first had already been moved from the Hungarian account to one in Mexico and then dispersed to multiple international locations.
How was this possible?
How do we begin to understand how a convincing voice deepfake is even possible? Not only does one company claim to have mastered the technology, it has boasted about it.
This summer, researchers from Dessa, a Toronto-based AI firm, announced they had produced a perfect voice simulation of popular podcaster and comedian Joe Rogan.
According to Dessa, the result was the most realistic AI voice simulation produced to date. The source material was the voice of Joe Rogan, one of the most popular podcast hosts in the United States. With 1,356 episodes to date, his shows supplied hours of voice samples for the technology to absorb.
The imitation of Rogan’s voice was created with a text-to-speech program named RealTalk, which generates lifelike speech from text inputs alone: the user only needs to type the words, no speaking required. The final result included subtle nuances such as breathing and the “ums” and “ahs.” Given sufficient data, the program could mimic anyone’s voice. (The deepfake of Rogan can be heard on Dessa’s YouTube channel.)
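RealTalk itself has never been released publicly, so its internals can’t be shown here. But the basic text-to-speech idea (type words, get speech) can be illustrated with the open-source pyttsx3 library, which simply drives the operating system’s stock voices and is far cruder than Dessa’s neural model. A minimal sketch, with an illustrative phrase of our own choosing:

```python
# Minimal text-to-speech sketch using the open-source pyttsx3 library.
# This plays the operating system's built-in voices; it is NOT RealTalk,
# which was never released. It only illustrates the core workflow:
# the user types text, and the program speaks it aloud.

import pyttsx3

engine = pyttsx3.init()           # initialize the OS speech engine
engine.setProperty("rate", 150)   # speaking speed in words per minute
engine.say("Please transfer the funds to our supplier today.")
engine.runAndWait()               # block until the speech finishes
```

The gap between this robotic output and a voice carrying a specific person’s accent and “melody” is exactly what neural models like RealTalk closed.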
Fortunately, the technical expertise RealTalk requires keeps the technology out of the general public’s hands – for now. What about the near future? Apps already exist for creating simple video deepfakes on our cellphones. Will voice AI progress to the point where only a few seconds of a target’s voice are required to create a perfect voice deepfake?
The cyber thieves used a technology similar to RealTalk to mimic the German executive’s voice. That incident, the first known case of a voice deepfake used for fraud, occurred in March 2019, two months before Dessa’s public unveiling of RealTalk.
The theft is being investigated by Europol’s European Cybercrime Centre. No suspects have been identified, and nothing is known about the software the thieves used or how they gathered the vocal samples needed to mimic the executive. As with Joe Rogan’s plentiful voice samples, could recordings of the executive’s speeches have been available online?
Regrettably, in our business we can no longer trust a voice simply because it sounds familiar, whether the caller is introducing themselves or giving instructions. With the arrival of both video and voice deepfakes, confidence in calls and videos could begin to decline.
What’s the exposure?
According to the 2018 Pindrop Voice Intelligence Report, incidents of voice fraud escalated 350% between 2013 and 2017. The inevitable combination of artificial intelligence and synthetic voice creation will only increase the risk of fraud.
With advances in AI, an increasing number of firms are moving to chatbots to automate many aspects of service reps’ phone conversations. Will a chatbot be able to recognize if it’s speaking to another synthetic voice?
Ironically, AI technology is also touted to help insurers and financial industries identify fraud using predictive analytics and anomaly detection. But the recent theft using a voice deepfake is a direct example of the technology’s misuse for criminal purposes. Attackers may soon use voice AI to automate phishing attacks or to bypass voice verification safeguards.
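To make that defensive use concrete, here is a minimal sketch of the simplest form of anomaly detection: flagging wire transfers that deviate sharply from an account’s payment history. The data, threshold and function name are illustrative assumptions, not any carrier’s actual model.

```python
# Minimal anomaly-detection sketch: flag wire transfers that deviate
# sharply from an account's payment history. The amounts, threshold,
# and function name are hypothetical examples, not an insurer's model.

from statistics import mean, stdev

def flag_unusual_transfers(history, new_amounts, z_threshold=3.0):
    """Return the amounts in new_amounts that sit more than
    z_threshold standard deviations above the historical mean."""
    mu, sigma = mean(history), stdev(history)
    return [amt for amt in new_amounts
            if sigma > 0 and (amt - mu) / sigma > z_threshold]

# Hypothetical example: routine supplier payments, then a sudden outlier.
past_payments = [4200, 3900, 4500, 4100, 3800, 4300]
print(flag_unusual_transfers(past_payments, [4400, 243000]))
# -> [243000]  (a deepfake-sized transfer stands out immediately)
```

Real systems layer far richer signals (payee history, geography, timing) on top of this idea, but the principle is the same: an urgent, out-of-pattern transfer should trigger scrutiny no matter how convincing the voice requesting it sounds.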
Pros and cons of voice AI
When Dessa publicized its voice AI, it was not for nefarious purposes. Speech synthesis has genuinely useful potential. Benefits include:
- Enhanced communication for people with speech disabilities or who speak through text devices, such as people with Parkinson’s disease or cerebral palsy, or stroke patients.
- Customers can speak to a voice “assistant” that sounds as natural as talking to a friend.
- Customized applications, such as devices that use a voice the user finds comforting, or apps endorsed by celebrities.
However, with any light there must be dark (the demon part). Risks with voice AI include:
- A synthesized voice is used to gain access to secured systems or locations by mimicking officials.
- An enemy uses voice AI to enter military or government facilities.
- An audio (or video) deepfake of a politician or world leader is used to manipulate election results or to incite disorder.
- A mimicked voice is used to impersonate someone for the purposes of harassment or threats.
Warnings for the fraud arena
We will see an increase in cybercrimes involving artificial intelligence in the near future. We’ve already seen video deepfakes imitate celebrities and public figures, and a voice deepfake has now been used to steal substantial funds.
Will AI and deepfakes be the next frontier for insurance fraud? As the technology progresses, deepfake phishing and spearphishing (phishing aimed at specific individuals) will become easier to commit. Imagine such scenarios:
- A deepfake of a CEO reports negative financial results, instantly impacting the company’s stock value.
- A deepfaked voice of an executive directs the accounting department to route funds to bogus accounts.
- Deepfakes of someone participating in dishonest activities for the purposes of blackmail, threats or harassment.
- General insurance fraud: deepfaked videos of accidents or of individuals being injured.
What we can do
It is imperative to educate our peers and employees with the most current security awareness training. This must be an ongoing effort, as perpetrators constantly modify their tactics.
For financial risks, verification techniques should be employed before any funds are transferred. Two-factor authentication is a simple way to add security. A low-tech tactic for an employee who receives an irregular request is to note the calling number and return the call through a verified number for the actual department, not the phone’s “call back” feature; a sketch of this rule appears below.
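Spelled out in code form, the call-back rule reduces to one decision: never act on the inbound caller ID; always verify through an independently known directory. The directory entries, phone numbers and function name below are hypothetical.

```python
# Sketch of the call-back verification rule described above: never act
# on the inbound caller ID; confirm through an independently known number.
# The directory entries, numbers, and function name are hypothetical.

KNOWN_DIRECTORY = {
    "parent-company CEO office": "+49-69-5550-0100",  # made-up number
    "accounts payable":          "+44-20-5550-0200",  # made-up number
}

def verification_number(claimed_role, inbound_number):
    """Return the directory number to call back, or None if the role is
    unknown. The inbound number is deliberately ignored: a spoofed or
    foreign caller ID (like the Austrian number in the WSJ case) must
    never be treated as proof of identity."""
    return KNOWN_DIRECTORY.get(claimed_role)

callback = verification_number("parent-company CEO office",
                               "+43-1-5550-9999")
print(f"Hold the transfer until the request is confirmed via {callback}")
```

The design point is that identity is established over a channel the employee controls, which a voice deepfake alone cannot intercept.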
Companies should deepen their knowledge of cyber fraud coverage, and both companies and individuals should protect themselves from internal and external cyber fraud. Carriers can work with you to assess your fraud risks and cover any exposures for financial loss.
There are valuable applications for artificial intelligence, but we need to prepare defenses to ensure our communications are genuine and secure. Awareness and dialogue are the first steps, along with persuading officials and lawmakers to take action and create viable safeguards.
Richard Wickliffe, CPCU, ARM, CLU, (RLWickliffe@yahoo.com) has worked in the insurance industry for over 20 years. He enjoys writing and speaking about unique fraud and insurance trends. He is the recipient of the FBI’s Exceptional Service in the Public Interest Award and is also the author of crime fiction, where he was awarded Best Popular Fiction by the Florida Book Awards.