How can insurers protect against deepfake fraud?
From faked injuries to fabricated auto accident damage, bad actors are finding new ways to file false claims.
Deepfakes — phony images or videos that look like the real thing but aren’t — are everywhere these days. Some are easy to spot and, as a result, don’t really hurt anyone. Others, such as those used by insurance fraudsters, can swindle insurers into paying phony claims or, even worse, fool a buyer into believing a substandard vehicle is safe, putting lives at risk. To avoid losses and weed out phony claims, insurers will need advanced fraud detection systems.
Advanced communication technology, it turns out, is a double-edged sword for insurers. By providing customers with the opportunity to upload images and data, companies can save themselves — and customers — a great deal of time, effort, hassle and money. It’s a trend that flowered during the pandemic and continues to grow.
For example, an uploaded claim that checks out can be paid out far more quickly and efficiently than one that requires a manual inspection of the vehicle by a company agent. An AI-based analysis can quickly assess the legitimacy of the claim, ensuring that customers get the money they are entitled to while protecting the insurer from overpaying. Uploaded, AI-analyzed claims are a boon to both customers and companies.
However, the same system also gives dedicated scammers an opening to steal from insurers. Deepfake technology has advanced to the point where, without advanced fraud detection systems, it is extremely difficult to discern whether an image is legitimate. In fact, according to surveys, more than 80% of insurance industry professionals are concerned that they could fall victim to deepfake fraud. Fortunately, there are techniques insurers can use to avoid being ripped off; if fraudsters are using advanced technology to cheat insurers, then insurers have to use even stronger technology to avoid becoming victims.
Making ‘fake’ look ‘real’
How would a deepfake insurance scam work? Presumably, it would require a degree of technological sophistication, with fraudsters perhaps turning to a paid online service. The service would take images of the customer’s vehicle and doctor them with advanced deepfake applications. The fraudster would then use the doctored images to file a claim, with the online service collecting a flat fee or a percentage of the payout. An even easier — and cheaper — approach is to download an image of vehicle damage from one of the many image databases available today and perhaps adjust it in Photoshop.
That kind of scam would probably work best for small claims, with the deepfake tech creating the kind of damage you might get in a fender-bender. A fraudster is unlikely to use a basic technique like this to claim a vehicle was totaled, because that would require far more extensive faking than an individual would likely risk. Such large-scale fraud would more likely be carried out by an organized group, which would use deepfake tech to produce images and would also assemble a phony ecosystem of attorneys, witnesses and other trappings to convince an insurer to pay out tens of thousands of dollars in claims. Such gangs have been operating since long before the deepfake era; with advanced technology, the job of altering images for profit gets even easier.
Protecting against deepfake claims
Many deepfakes are created using generative adversarial networks (GANs), in which two machine learning (ML) models are trained against each other: a generator produces forgeries while a discriminator tries to spot them, until the forgery looks authentic. These fakes are extremely difficult to detect and analyze, and countering them requires matching “firepower” in the form of deep learning neural networks.
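The adversarial tug-of-war described above can be illustrated with a deliberately tiny sketch. The toy below is not how production deepfake systems work; it shrinks the idea to one dimension, with a one-parameter “generator” that emits a single fake value and a logistic “discriminator” that tries to tell it apart from the real value. All numbers and names here are invented for illustration.

```python
import math

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

# Toy adversarial setup: the "real" data is a single value (5.0).
# Generator: one parameter g, its current fake sample.
# Discriminator: D(x) = sigmoid(w*x + b), trained toward 1 for real, 0 for fake.
w, b, g = 0.1, 0.0, 0.0
real, lr = 5.0, 0.05

for _ in range(4000):
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * g + b)
    # Discriminator step: descend the loss -[log D(real) + log(1 - D(g))]
    grad_w = -(1 - d_real) * real + d_fake * g
    grad_b = -(1 - d_real) + d_fake
    w -= lr * grad_w
    b -= lr * grad_b
    # Generator step: descend the loss -log D(g) with respect to g
    d_fake = sigmoid(w * g + b)
    g -= lr * (-(1 - d_fake) * w)

# After training, the fake sample g has drifted from 0 toward the real value,
# meaning the discriminator can no longer separate real from fake reliably.
```

Each round, the discriminator gets slightly better at separating real from fake, and the generator nudges its output toward whatever currently fools the discriminator — the same feedback loop that, at vastly larger scale, makes GAN-generated images look authentic.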
To protect themselves, insurers need to deploy big data analysis systems that can detect anomalies in images — indications that an image is not a “natural” one. These anomalies can range from the way light reflects off a vehicle, to the relative position of sunlight or artificial light, to the blood flow in an individual’s face. Advanced big data systems can analyze millions of images to see whether elements of one image match another — say, from an accident that occurred far away. Another technique that has shown success is to base the analysis not on individual images but on scans of a vehicle, using numerous sequential images, which makes it much harder for scammers to file claims built on doctored pictures.
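One lightweight way to check whether an image element has been reused from elsewhere — the cross-image matching mentioned above — is perceptual hashing. The article doesn’t name a specific technique, so the following is a minimal sketch of one common approach, an “average hash” over a tiny 8x8 grayscale grid; real systems hash full photos and compare against databases of millions of claims.

```python
def average_hash(pixels):
    """64-bit perceptual hash of an 8x8 grayscale grid: one bit per pixel,
    set when that pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits; a small distance suggests a reused image."""
    return sum(a != b for a, b in zip(h1, h2))

# A claimed damage photo (downsampled to an 8x8 grayscale grid for hashing) ...
original = [[(r * 8 + c) % 17 for c in range(8)] for r in range(8)]
# ... a uniformly brightened copy, as a fraudster might lightly retouch it ...
retouched = [[p + 10 for p in row] for row in original]
# ... and an unrelated image (here, the inverse pattern).
unrelated = [[255 - p for p in row] for row in original]

h0 = average_hash(original)
print(hamming(h0, average_hash(retouched)))   # 0: flagged as a likely match
print(hamming(h0, average_hash(unrelated)))   # 64: no match
```

Because the hash compares each pixel to the image’s own mean, uniform retouches like brightening don’t change it at all, while a genuinely different image lands far away in Hamming distance — which is why this family of techniques is good at catching the same damage photo filed under two different claims.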
In addition, advanced systems can even “unwind” the process used to create a forgery: deep learning neural networks analyze telltale aspects of an image to determine whether it was constructed by an adversarial network.
Clearly, advanced data, machine vision and AI-based analysis are necessary in order to combat the deepfake scourge. There are clear advantages for insurance companies to allow their customers to make claims online or via apps — but any goodwill and savings they accrue could be wiped out by savvy fraudsters who know how to use advanced technology to steal from firms. As online and app-based claims become more popular, insurers are going to need to prepare and defend themselves.
Neil Alliston is VP product and general manager, Europe, for Ravin.ai. Contact him at neil@ravin.ai.