By Max Dorfman, research writer, Triple-I
Some good news on the deepfake front: Computer scientists at the University of California, Riverside have been able to detect manipulated facial expressions in deepfake videos with higher accuracy than current state-of-the-art methods.
Deepfakes are sophisticated forgeries of an image, video or audio recording. They have been around for several years, and versions of the technology are available in social media apps, such as Snapchat, which offer face-changing filters. But cybercriminals have begun using them to impersonate celebrities and executives, creating the potential for more harm from fraudulent claims and other forms of manipulation.
Deepfakes also have the dangerous potential to be used in phishing attempts to manipulate employees into providing access to sensitive documents or passwords. As we previously reported, deepfakes pose a real challenge for companies, including insurers.
Are we prepared?
A recent study by Attestiv, which uses artificial intelligence and blockchain technology to detect and prevent fraud, surveyed U.S.-based business professionals about the risks to their companies from synthetic or manipulated digital media. More than 80 percent of respondents recognized that deepfakes pose a threat to their organization, with the top three concerns being reputational threats, IT threats and fraud threats.
Another study, conducted by CyberCube, a cybersecurity and technology company specializing in insurance, found that the blending of home and corporate IT systems created by the pandemic, combined with the increasing use of online platforms, is making social engineering easier for criminals.
“As the availability of personal information increases online, criminals are investing in technology to take advantage of this trend,” said Darren Thomson, CyberCube’s head of cybersecurity strategy. “New and emerging social engineering technologies such as deepfake video and audio will fundamentally change the cyber-threat landscape and are becoming both technically and economically feasible for criminal organizations of all sizes.”
What insurers can do
Deepfakes can facilitate the submission of fraudulent claims, the creation of falsified inspection reports, and even fabricate evidence of assets, or of damage to assets, that do not exist. For example, a deepfake could conjure images of damage from a nearby hurricane or tornado, or depict a non-existent luxury watch that was insured and then lost. For an industry already suffering from $80 billion in fraudulent claims annually, the threat is significant.
Insurers could adopt automated deepfake detection as a potential safeguard against this new mechanism of fraud, though questions remain about how it can be integrated into existing claims-filing procedures. Self-service claims processes are particularly vulnerable to tampered or fake media. Insurers must also consider the potential for deepfake technology to create large-scale losses if it were used to destabilize political systems or financial markets.
AI and rule-based models for identifying deepfakes across all digital media remain a potential solution, as does digital authentication of photos or videos at the time of capture to tamper-proof the media, preventing policyholders from uploading their own photos. Using a blockchain or immutable ledger can also help.
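To make the capture-time authentication idea concrete, here is a minimal sketch in Python. It is not any vendor's or insurer's actual system; the `SimpleLedger` class and function names are illustrative assumptions. The idea is that a trusted capture app fingerprints the media the moment it is recorded and appends that fingerprint to an append-only, hash-chained ledger; a claims system can later verify that a submitted file matches a fingerprint recorded at capture, so any subsequent edit (including a deepfake manipulation) breaks the match.

```python
import hashlib
import json
import time


def fingerprint(media_bytes: bytes) -> str:
    """Return a SHA-256 fingerprint of the raw media bytes."""
    return hashlib.sha256(media_bytes).hexdigest()


class SimpleLedger:
    """A toy append-only ledger: each entry includes the hash of the
    previous entry, so tampering with any earlier record is detectable."""

    def __init__(self):
        self.entries = []

    def record(self, media_bytes: bytes) -> dict:
        """Called by the trusted capture app at the moment of recording."""
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "media_hash": fingerprint(media_bytes),
            "timestamp": time.time(),
            "prev_hash": prev,
        }
        # The entry's own hash covers its contents plus the chain link.
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self, media_bytes: bytes) -> bool:
        """Check whether this exact file was fingerprinted at capture time."""
        h = fingerprint(media_bytes)
        return any(e["media_hash"] == h for e in self.entries)


if __name__ == "__main__":
    ledger = SimpleLedger()
    original = b"raw photo bytes from the capture app"
    ledger.record(original)

    print(ledger.verify(original))                    # unaltered file passes
    print(ledger.verify(b"edited photo bytes"))       # any modification fails
```

Note that a cryptographic hash only proves the file is unchanged since capture; it says nothing about whether the scene itself was staged, which is why this approach is typically paired with a trusted capture app rather than allowing policyholders to upload arbitrary photos.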
As Michael Lewis, CEO of Claim Technology, says: “Running antivirus on incoming attachments is non-negotiable. Shouldn’t the same thing apply to controlling fraud on every image and document?”
The research results at UC Riverside may offer the beginning of a solution, but as Amit Roy-Chowdhury, one of the co-authors, put it: “What makes the deepfake research area more challenging is the competition between creation and detection and prevention. With more advances in generative models, deepfakes will be easier to synthesize and harder to distinguish from real ones.”