Deepfakes Go Mainstream: From Banks to Elections, the Reality-Warping Tech Explodes
- By Lachlan Colquhoun
- February 13, 2024
Not so long ago, a world of AI deepfakery seemed some way off in the future. Yet here we are in early 2024, and it's going mainstream.
Stories about Taylor Swift and Donald Trump deepfakes are old news already as the media feasts on new examples of the practice, some of which open up a whole new world of cyber criminality and scamming.
Criminals used to blast safes and hold up banks; in February 2024, they used deepfakes instead. The technique can be very lucrative, as shown by the USD 25 million siphoned off a Hong Kong company after an employee was fooled.
In what was a very elaborate sting, the criminals created deepfakes of the employee’s bosses based in the U.K.
A raft of technologies out there now can help, such as open-source generative adversarial networks (GANs) and avatar-building SaaS applications like HeyGen.
These are exciting and innovative technologies that, in the wrong hands, can be used for evil rather than good.
On video calls to Hong Kong, the deepfakes issued instructions for transferring funds, which the employee dutifully performed in 15 transactions over a week.
The employee was reportedly suspicious until the video calls, which included deepfakes of the company’s U.K.-based CFO, co-workers and some external stakeholders.
The scary part of this scam is that, like the most effective business strategies, it is eminently scalable.
In one fell swoop, the story undermines trust in every Zoom or Teams call anyone will take in the future.
Campaigning by deepfake
The political realm also gave us fresh deepfake examples this week in the highly contentious election in Pakistan.
Former Prime Minister and cricket champion Imran Khan has worked around his imprisonment by campaigning through AI-generated versions of himself.
This started in December 2023 when he appeared via AI at campaign rallies, with the content based on notes he provided from his jail cell.
His party's social media lead said the result felt like a "65-70% match" of the former Prime Minister. (One immediately wonders whether the Hong Kong scammers were 65-70% believable. If so, the employee may have been a little gullible.)
“By 2026, Gartner predicts deepfake attacks on face biometrics will mean that 30% of enterprises will no longer consider identity verification solutions to be reliable.”
Khan's deepfake was back in action last week during polling, as his PTI party took the lead.
Khan has been behind bars since August 2023, but last Friday his AI-generated version released a victory speech telling voters their “massive turnout has stunned everyone.”
With a highly contentious election coming up in the U.S. this year and the U.K. also set for a poll that could lead to a change of Government, there is little doubt that voters in those countries are set to be deluged with deepfakes.
Combined with the likelihood of supercharged social media campaigns, many of which are likely to spread false claims and conspiracy theories, we appear to have reached a new peak in our "post truth" era.
Digital media technology has hardly been an unalloyed positive force for spreading knowledge and accurate information.
So, while we brace for a world where, more than ever, nothing is as it seems and nothing can be taken at face value, cybersecurity practitioners have a new battleground.
By 2026, Gartner predicts cyberattacks using AI-generated deepfakes on face biometrics will mean that 30% of enterprises will no longer consider identity verification and authentication solutions to be reliable in isolation.
Presentation attacks remain the most common attack vector, but digital injection attacks increased by 200% in 2023.
“In the past decade, several inflection points in fields of AI have occurred that allow for the creation of synthetic images,” says Akif Khan, vice president analyst at Gartner and no relation to Imran Khan.
"These artificially generated images of real people's faces, known as deepfakes, can be used by malicious actors to undermine biometric authentication or render it inefficient," he continues.
“As a result, organizations may begin to question the reliability of identity verification and authentication solutions, as they will not be able to tell whether the face of the person being verified is a live person or a deepfake.”
Today, identity verification and authentication processes using face biometrics rely on presentation attack detection (PAD) to assess the user's liveness.
“Current standards and testing processes to define and assess PAD mechanisms do not cover digital injection attacks using the AI-generated deepfakes that can be created today,” says Khan.
This means that cybersecurity officers and vendors have more work to do. Gartner advises choosing vendors who can demonstrate capabilities and a plan that go beyond current standards, and who are monitoring, classifying and quantifying these new types of attacks.
“Organizations should start defining a minimum baseline of controls by working with vendors that have specifically invested in mitigating the latest deepfake-based threats using PAD coupled with image inspection,” said Khan.
Once the strategy is defined and the baseline is set, cybersecurity and risk management leaders must include additional risk and recognition signals, such as device identification and behavioral analytics, to increase the chances of detecting attacks on their identity verification processes.
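The layered approach Gartner describes can be sketched in a few lines: a PAD liveness score alone is no longer trusted, so it is combined with device and behavioral signals into one composite decision. This is a hypothetical illustration only; the field names, weights and thresholds are assumptions for the sketch, not any vendor's actual API.

```python
# Illustrative sketch of layered identity-verification risk scoring.
# All signal names, weights and thresholds are assumed for this example.
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    liveness_score: float   # 0.0-1.0 from a PAD/liveness check
    known_device: bool      # device identification signal
    behavior_score: float   # 0.0-1.0 from behavioral analytics

def composite_risk(s: VerificationSignals) -> float:
    """Return a 0.0-1.0 risk score; higher means more likely an attack."""
    risk = 1.0 - s.liveness_score            # weak liveness raises risk
    if not s.known_device:
        risk += 0.3                          # penalty for an unfamiliar device
    risk += (1.0 - s.behavior_score) * 0.5   # anomalous behavior raises risk
    return min(risk, 1.0)

def decide(s: VerificationSignals, threshold: float = 0.6) -> str:
    """Escalate to step-up verification when the combined risk is high."""
    return "step-up verification" if composite_risk(s) >= threshold else "allow"
```

The point of the design is that a deepfake which narrowly passes the liveness check still trips the threshold when it arrives from an unknown device with anomalous behavior, which no single signal would catch on its own.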
All this means that the arms race in the cybersecurity war has been ratcheted up again.
Organizations need to leverage data science and filtering technologies. They also need to improve their corporate practices and put pressure on social media companies, legislators and regulators.
Even if they do that, the deepfake deluge is on its way and is set to be one of the main themes of 2024. So, strap yourselves in for a wild ride.
Image credit: iStockphoto/StudioM1