Intel Unveils First Real-time Deepfake Detector
- By Paul Mah
- November 30, 2022
Intel has announced FakeCatcher, a technology it says can detect fake videos with 96% accuracy, and do so in real time. Touted as the world’s first real-time deepfake detector, it was designed by Ilke Demir in collaboration with Umur Ciftci from the State University of New York at Binghamton.
FakeCatcher at work
According to Intel, traditional deep learning-based detectors attempt to weed out fakes by evaluating the raw data for signs of inauthenticity; they flag a video as fake by identifying something wrong with it.
In contrast, FakeCatcher works by seeking out clues that a video is real. It does so by assessing video pixels for subtle clues that denote “blood flow”. This is because veins change color as our hearts pump blood through our bodies, generating blood flow signals.
These signals are collected from all over the face, and algorithms translate them into spatiotemporal maps. Deep learning is then used to classify a video as real or fake, with results returned in milliseconds.
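Intel has not disclosed the details of its pipeline, but the general approach it describes, extracting subtle color signals from facial regions and arranging them as a spatiotemporal map for a classifier, can be sketched in a few lines. The sketch below is a minimal illustration with NumPy only: the grid size, function names, and the synthetic "pulse" demo are assumptions for illustration, not Intel's actual method.

```python
import numpy as np

def extract_ppg_signals(frames, grid=4):
    """Average the green channel over a grid of facial regions per frame.

    frames: array of shape (T, H, W, 3) -- video frames, assumed to be
    face crops. Green is used because it is most sensitive to blood-volume
    changes in photoplethysmography-style methods.
    Returns an array of shape (grid*grid, T): one raw signal per region.
    """
    T, H, W, _ = frames.shape
    hs, ws = H // grid, W // grid
    signals = np.empty((grid * grid, T))
    for i in range(grid):
        for j in range(grid):
            patch = frames[:, i*hs:(i+1)*hs, j*ws:(j+1)*ws, 1]
            signals[i * grid + j] = patch.mean(axis=(1, 2))
    return signals

def spatiotemporal_map(signals):
    """Normalize each region's signal to zero mean and unit variance,
    yielding a 2-D (region x time) map a CNN classifier could consume."""
    mu = signals.mean(axis=1, keepdims=True)
    sd = signals.std(axis=1, keepdims=True) + 1e-8
    return (signals - mu) / sd

# Demo on synthetic frames: a faint periodic brightness change stands in
# for the pulse-driven color variation a real face would exhibit.
rng = np.random.default_rng(0)
t = np.arange(64)
pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t / 30)  # ~72 bpm at 30 fps
frames = rng.normal(128, 2, size=(64, 32, 32, 3)) + pulse[:, None, None, None]
m = spatiotemporal_map(extract_ppg_signals(frames))
print(m.shape)  # -> (16, 64)
```

In a full system, the resulting map would be fed to a trained deep network; a genuine video should show coherent, periodic blood-flow structure across regions, while a synthesized face typically does not.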
The solution relies on a host of Intel technologies and is optimized for the Advanced Vector Extensions 512 (AVX-512) and Advanced Vector Extensions 2 (AVX2) instruction sets found in the latest Intel microprocessors. Software used includes the multi-threaded Intel Integrated Performance Primitives library and the OpenCV toolkit for processing real-time images and video.
Intel says the real-time detection platform can run up to 72 detection streams simultaneously on 3rd Gen Intel Xeon processors – though it did not specify resolution or frame rate.
Deepfakes: A growing problem
The proliferation of easy-to-access AI tools for generating images and videos makes deepfake videos a growing threat. Crucially, they are tough to detect in real time; the current approach entails uploading suspect videos for analysis, which can take hours. In the meantime, deepfake-driven deception can cause harm and lead to negative consequences.
Social media platforms could leverage the technology to prevent users from uploading harmful deepfake videos, says Intel. Moreover, global news organizations could use the detector to avoid inadvertently amplifying manipulated videos.
The Intel press release is short on technical details, however, with a linked infographic that essentially repeats the claims in the press release. We have asked Intel to point us to a white paper and asked whether the platform will be released to the public. We will update this article when we hear back.
Image credit: iStockphoto/grinvalds
Paul Mah
Paul Mah is the editor of DSAITrends, where he reports on the latest developments in data science and AI. A former system administrator, programmer, and IT lecturer, he enjoys writing both code and prose.