Intel, the technology giant, has developed a deepfake detection system called FakeCatcher that uses blood flow and eye movement analysis to distinguish real videos from fake ones. Deepfakes are videos that use artificial intelligence to manipulate faces or create digital versions of individuals, and their growing prevalence has made quick detection crucial. FakeCatcher relies on a technique called photoplethysmography (PPG), which measures the subtle changes in skin colour caused by blood flow beneath the surface; deepfake faces do not reproduce these signals. It also analyzes eye movement, since the eyes in deepfake videos often diverge or move unnaturally. Intel claims that FakeCatcher is 96% accurate at identifying deepfakes.
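Intel has not published FakeCatcher's internals, but the general PPG idea can be sketched in a few lines: average the green channel of a cropped skin region over time, then check whether the resulting signal carries periodic energy in the human heart-rate band. Everything below, including the function name, the 0.7 to 4 Hz band, and the synthetic "video" used to exercise it, is an illustrative assumption, not Intel's implementation.

```python
import numpy as np

def ppg_pulse_strength(frames, fps, band=(0.7, 4.0)):
    """Toy PPG check: fraction of the signal's spectral energy that
    falls in a plausible human heart-rate band (~42-240 bpm).

    frames: array of shape (n_frames, h, w, 3), an RGB crop of skin
    fps:    video frame rate
    """
    # Blood volume changes modulate skin colour most strongly in the
    # green channel, so average it over the region for each frame.
    signal = frames[..., 1].mean(axis=(1, 2)).astype(np.float64)
    signal -= signal.mean()  # remove the DC offset

    # Look for a dominant frequency inside the heart-rate band.
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])

    total = spectrum[1:].sum()  # skip the DC bin
    return spectrum[in_band].sum() / total if total > 0 else 0.0

# A real face region should score noticeably higher than a deepfake,
# which tends to lack a coherent pulse signal.
rng = np.random.default_rng(0)
fps, n = 30, 300  # ten seconds of video
t = np.arange(n) / fps
pulse = 2.0 * np.sin(2 * np.pi * 1.2 * t)  # ~72 bpm pulse
real = 120 + pulse[:, None, None, None] + rng.normal(0, 1, (n, 8, 8, 3))
fake = 120 + rng.normal(0, 1, (n, 8, 8, 3))  # noise, no pulse
print(ppg_pulse_strength(real, fps))  # high in-band fraction
print(ppg_pulse_strength(fake, fps))  # low in-band fraction
```

This also makes the pixelation problem reported below concrete: heavy compression or pixelation adds noise and smears the colour signal, so the in-band pulse energy becomes hard to separate from the background.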
To test the system, a demonstration was conducted using a dozen clips of former US President Donald Trump and President Joe Biden. FakeCatcher correctly identified the deepfakes, which were lip-synced videos in which the mouth and voice had been altered. However, it occasionally misidentified authentic videos as fake, and it struggled in particular with pixelated footage, where the blood-flow signal is difficult to recover. Because FakeCatcher does not analyze audio, it also flagged some real videos whose voices would have made them obviously genuine to a human listener. This cautious bias helps catch more deepfakes, but it raises concerns that genuine videos will be flagged as fake.
Whether FakeCatcher works effectively in real-world contexts has been questioned. Experts argue that while Intel's evaluation statistics may be accurate, it is unclear how well they translate to real-world use. As with the accuracy figures claimed for facial recognition systems, performance in the wild can vary significantly with the difficulty of the test, including factors such as image quality and camera angle. Researchers are calling for independent analysis of FakeCatcher to assess its accuracy and effectiveness.
Accurate detection matters because deepfakes can be subtle and of varying quality: a manipulated clip in a political campaign advert may last only two seconds, and some are produced by altering the voice alone. The concern is that mistakenly flagging a genuine video as fake can carry serious consequences of its own. Intel's cautious tuning may minimize the risk of missing a deepfake, but it shifts that risk onto authentic footage. How well the system identifies deepfakes in real-world scenarios remains uncertain, and further independent evaluation is needed to determine its effectiveness.