Spotting Deepfakes: Key Clues to Uncover AI-Generated Videos and Images
AI-generated images and videos have advanced significantly in the past year, making it increasingly difficult for the general public to spot deepfakes. While some deepfakes serve harmless purposes, such as de-aging actors or recreating old roles, others have malicious intentions, such as scamming individuals or spreading misleading political content. With this rise in deepfake technology, it is crucial to be able to discern real from fake.
According to Paul Bleakley, an assistant professor of criminal justice at the University of New Haven, there are key details that can help identify deepfakes, and many of them lie in contextual clues. By observing certain aspects, one can distinguish between genuine videos and AI-generated ones. Here are some indicators to watch out for:
1. Excessive or minimal eye movement: Deepfakes often exhibit abnormal eye behavior, either too much or too little movement.
2. Unnatural facial expressions: AI-generated videos may feature facial expressions that appear forced or unnatural.
3. Lack of emotion or expressiveness: Deepfakes can lack genuine emotions, making the subjects seem robotic or devoid of authentic feelings.
4. Awkward body postures: Pay attention to unnatural body movements or postures that seem awkward or out of place.
5. Artificial teeth or hair: Deepfakes may have inconsistencies in the appearance of teeth or hair, making them appear fake or unnatural.
6. Inconsistencies in movement or audio: Discrepancies between a subject's movements and the accompanying audio can indicate manipulation.
Similar inconsistencies can also apply to still images. Some image generators struggle with accurately depicting hands or other body parts, which can be telltale signs of a deepfake.
Identifying deepfake audio poses an even greater challenge. A well-trained AI program can use just a few seconds of a person's voice to make them say anything. However, there are subtle cues that can help determine whether an audio clip is genuine. Vijay Balasubramaniyan, CEO of the AI voice authentication company Pindrop, notes that lip movements and the way the tongue affects pronunciation are unique to each individual and cannot be replicated by AI.
Recognizing the growing threat of AI-generated content, some members of Congress are proposing legislation to make such content more identifiable. One suggestion is watermarking, under which AI-generated content would be required to carry embedded bits of code labeling it as AI-generated.
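To illustrate the idea behind such labeling, here is a minimal sketch of one possible approach: attaching a tamper-evident provenance tag to generated content. All names and the keyed-hash scheme are illustrative assumptions, not part of any proposed legislation or existing watermarking standard.

```python
import hmac
import hashlib

# Hypothetical signing key that a generator service would keep private.
SECRET_KEY = b"example-signing-key"

def label_content(content: bytes) -> dict:
    """Attach a provenance tag declaring the content AI-generated.

    The tag is an HMAC over the content, so any later edit to the
    bytes will no longer match the recorded signature.
    """
    tag = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return {"ai_generated": True, "signature": tag}

def verify_label(content: bytes, label: dict) -> bool:
    """Check that the label matches the content (detects tampering)."""
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return label.get("ai_generated") is True and hmac.compare_digest(
        expected, label["signature"]
    )

sample = b"generated image bytes"
label = label_content(sample)
print(verify_label(sample, label))           # True
print(verify_label(b"edited bytes", label))  # False
```

Real proposals, such as cryptographically signed content-provenance metadata, are more elaborate, but the core idea is the same: the label travels with the content and can be checked after the fact.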
As the technology behind deepfakes continues to advance, individuals must be aware of the signs that can help identify manipulated videos and images. By staying vigilant and educated, we can protect ourselves from potentially harmful and misleading content.
Sources:
– Washington Examiner