Deepfakes are computer-generated media, created with artificial intelligence (AI) tools and techniques, intended to deceive audiences into believing the content is real. The technology has been used maliciously to create non-consensual pornography by swapping the faces of unsuspecting victims onto the bodies of adult performers or other celebrities. It has also been used to manipulate public discourse through doctored audio or video of political figures. To create a deepfake, a user needs a target video along with footage of the person being impersonated, and then maps that person's face onto the video using deep neural networks and Generative Adversarial Networks (GANs).
Fortunately, there are ways to spot deepfakes, such as running a reverse image search or investigating who posted the content. And the technology is not only used for harm. The HBO documentary “Welcome to Chechnya” used deepfake technology to conceal the identities of Russian LGBTQ refugees whose lives were at risk, while still allowing their stories to be told.
Organizations like WITNESS, which focuses on using media to protect human rights, are getting creative with the technology. According to shirin anlen, a media technologist for WITNESS, the technology is not something to fear but a tool that can be harnessed for advocacy, political satire, and more.
Many companies are developing ways to detect deepfakes, among them Sensity and Operation Minerva. Sensity’s detection platform works like an antivirus for deepfakes, alerting users when a video appears suspect, while Operation Minerva identifies potential deepfakes by comparing them against a catalog of known videos.
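The catalog-comparison idea can be sketched with a simple perceptual hash: fingerprint each frame of a suspect video and check whether the fingerprint is close to that of a known, cataloged original. This is a minimal, hypothetical illustration (the helper names and the toy 8x8 "frames" are invented for the example), not how Sensity or Operation Minerva actually work:

```python
# Illustrative sketch of catalog-based matching. All names here
# (avg_hash, hamming, matches_catalog) are invented for this example.

def avg_hash(frame):
    """64-bit 'average hash': 1 for each pixel above the frame's mean."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return tuple(int(p > mean) for p in pixels)

def hamming(h1, h2):
    """Number of bit positions where two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def matches_catalog(frame, catalog, threshold=10):
    """Flag a frame whose hash is close to any cataloged original."""
    h = avg_hash(frame)
    return any(hamming(h, known) <= threshold for known in catalog)

# Toy 8x8 grayscale "frames" standing in for video stills.
original = [[r * 8 + c for c in range(8)] for r in range(8)]
tampered = [row[:] for row in original]
tampered[0][0] = 255  # a small edit barely changes the hash
unrelated = [[255 - (r * 8 + c) for c in range(8)] for r in range(8)]

catalog = [avg_hash(original)]
```

A near-duplicate of a cataloged frame still matches despite small edits, while unrelated footage does not. Real systems use far more robust perceptual fingerprints, but the comparison step follows the same idea.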
It is vital for people to stay informed about what this technology can do and to understand the dangers it poses. Nasir Memon, a professor of computer science and engineering at NYU, has said the ideal approach to this problem combines education and awareness with the right business models, incentives, policies, and laws.
In conclusion, deepfakes pose a real and growing threat, but by staying informed and alert, users can avoid falling victim to malicious content. Fortunately, many companies are exploring ways to protect users, and it may only be a matter of time before detection technology works at scale.