Sexually explicit deepfake images of Taylor Swift have been spreading on social media, sparking outrage among fans and renewing calls for action against nonconsensual AI-generated imagery. The abusive fake images quickly gained traction on the social media platform X, prompting Swift's devoted fanbase, known as Swifties, to fight back by flooding the platform with positive images of the singer under the hashtag #ProtectTaylorSwift. Some fans also reported accounts that were sharing the deepfakes.
Reality Defender, a group that detects deepfakes, reported a significant surge in nonconsensual pornographic material featuring Swift, particularly on X. The images also spread to Meta-owned platforms such as Facebook and to other social media sites, reaching millions of users before some of them were taken down.
Experts and researchers have observed a growing number of explicit deepfakes in recent years as the technology used to create them has become more accessible and easier to use. A 2019 report by the AI firm DeepTrace Labs found that these deepfakes predominantly targeted women, including Hollywood actors and South Korean K-pop singers.
Brittany Spanos, a senior writer at Rolling Stone who teaches a course on Swift at New York University, says Swift's fans are quick to mobilize in support of the artist, particularly when they believe she has been wronged. Spanos believes that any legal action Swift pursued would carry significant weight. She points to Swift's 2017 lawsuit against a radio DJ who allegedly groped her, in which a jury awarded her $1 in damages, a symbolic sum that resonated with women during the #MeToo movement.
In response to the circulating fake images, X said it strictly prohibits the sharing of nonconsensual nude images on its platform and was actively removing the identified images and taking action against the accounts responsible for posting them. X has nonetheless faced criticism for sharply reducing its content-moderation teams since Elon Musk took over the platform in 2022. Meta condemned the explicit content and said it was working to remove it from its platforms.
Reality Defender researchers identified at least a couple dozen unique AI-generated images depicting Swift. The most widely shared were football-themed, showing a painted or bloodied Swift in ways that objectified her and, in some cases, depicted violent harm being inflicted on her likeness.
The researchers believe the images were most likely created using diffusion models, a type of generative artificial intelligence that can produce photorealistic images from written prompts. Mason Allen, Reality Defender's head of growth, said the group did not try to determine the images' provenance. OpenAI, which develops DALL-E, one of the best-known diffusion models, says it has safeguards in place to prevent the generation of harmful content.
Microsoft, which offers an image generator based partly on DALL-E, has launched an investigation into whether its tool was misused. Like other commercial AI services, the company prohibits the production of adult or nonconsensual intimate content. Microsoft CEO Satya Nadella acknowledged the need for stronger AI guardrails and said the company was determined to act quickly.
The incident has prompted federal lawmakers to push for stronger protections against deepfake porn. U.S. Representatives Yvette D. Clarke and Joe Morelle, both Democrats, have introduced bills that would tighten restrictions and criminalize the sharing of deepfake porn online. They argue that deepfakes have become increasingly pervasive and disproportionately harm women.
The circulation of explicit deepfake images of Swift has sparked outrage and renewed calls for action. It underscores how accessible deepfake technology has become and the urgent need for stronger safeguards to protect the dignity and privacy of individuals, particularly women, from its harmful consequences.