Outrage has erupted over the circulation of AI-generated deepfake images of singer Taylor Swift. Fans and politicians alike expressed their displeasure at the virality of these images on X, the platform formerly known as Twitter, as well as their availability on other platforms. One deepfake image of the American artist garnered a staggering 47 million views on X and remained live for approximately 17 hours before it was removed.
While deepfake images of celebrities are not a new phenomenon, activists and regulators have raised concerns about the proliferation of easy-to-use AI tools capable of generating harmful or toxic content. This incident, targeting Taylor Swift, the second most widely listened-to artist on Spotify, has shed new light on the issue and galvanized her legion of fans.
Danisha Carter, an influencer, wrote on X: "The only 'silver lining' about it happening to Taylor Swift is that she likely has enough power to get legislation passed to eliminate it. You people are sick."
Analysts highlight that X is one of the largest platforms for pornographic content globally, given its more lenient nudity policies compared to Meta-owned platforms like Facebook and Instagram. Apple and Google, the gatekeepers for online content through their app stores, have tolerated this aspect of X.
In response to the incident, X issued a statement emphasizing its strict prohibition of posting Non-Consensual Nudity (NCN) images and its zero-tolerance policy towards such content. The platform assured users that all identified images were being actively removed, with appropriate actions being taken against the responsible accounts. X also stated that it was closely monitoring the situation to promptly address any further violations.
Representatives for Taylor Swift have not yet responded to requests for comment on the incident.
Democratic congresswoman Yvette Clarke of New York, who has supported legislation against deepfakes, remarked: "What's happened to Taylor Swift is nothing new. For years, women have been targets of deepfakes without their consent. And with advancements in AI, creating deepfakes is easier & cheaper."
Republican congressman Tom Kean voiced concerns that the rapid advancement of AI technology is outpacing the establishment of necessary safeguards. He stressed the importance of protecting victims of deepfakes, whether they are celebrities like Taylor Swift or young people across the country.
Deepfake audio and video have targeted politicians and celebrities alike, but the majority involve women, often in sexually explicit and graphic content that circulates readily on the internet. The software used to create these images is widely accessible online. In the first nine months of 2023 alone, approximately 113,000 deepfake videos were uploaded to popular porn websites, according to research cited by Wired magazine.
Against this backdrop, research published by a startup in 2019 found that 96 percent of deepfake videos on the internet were pornographic.
The incident involving Taylor Swift underscores the need to address the alarming spread of deepfakes through safeguards and legislation. Stakeholders are grappling with how to stem the flood of toxic and harmful content that user-friendly AI tools have made possible.
As the discussion evolves, it remains crucial to strike a balance between curbing the dangers of deepfakes and preserving free speech and creativity.