President Joe Biden’s administration is urging tech companies and financial institutions to take action against the growing market for sexually abusive artificial intelligence (AI) deepfake images. Generative AI tools make it easy to graft a person’s likeness onto sexually explicit content, and the resulting images are then shared across online platforms, posing a significant threat to victims’ privacy and well-being.
The White House is calling on companies to voluntarily commit to specific measures aimed at curbing the creation, dissemination, and monetization of nonconsensual AI-generated images, particularly explicit content involving children. By encouraging the private sector to step up and address this issue, officials hope to prevent further harm to individuals, especially women and girls who are disproportionately targeted by such abusive imagery.
A document shared with The Associated Press outlines a series of actions for AI developers, payment processors, cloud computing providers, search engines, and major app stores operated by companies such as Apple and Google. It emphasizes the importance of disrupting the monetization of image-based sexual abuse, cutting off payment access to sites that promote explicit images of minors, and removing such content from online platforms.
One of the most notable victims of pornographic deepfake images is singer-songwriter Taylor Swift, whose fans rallied against the spread of AI-generated abusive content earlier this year. Additionally, schools have been grappling with the issue of AI-generated deepfake images of students, highlighting the urgent need to address this growing problem.
While the Biden administration has previously secured voluntary commitments from tech giants to strengthen AI safeguards, officials acknowledge that legislative action is necessary to fully address the issue. Efforts to combat the spread of AI-generated child abuse imagery also face challenges from the wide availability of open-source AI models and the lack of oversight of the tools used to create such content.
As the use of generative AI technology continues to evolve, it is essential for companies and policymakers to work together to protect individuals from the harms of nonconsensual AI-generated images. By implementing comprehensive measures and enforcing stricter regulations, the private sector can play a crucial role in safeguarding against the exploitation of individuals through abusive AI deepfakes.