The spread of the internet and artificial intelligence tools for generating images has raised new concerns about deepfake pornography. Deepfakes are images or videos digitally manipulated using AI, and the technology has become so accessible that anyone can place someone's face onto the body of a porn actor, in video or still images, without that person's consent. This has been done to female celebrities as well as influencers, journalists and others with a public profile. Hundreds of deepfake videos now circulate on the web, and some websites even let users create pornographic images of anyone they wish.
Experts worry that misuse of the technology will worsen the harm caused by nonconsensual deepfake porn, harm that falls primarily on women. Generative AI tools, which draw on existing data from the internet to produce novel content, further facilitate the creation and spread of these videos and images.
Noelle Martin, of Perth, Australia, has experienced the problem first-hand. Ten years ago, Martin stumbled upon pornographic images of herself online, created with deepfake technology. She was horrified, but her attempts to get the images taken down were fruitless: either the sites would not respond, or the images simply reappeared. Evidence suggests the issue is not going away any time soon.
In response, a national law has been proposed in Australia that would fine companies that fail to comply with requests to remove explicit content. But because internet laws vary from nation to nation, a global solution is needed to properly address the problem.
Some AI companies have also taken the initiative to restrict explicit content. OpenAI blocks users from creating AI images of celebrities and prominent politicians, and the startup Stability AI has updated its software to prevent users from creating explicit images, following reports that users were abusing its technology for that purpose.
It is clear that more needs to be done to prevent the misuse of deepfake technology and to combat the spread of nonconsensual deepfake porn: victims see their reputations and livelihoods put at risk with no guarantee of justice. As the technology continues to develop, it is essential that measures be put in place, both culturally and legislatively, to protect people from such a violation of their rights.
OpenAI is a U.S.-based organisation working to advance artificial intelligence research and development through both policy and technology. Its work combines research and engineering to address the problems posed by AI and to create a positive impact on society, and it is committed to open, transparent communication to build trust with the public.
Noelle Martin is a 28-year-old advocate and legal researcher from Perth, Australia. A vocal opponent of deepfake porn, having experienced its devastating effects first-hand, she has devoted much of her time and energy to fighting for its removal and advocating for legislation to better protect victims from such explicit harassment. She believes an international solution is needed to truly address the problem, and she encourages people to speak out and take action.