Deepfake Porn: A Growing Concern in the AI Race

The spread of AI-generated imagery across the internet has raised new concerns about deepfake pornography. Deepfakes are images or videos digitally manipulated using AI, and the tools to create them have become increasingly accessible, allowing anyone to place another person's face onto the body of a porn performer without that person's consent. Female celebrities, influencers, journalists and others with a public profile have all been targeted. Hundreds of deepfake videos now circulate on the web, and some websites even let users generate pornographic images of anyone they wish.

Experts worry that misuse of this technology will deepen the harm, borne primarily by women, caused by nonconsensual deepfake porn. Generative AI tools, which draw on existing data from the internet to produce novel content, make such videos and images even easier to create and spread.

Noelle Martin, of Perth, Australia, has experienced the problem first-hand. Ten years ago she stumbled upon pornographic images of herself online, created using deepfake technology. She was horrified, but her attempts to have the images taken down were fruitless: either the sites did not respond, or the images simply reappeared. Evidence suggests the issue is not going away any time soon.

In response, legislation has been proposed in Australia for a national law that would fine companies failing to comply with requests to remove explicit content. But because internet laws vary from nation to nation, a global solution is needed to address the problem properly.


Additionally, some AI companies have taken the initiative to curb explicit content. OpenAI blocks users from creating AI images of celebrities and prominent politicians, and the startup Stability AI has updated its tools to prevent users from generating explicit images, responding to reports that its technology was being abused for that purpose.

It is clear that more must be done to prevent the misuse of deepfake technology and to combat the spread of nonconsensual deepfake porn: victims see their reputations and livelihoods put at risk with no guarantee of justice. As the technology develops, it is essential that measures, both cultural and legislative, are put in place to protect people from such a violation of their rights.

OpenAI is a U.S.-based organisation working to advance artificial intelligence research and development through both policy and technology. Its work combines research and innovation to address problems posed by AI and to create a positive impact on society, and it is committed to open and transparent communication to build trust with the public.

Noelle Martin is a 28-year-old advocate and legal researcher from Perth, Australia. A vocal opponent of deepfake porn, having experienced its devastating effects first-hand, she has devoted much of her time and energy to fighting for its removal and advocating for legislation to better protect victims from such explicit harassment. She believes an international solution is needed to truly address the problem, and she encourages people to speak out and take action.

