AI-Generated Child Abuse Images Pose Growing Threat to Online Safety
The rise of artificial intelligence (AI) is fundamentally changing how humans interact with computers, and it is also creating new threats to online safety. A recent report by the Internet Watch Foundation (IWF), a non-profit organization dedicated to removing child sexual abuse material (CSAM) from the internet, highlights a concerning trend: a surge in AI-generated CSAM.
During a one-month period, IWF analysts discovered an alarming 20,254 AI-generated images on a single CSAM forum on the dark web. Of these, 11,108 were deemed potentially criminal and were assessed by 12 IWF specialists, who collectively spent 87.5 hours scrutinizing the content.
Assessment determined that these criminal images violated either the Protection of Children Act 1978 or the Coroners and Justice Act 2009 in the UK. Specifically, 2,562 images were classified as criminal pseudo-photographs, while 416 were classified as criminal prohibited images.
The disturbing reality is that this explicit content can be generated using unrestricted, open-source text-to-image systems: merely typing a description can produce realistic images that are almost indistinguishable from genuine photographs. AI-generated CSAM still represents a relatively small portion of the IWF's caseload, but this rapidly evolving technology has the potential for exponential growth.
The IWF report also underlines the increasing realism of AI-generated CSAM, which presents immense challenges for both the IWF and law enforcement agencies. There is evidence that this material contributes to the re-victimization of known abuse victims and is being used to create imagery of celebrity children, and that offenders are exploiting the technology to profit from child abuse.
Recognizing the urgency of addressing this escalating problem, the IWF puts forward several recommendations for governments, law enforcement agencies, and technology companies:
1. Promoting international coordination on content handling.
2. Reviewing online content removal laws.
3. Updating police training to include AI-generated CSAM.
4. Implementing regulatory oversight of AI models.
5. Ensuring that companies developing and deploying generative AI and large language models (LLMs) explicitly prohibit the generation of CSAM in their terms of service.
Failure to curb the growth of AI-generated CSAM poses a significant threat to the IWF's mission to eliminate child sexual abuse material from the internet. As the technology advances, the realism of AI-generated images will only increase, potentially leading to a rise in child abuse incidents.
The issue extends beyond national borders: the National Center for Missing & Exploited Children in the United States also reports a sharp increase in AI-generated abuse images, which not only complicate investigations but also hinder the identification of victims.
Pedophile forums have become hotbeds for sharing instructions on using open-source models to generate these distressing images. While child advocates and US justice officials maintain that these actions are punishable by law, there have been no definitive court rulings on classification or sentencing for this emerging issue.
The alarming rise in AI-generated child abuse images is a sobering reminder of the dark side of technological advancements. It is crucial for stakeholders across the globe to come together and implement robust measures to combat this evolving threat. Only by prioritizing the safety and protection of children can we hope to safeguard the online world for future generations.