Google, Microsoft, Meta, OpenAI, and other leading AI companies have made a significant commitment to safeguarding children online, pledging to prevent their AI tools from generating or circulating child sexual abuse material (CSAM).
The initiative, spearheaded by Thorn and All Tech Is Human, aims to protect children from exploitation through generative AI technology. It sets a new standard for the industry, making the defense of children against AI-enabled sexual abuse a baseline design requirement rather than an afterthought.
The joint effort aims to stop the creation and spread of sexually explicit material involving children on social media platforms and search engines. In 2023 alone, more than 104 million files of suspected CSAM were reported in the US to the National Center for Missing & Exploited Children. The rise of generative AI threatens to exacerbate the problem, overwhelming the law enforcement agencies tasked with identifying victims.
Thorn and All Tech Is Human have released a paper, "Safety by Design for Generative AI: Preventing Child Sexual Abuse," outlining strategies companies can adopt to keep their AI tools from harming children. Companies are advised to curate training data sets carefully, excluding any that contain CSAM or adult sexual content, since a model trained on both could combine them in its outputs. Social media platforms and search engines are also urged to remove links to websites and apps that facilitate the sharing of sexual images of children.
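For illustration, one common safeguard when curating training data is to screen files against a hash list of known-abusive material before they enter the corpus. The minimal sketch below assumes a hypothetical hash-list file containing SHA-256 digests; real deployments typically use perceptual-hash matching (such as PhotoDNA) supplied through vetted industry programmes, since exact cryptographic hashes miss re-encoded copies of the same image.

```python
import hashlib
from pathlib import Path


def load_hash_list(path: str) -> set[str]:
    """Load one lowercase hex digest per line from a hash-list file.

    The file format here is a placeholder; actual hash lists are
    distributed under strict access controls by vetted organisations.
    """
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}


def filter_dataset(image_dir: str, bad_hashes: set[str]) -> list[Path]:
    """Return only the files whose digests do NOT match the exclusion list."""
    kept: list[Path] = []
    for p in Path(image_dir).rglob("*"):
        if not p.is_file():
            continue
        digest = hashlib.sha256(p.read_bytes()).hexdigest()
        if digest in bad_hashes:
            continue  # drop exact matches to known-abusive material
        kept.append(p)
    return kept


if __name__ == "__main__":
    # "known_bad.txt" and "training_images/" are hypothetical paths.
    bad = load_hash_list("known_bad.txt")
    clean_files = filter_dataset("training_images/", bad)
    print(f"{len(clean_files)} files passed screening")
```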
The initiative seeks to stem the rise of AI-generated CSAM, which hampers efforts to identify genuine victims of child sexual abuse. By taking these proactive measures, the participating AI companies aim to mitigate technology's harms to children and raise online safety standards.