A machine learning dataset used to train AI image generators from Google and other companies has been taken down after it was found to contain child sexual abuse material. The nonprofit organization LAION has withdrawn its LAION-5B dataset, which is widely used by Google and other platforms, after a recent Stanford study identified 1,008 externally validated instances of child sexual abuse material (CSAM) and 3,226 suspected instances in total. The finding highlights the risks of training AI models on large datasets that have not been properly vetted.

AI-generated CSAM has been a growing concern, with attorneys general from all 50 US states urging Congress to address the problem. A tainted training dataset could facilitate the generation of new CSAM or allow existing CSAM to be used to create further harmful images.

The study, conducted by the Stanford Internet Observatory, concluded that possession of the LAION-5B dataset implies possession of thousands of illegal images, and warned that working with the dataset may unintentionally place illegal images on researchers’ computers. Disturbingly, LAION’s leadership reportedly knew about the presence of CSAM in its datasets since at least 2021.

LAION has promised to remove the offending content, but images that have not yet been discovered may remain, and the datasets may ultimately need to be discarded entirely. The use of AI to generate CSAM raises serious concerns and underscores the need for stronger measures to address the problem.