AI Image Generators: Research Finds Alarming Percentage of Violent and Dehumanizing Content
Artificial intelligence (AI) image generators have gained immense popularity over the past year, but new research reveals how easily they can be misused. While tools such as Stable Diffusion and DALL·E can produce unique images from simple text prompts, they can also generate hateful, dehumanizing, and pornographic images with ease. This poses a significant risk, especially when such disturbing or explicit content spreads on mainstream media platforms.
The scarcity of research in this area has hindered both an understanding of the dangers and the development of preventive measures. According to Yiting Qu, a researcher at the CISPA Helmholtz Center for Information Security in Germany, the research community currently has no universal definition of what constitutes an unsafe image. To shed light on this issue, Qu and her team studied the most popular AI image generators, the prevalence of unsafe images on these platforms, and possible solutions.
The researchers fed text prompts from sources known for unsafe content, such as the far-right platform 4chan, into four prominent AI image generators. The results were alarming: 14.56% of the generated images were classified as unsafe, with Stable Diffusion producing the highest share at 18.92%. The unsafe images spanned sexually explicit, violent, disturbing, hateful, and political content.
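The paper's exact pipeline is not reproduced here, but the arithmetic behind the headline figures is straightforward: generate images from harvested prompts, label each one with a safety classifier, and compute the unsafe fraction overall and per generator. The Python sketch below illustrates that tallying step only; the GeneratedImage structure, the category labels, and the classifier producing them are illustrative assumptions, not the researchers' actual code.

```python
from dataclasses import dataclass

# Unsafe categories named in the study: sexually explicit, violent,
# disturbing, hateful, and political content.
UNSAFE_CATEGORIES = {"sexual", "violent", "disturbing", "hateful", "political"}

@dataclass
class GeneratedImage:
    generator: str  # e.g. "stable-diffusion" (name is illustrative)
    prompt: str
    label: str      # category assigned by a safety classifier, or "safe" (assumed)

def unsafe_rate(images: list[GeneratedImage]) -> float:
    """Fraction of images whose classifier label is an unsafe category."""
    if not images:
        return 0.0
    unsafe = sum(1 for img in images if img.label in UNSAFE_CATEGORIES)
    return unsafe / len(images)

def unsafe_rate_by_generator(images: list[GeneratedImage]) -> dict[str, float]:
    """Per-generator unsafe rates, like the study's cross-model comparison."""
    rates: dict[str, float] = {}
    for name in {img.generator for img in images}:
        subset = [img for img in images if img.generator == name]
        rates[name] = unsafe_rate(subset)
    return rates

# Toy usage with hand-labeled examples (illustrative data only):
batch = [
    GeneratedImage("stable-diffusion", "prompt A", "violent"),
    GeneratedImage("stable-diffusion", "prompt B", "safe"),
    GeneratedImage("dall-e", "prompt C", "safe"),
]
print(unsafe_rate(batch))               # 0.333...
print(unsafe_rate_by_generator(batch))  # e.g. {'stable-diffusion': 0.5, 'dall-e': 0.0}
```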
This study highlights the inadequacy of existing filters in preventing the creation of unsafe and harmful images. As a solution, Qu developed her own filter, which demonstrated a higher success rate. She also suggests curbing unsafe output at the source: training AI image generators exclusively on safe content, and blocking unsafe words at the prompt stage so that users cannot submit harmful prompts in the first place (see the sketch below). Furthermore, Qu emphasizes the importance of classifying and removing harmful images that have already circulated online.
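As an illustration of the prompt-side blocking Qu describes, the sketch below rejects prompts containing blocklisted terms before they ever reach the generator. The blocklist and function names are hypothetical placeholders; a production filter would pair a curated lexicon with an ML-based prompt classifier rather than a handful of literal words.

```python
import re

# Hypothetical blocklist; the placeholders stand in for a curated
# lexicon of terms associated with unsafe image prompts.
BLOCKED_TERMS = {"badword1", "badword2"}

def is_prompt_allowed(prompt: str, blocked: set[str] = BLOCKED_TERMS) -> bool:
    """Reject prompts containing any blocked term (whole-word match)."""
    tokens = re.findall(r"[a-z']+", prompt.lower())
    return not any(token in blocked for token in tokens)

# Gate the generator call behind the check:
prompt = "a quiet harbor at dawn"
if is_prompt_allowed(prompt):
    print("prompt accepted; forward to the image generator")
else:
    print("prompt rejected by the safety filter")
```

Whole-word matching avoids rejecting benign prompts that merely contain a blocked string as a substring; a deployed filter would also need to handle misspellings, synonyms, and deliberate obfuscation.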
Striking a balance between freedom of expression and content safety remains a challenge. However, Qu believes strict regulation is necessary to prevent the wide circulation of these harmful images on mainstream platforms. The problem is not limited to harmful content: makers of AI text-to-image software have also faced criticism for stealing artists' work and amplifying dangerous stereotypes related to gender and race.
Efforts like the recent AI Safety Summit in the UK aim to establish guidelines and guardrails for this technology, although critics argue that big tech companies hold too much influence over these negotiations. Regardless, current governance of AI is patchy at best, and serious problems persist.
In conclusion, the rapid rise of AI image generators brings both innovation and risks. The prevalence of violent and dehumanizing content generated by these tools demands immediate attention and better preventive measures. Stricter regulation, improved filters, and classification systems are crucial to mitigate the dissemination of harmful imagery online. As AI technology continues to advance, it is imperative to ensure its responsible and ethical use.