Snapchat’s My AI, DALL-E, and Stable Diffusion are among the popular AI tools that have been deemed unsafe for kids by Common Sense Media, an independent nonprofit advocacy group for families. Common Sense Media is known for providing media ratings that help parents evaluate the content their children consume. Earlier this year, the organization announced plans to add ratings for AI products to its resources for families.
The decision to assess AI products was driven by a survey in which 82% of parents said they wanted help determining whether new AI products were safe for their children, yet only 40% knew of reliable resources for making those determinations. In response, Common Sense Media launched its first AI product ratings, offering “nutrition labels” for AI tools such as chatbots and image generators.
Common Sense Media evaluated products against several AI principles, including trust, kids’ safety, privacy, transparency, accountability, learning, fairness, social connections, and benefits to society. Ten popular apps, including learning apps, AI chatbots like Bard and ChatGPT, and generative AI products such as My AI and DALL-E, were initially rated on a 5-point scale. Notably, generative AI products received the lowest ratings.
Tracy Pizzo-Frey, Senior Advisor of AI at Common Sense Media, highlighted the biases present in generative AI, which stem from models being trained on vast amounts of internet data. These include cultural, racial, socioeconomic, historical, and gender biases. Common Sense Media hopes its ratings will encourage developers to implement protections that limit the spread of misinformation and shield future generations from unintended consequences.
Snapchat pushed back against the review, emphasizing that My AI is an optional chatbot whose limitations are clearly communicated to users. Even so, Common Sense Media raised concerns that the chatbot produced responses reinforcing unfair biases and surfaced age-inappropriate content, as well as concerns about its privacy practices.
Other generative AI models like DALL-E and Stable Diffusion also posed risks, including the objectification and sexualization of women and girls and the reinforcement of gender stereotypes. Their implications extend beyond inappropriate outputs: both models have been used to create pornographic material.
In the mid-tier of Common Sense Media’s ratings were AI chatbots such as Google’s Bard and ChatGPT, as well as Toddle AI. While these bots also exhibited biases and produced inaccuracies, they were less problematic than the generative image models. AI products designed for educational purposes, such as Ello’s AI reading tutor, Khanmigo from Khan Academy, and Kyron Learning’s AI tutor, received positive reviews for their responsible AI practices, fairness, diverse representation, and transparent data privacy policies.
Common Sense Media plans to continue publishing ratings and reviews of new AI products. The organization aims to provide valuable insights to parents, families, lawmakers, and regulators, ultimately promoting responsible AI practices and safeguarding privacy and well-being.
The introduction of Common Sense Media’s AI product ratings is a significant step toward helping parents and users make informed decisions about the safety and suitability of AI tools. These ratings foster transparency and accountability in the AI industry, encouraging developers to address biases, misinformation, and privacy concerns.