Children at Risk: AI Chatbots on Popular Apps Pose Misinformation and Privacy Threats

Ever since the launch of ChatGPT in late 2022, companies have been racing to deploy their own generative AI tools, sometimes integrating them into existing products used by children and teens. For example, the experimental integration of an AI chatbot into Snapchat, a messenger app popular with teens that has just been issued a preliminary enforcement notice by the U.K. Information Commissioner, exposes over 109 million children between the ages of 13 and 17 to this chatbot daily. Moreover, in the free version of the app, the AI chatbot is, by default, the first friend in everyone’s conversation list.

As such, these children and teens inadvertently become the test subjects of technologies whose risks haven’t been fully studied and understood, let alone mitigated. Building on my prior article, which focused on plagiarism and cyberbullying, I explore the risks of mis- and disinformation and age-inappropriate advice, what the tech industry can do to address them, and why this matters from a privacy regulation perspective.

Three characteristics of generative AI increase the potential for harm arising from mis- and disinformation. The first is the ease and remarkable efficiency of content creation. The second is the polished, authoritative-sounding form of the output, which reads the same whether ChatGPT has played fast and loose with reality or has been entirely accurate.

Third, generative AI can appear human, form emotional connections, and become a trusted friend in a way a conventional search engine never could. ChatGPT’s output reads conspicuously human in its conversational style because it mimics the input on which it was trained, which includes chat histories from Reddit, fictional conversations from books, and who knows what else. In combination, these three characteristics may significantly increase the likelihood that ChatGPT’s output is taken for sound information.


Here’s what the tech industry can do to protect against mis- and disinformation:

Real-Time Fact-Checking and Grounding: A viable solution for enhancing the trustworthiness of generative AI could be the development of models that incorporate real-time fact-checking and grounding. Grounding in this context refers to anchoring AI-generated information to validated and credible data sources. The goal is to provide real-time credibility assessments alongside the generated content, thereby minimizing the spread of misinformation. The implementation could work as follows: as soon as the AI system generates or receives information, it cross-references the content against an array of reliable databases, trusted news organizations, or curated fact repositories to confirm its accuracy.
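To make the cross-referencing step concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: query_fact_repository stands in for whatever trusted databases or curated fact repositories a vendor would actually query, and splitting claims on sentence boundaries is a deliberate simplification of real claim extraction.

```python
from dataclasses import dataclass

# Hypothetical stand-in for a lookup against trusted databases, news archives,
# or curated fact repositories; a real system would call those services here.
TRUSTED_FACTS = {
    "water boils at 100 degrees celsius at sea level": ["https://example.org/boiling-point"],
}

@dataclass
class CredibilityAssessment:
    claim: str
    supported: bool      # True if at least one trusted source backs the claim
    sources: list[str]   # citations to show alongside the generated content

def query_fact_repository(claim: str) -> list[str]:
    """Return URLs of trusted sources matching the claim (stubbed here)."""
    return TRUSTED_FACTS.get(claim.lower(), [])

def ground_response(generated_text: str) -> list[CredibilityAssessment]:
    """Cross-reference each claim in the model output against trusted sources,
    producing a per-claim credibility assessment to display with the text."""
    assessments = []
    # Naive claim splitting on sentences; real systems use claim-extraction models.
    for claim in (s.strip() for s in generated_text.split(".") if s.strip()):
        sources = query_fact_repository(claim)
        assessments.append(CredibilityAssessment(claim, bool(sources), sources))
    return assessments
```

A chat frontend could then render each assessment next to the reply, flagging unsupported claims instead of silently presenting them as fact.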

Transparency Labels: Similar to food labeling, tech companies could implement tags that signify the nature of generated content, such as “AI-Generated Advice” or “Unverified Information”. This could counteract the impression of dealing with a human and encourage increased scrutiny.
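As a rough illustration of the idea, a label could be attached wherever chatbot output is rendered; the label names below mirror the examples above but are otherwise illustrative, not any existing standard:

```python
from enum import Enum

class ContentLabel(str, Enum):
    AI_GENERATED_ADVICE = "AI-Generated Advice"
    UNVERIFIED_INFORMATION = "Unverified Information"

def label_response(text: str, label: ContentLabel) -> str:
    """Prefix the reply with a visible tag so users know what they are reading."""
    return f"[{label.value}] {text}"

# Prints: [Unverified Information] Chamomile tea may help you sleep.
print(label_response("Chamomile tea may help you sleep.", ContentLabel.UNVERIFIED_INFORMATION))
```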

As Aza Raskin, co-founder of the Center for Humane Technology, and others have demonstrated, even when a chatbot is informed that its conversation partner is underage, this information can quickly be disregarded, and conversations can range from advice on how to hide the smell of alcohol and marijuana to tips on how to conceal a forbidden app from the user’s parents.

The tech industry could respond to the risk of age-inappropriate advice by implementing effective age-verification tools. Currently, although OpenAI does limit access to ChatGPT to users over the age of 18, it takes only a (potentially fake) birthdate and access to an active phone number to set up an account. In fact, this was one of the reasons for the temporary ban of ChatGPT in Italy back in April.

Here’s how that can be done better (a combined sketch in Python follows the list):

Multifactor Authentication: In addition to a birthdate, a more secure system could employ two or three additional verification steps, such as privacy-preserving facial recognition or legal documentation checks.


Parental Approval: The system could directly link a child’s account to a parent’s, allowing the parent to control what the child can access, which could add an extra layer of safety.

Dynamic Age-Gating: The technology could be adapted to offer differing levels of access depending on the verified age of the user. Content could be filtered or modified according to the age category, offering a more nuanced interaction.

Frequent Re-Verification: Instead of a one-time verification process, the system could be designed to frequently re-verify the user’s age.
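To show how these pieces might fit together, here is a minimal sketch combining dynamic age-gating, a parental link, and periodic re-verification. The content tiers, field names, and 30-day cadence are assumptions made for illustration, not anything a platform currently prescribes:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

REVERIFY_EVERY = timedelta(days=30)  # assumed cadence for re-verification

@dataclass
class Account:
    verified_age: int
    last_verified: datetime
    parent_account_id: str | None = None  # set when a parent account is linked

def access_level(account: Account) -> str:
    """Map a verified age onto a content tier (dynamic age-gating)."""
    if account.verified_age < 13:
        return "blocked"  # below an assumed minimum age for the service
    if account.verified_age < 18:
        # Minors get a filtered tier, and only once a parent account is linked.
        return "filtered" if account.parent_account_id else "pending_parental_approval"
    return "full"

def needs_reverification(account: Account, now: datetime) -> bool:
    """Frequent re-verification instead of a one-time check at sign-up."""
    return now - account.last_verified > REVERIFY_EVERY
```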

Note that Utah has recently passed legislation requiring social media companies to implement age verification. The law will come into effect in March 2024.

For many digital services provided directly to children, the consent of minors is only valid if given or authorized by the holder of parental responsibility, as per Art. 8 of the GDPR. Other privacy laws are stricter still and require parental approval for any processing of children’s personal data, such as section 14 of Quebec’s Law 25.

In practice, this consent requirement may be hard to implement, as it isn’t always obvious whether personal data pertains to children. This applies both to the data originally scraped from the internet to train ChatGPT and to the data provided in prompts through a registered OpenAI account.

These regulatory requirements and the difficulties of obtaining valid consent from children underscore the need for technological solutions that prevent the collection of children’s personal information and protect children from the risks that arise from interacting with AI.

The concerns raised in my previous article and in this one aren’t new, but they are addressable, and addressing them is critical: as both articles have shown, these risks are exacerbated when children use generative AI tools such as ChatGPT.

Frequently Asked Questions (FAQs) Related to the Above News

What is the main concern addressed in this article?

The main concern addressed in this article is the potential misinformation and privacy threats posed by AI chatbots, specifically in popular apps used by children and teenagers.

How are children and teens being exposed to AI chatbots?

Companies are integrating AI chatbots, such as ChatGPT, into existing products used by children and teens. For example, Snapchat has integrated an AI chatbot into its messenger app, exposing over 109 million children between the ages of 13 and 17 to the chatbot daily. In the free version of the app, the AI chatbot is the default first friend in everyone's conversation list.

What are the three characteristics of generative AI that contribute to the issue of mis- and disinformation?

The three characteristics of generative AI are the ease and efficiency of content creation, the authoritative-sounding form of the output, and the ability to appear human and form emotional connections.

What can the tech industry do to address mis- and disinformation?

The tech industry can implement real-time fact-checking and grounding, where AI-generated information is cross-referenced against validated and credible sources. Transparency labels can also be used to signify the nature of generated content and encourage increased scrutiny. Additionally, effective age-verification tools and parental approval systems can be implemented to protect children from age-inappropriate advice.

What are some potential technological solutions to protect children's privacy and prevent the collection of personal information?

Some potential technological solutions include multifactor authentication, parental approval systems, dynamic age-gating, and frequent re-verification of a user's age. These measures can help ensure that personal information is not collected from children without proper consent and protect children from the risks associated with AI interaction.

What are some existing privacy laws and regulations regarding children's personal data?

Consent from a parental authority is required for the processing of children's personal data according to Article 8 of the GDPR. In some cases, such as Quebec's Law 25, parental approval is required for any processing of children's personal data. Utah has also passed legislation requiring social media companies to implement age verification. These laws highlight the importance of implementing technological solutions to protect children's privacy.
