The Feds Investigate ChatGPT’s Hallucination Problem


The Federal Trade Commission (FTC) is investigating OpenAI over concerns that its AI chatbot, ChatGPT, and other products may violate consumer protection laws. The investigation aims to determine whether OpenAI’s AI systems put people’s personal reputations and data at risk, and it could hand OpenAI CEO Sam Altman his first major clash with the FTC, a combative consumer protection agency.

According to a 20-page letter obtained by the Washington Post, the FTC has asked OpenAI to provide a wide range of documents dating back to June 1, 2020, including details on how OpenAI assesses risks in its AI systems and how it guards against the AI making false statements about real people. The FTC is particularly interested in how OpenAI trains its large language models (LLMs), such as ChatGPT, including the types of data used for training and the extent to which that data was scraped from the internet. OpenAI has been relatively secretive about the precise origins of its training data.

In addition, the FTC demands a description of any complaints that OpenAI’s systems made false, misleading, disparaging, or harmful statements about individuals. Researchers and journalists have documented examples of ChatGPT and other LLMs hallucinating fabricated information. In one example reported by Gizmodo, ChatGPT placed a radio talk show host at the center of an embezzlement court case to which he had no connection; the host is now suing OpenAI for libel. The FTC wants to understand what measures OpenAI has taken to filter or anonymize personal information in its training data and to reduce the chances of its models generating fabricated statements about people.


Furthermore, the FTC seeks more information about OpenAI’s policies and procedures for assessing safety and risk before releasing its products to the public. In particular, the agency wants documents detailing the steps taken before the release of OpenAI’s systems, along with any instances where the company decided against launching an LLM over safety concerns. Although most of the demands in the FTC letter are broad, the agency specifically highlights a March security incident in which a bug in OpenAI’s system allowed certain users to see other people’s chat logs and payment-related information. OpenAI briefly took ChatGPT offline to address the issue.

OpenAI and the FTC have yet to respond to Gizmodo’s request for comment on the investigation.

The increased action from the FTC has been welcomed by tech watchdog groups concerned about the rapid rollout of OpenAI’s models, such as the Tech Oversight Project. Kyle Morse, the group’s deputy executive director, called OpenAI’s history of hasty product releases reckless and irresponsible.

This investigation is expected to be OpenAI’s biggest regulatory test in the United States so far. CEO Sam Altman has worked to address many of the fears surrounding AI, testifying before a Senate Judiciary subcommittee and voicing support for new regulations and standards. Altman has made efforts to portray OpenAI as a responsible actor, meeting with lawmakers and signing letters warning of the potential risks of unchecked AI. The FTC, however, has been less receptive to Altman’s approach, stating explicitly in multiple blog posts that existing rules and regulations will be enforced, even in the new market for AI.


FTC chair Lina Khan, in a New York Times op-ed titled “We Must Regulate AI Now,” emphasized the need to regulate AI and enforce existing laws.

While OpenAI’s cooperation with the investigation remains to be seen, the outcome could have significant implications for the future of AI regulation in the United States.

Frequently Asked Questions (FAQs) Related to the Above News

What is OpenAI being investigated for by the FTC?

OpenAI is being investigated by the Federal Trade Commission (FTC) over concerns that its AI chatbot, ChatGPT, and other products may be violating consumer protection laws. The investigation aims to determine whether OpenAI's AI systems put people's personal reputations and data at risk.

What documents has the FTC requested from OpenAI?

The FTC has asked OpenAI to provide a wide range of documents dating back to June 1, 2020. These include details on how OpenAI assesses risks in its AI systems, how it guards against the AI making false statements about real people, information about the training of its large language models (LLMs), and the extent to which training data was scraped from the internet.

Why is OpenAI's data collection approach a concern?

The FTC is particularly interested in understanding the way OpenAI trains its large language models (LLMs), such as ChatGPT, including the types of data used for training and the extent to which data was collected from the internet through web scraping. OpenAI has been relatively secretive about the precise origins of the data used to train its models, raising concerns about potential biases and inaccuracies in the training data.

What concerns have been raised about ChatGPT's accuracy?

Researchers and journalists have documented examples of ChatGPT and other LLMs hallucinating fabricated information. In one example reported by Gizmodo, ChatGPT placed a radio talk show host at the center of an embezzlement court case to which he had no connection. Such instances of fabricated or misleading output have raised concerns about the accuracy and reliability of OpenAI's AI systems.

What specific incident did the FTC highlight in its letter?

The FTC specifically highlighted a March security incident in which a bug in OpenAI's system allowed certain users to see other people's chat logs and payment-related information. OpenAI briefly took ChatGPT offline to address the issue, which underscored the agency's interest in the security and privacy of users' data.

How has OpenAI responded to the investigation?

OpenAI and the FTC have yet to respond to media requests for comments on the investigation. OpenAI's cooperation with the investigation and its response to the concerns raised by the FTC remain to be seen.

What implications could this investigation have for AI regulation in the United States?

The outcome of this investigation could have significant implications for the future of AI regulation in the United States. It will test the reach of consumer protection laws in relation to AI systems and the responsibility of AI developers like OpenAI. Depending on the findings, it may lead to increased scrutiny and regulations surrounding the development and deployment of AI technologies in the country.


Aniket Patel
Aniket is a skilled writer at ChatGPT Global News, contributing to the ChatGPT News category. With a passion for exploring the diverse applications of ChatGPT, Aniket brings informative and engaging content to our readers. His articles cover a wide range of topics, showcasing the versatility and impact of ChatGPT in various domains.
