The Federal Trade Commission (FTC) is investigating OpenAI over concerns that its AI chatbot, ChatGPT, and other products may violate consumer protection laws. The investigation aims to determine whether OpenAI’s AI systems put people’s personal reputations and data at risk, and it could mark the first major confrontation between OpenAI CEO Sam Altman and the FTC, a combative consumer protection agency.
According to a 20-page letter obtained by the Washington Post, the FTC has requested that OpenAI provide a wide range of documents dating back to June 1, 2020. These include details on how OpenAI assesses risks in its AI systems and how it safeguards against the AI making false statements about real people. The FTC is particularly interested in how OpenAI trains its large language models (LLMs), such as ChatGPT, including the types of data used for training and the extent to which that data was collected from the internet through web scraping. OpenAI has been relatively secretive about the precise origins of its training data.
In addition, the FTC demands a description of any complaints alleging that OpenAI’s systems made false, misleading, disparaging, or harmful statements about individuals. Researchers and journalists have documented examples of ChatGPT and other LLMs hallucinating fabricated information. In one example reported by Gizmodo, ChatGPT inserted a radio host into an embezzlement court case he had no connection to; that host is now suing OpenAI for libel. The FTC wants to understand the measures OpenAI has taken to filter or anonymize personal information in its training data and to reduce the chances of its models generating fabricated statements about people.
Furthermore, the FTC seeks more information about OpenAI’s policies and procedures for assessing safety and risk before releasing its products to the public. In particular, the agency wants to see documents detailing the steps taken before the release of OpenAI’s systems, as well as instances where the company decided against launching an LLM due to safety concerns. Although most of the demands in the FTC letter are broad, the agency specifically highlights a March security incident in which a bug in OpenAI’s system allowed certain users to access other people’s chat logs and payment-related information. OpenAI had to briefly take ChatGPT offline to address the issue.
OpenAI and the FTC have yet to respond to Gizmodo’s request for comment on the investigation.
The FTC’s increased action has been welcomed by tech watchdog groups concerned about the rapid rollout of OpenAI’s models, such as the Tech Oversight Project. Kyle Morse, the group’s deputy executive director, called OpenAI’s history of hasty product releases reckless and irresponsible.
This investigation is expected to be OpenAI’s biggest regulatory test in the United States thus far. CEO Sam Altman has worked to assuage many of the fears surrounding AI, testifying before the Senate Judiciary subcommittee and voicing support for new regulations and standards. Altman has also sought to portray OpenAI as a responsible actor, meeting with lawmakers and signing letters warning about the risks of unchecked AI. The FTC, however, has been less receptive to this approach, stating explicitly in multiple blog posts that existing rules and regulations will be enforced even in the new AI market.
FTC chair Lina Khan made the same point in a New York Times op-ed titled “We Must Regulate AI Now,” emphasizing that existing laws apply to the AI industry.
Whether OpenAI will cooperate fully with the investigation remains to be seen, but the outcome could have significant implications for the future of AI regulation in the United States.