The Federal Trade Commission (FTC) is investigating OpenAI, the San Francisco-based company behind the AI chatbot ChatGPT, over concerns that the technology may threaten consumer privacy and disseminate false information. The investigation, first reported by The Washington Post, opened with a 20-page letter the FTC sent to OpenAI.
The FTC’s investigation aims to determine whether OpenAI has engaged in unfair or deceptive privacy practices involving its large language models (LLMs), the technology that powers ChatGPT, potentially causing reputational harm to consumers. The 20-page letter contains 49 questions, many touching on confidential matters. OpenAI is required to disclose what data was used to train ChatGPT, how that data was acquired, and what measures the company takes to keep users’ private information from being compromised.
The FTC is particularly interested in a software bug that caused ChatGPT’s website to briefly expose some users’ conversation histories, along with the payment details of paid subscribers. OpenAI has also been instructed to suspend any routine document-destruction procedures. The FTC will further assess whether monetary relief would be in the public interest.
The FTC declined to comment on the matter, and OpenAI has not yet responded to requests for comment. The investigation follows a series of lawsuits filed against chatbot developers over privacy and defamation concerns.
ChatGPT can provide accurate answers on a wide range of topics, but it is also known to “hallucinate,” confidently generating incorrect responses that can mislead users. Concerns have likewise been raised about training and improving chatbots like ChatGPT on data scraped from the web, including people’s social media posts collected without explicit consent. The investigation could therefore shed light on OpenAI’s internal processes and subject the company to increased regulatory scrutiny.
Enza Iannopollo, an analyst at the research firm Forrester, has highlighted the lack of regulation governing LLMs, cautioning that the risks of privacy abuses and harm to individuals will persist as long as these models remain opaque and heavily reliant on scraped data for training.
As the investigation progresses, it is increasingly apparent that AI language models like ChatGPT pose challenges that, if left unaddressed, could lead to regulatory fines, further investigations, and harm to users. Addressing these concerns will be crucial to mitigating those risks and ensuring the responsible, ethical use of AI.