OpenAI, the creator of the popular chatbot ChatGPT, is currently under investigation by the Federal Trade Commission (FTC) over concerns that the AI-powered tool may have breached consumer law by damaging reputations and posing data risks. The agency has issued a 20-page demand letter to OpenAI, requesting information about how the company addresses potential risks associated with its AI models.
In particular, the FTC is interested in understanding how OpenAI deals with privacy and prompt injection attacks, as well as API and plugin integrations. The agency also wants to know what measures the company has taken to prevent its products from generating false, misleading, or disparaging information about real individuals. This is an important concern, as generative AIs like ChatGPT have often been known to produce inaccurate or misleading content.
OpenAI has faced defamation lawsuits in the past, such as when a radio host accused ChatGPT of falsely associating his name with a criminal issue. Additionally, an Australian mayor threatened legal action after the chatbot allegedly implicated him in a foreign bribery scandal. The tool has even falsely accused a US law professor of sexual assault based on a fabricated news article. These incidents raise questions about the potential for reputational harm caused by OpenAI’s technology.
The FTC is also interested in OpenAI’s data practices. The company has been criticized for its lack of transparency regarding the sources of data used to train its language models. The agency wants to determine whether the data has been scraped from the internet or purchased from third parties. Furthermore, the FTC seeks information on which websites the data has been sourced from and what steps OpenAI has taken to protect personal information during the training process.
Data scraping practices within the generative AI industry have sparked controversy recently. Elon Musk, an investor in OpenAI, threatened to sue Microsoft for allegedly illegally using Twitter data—a issue that also resulted in limitations being placed on users’ tweet views. Reddit, too, faced user backlash when it started charging for access to its API as a response to data scraping by AI companies.
The FTC’s investigation also extends to security incidents that occurred earlier this year. One incident involved a bug that exposed payment-related information, while the other incident revealed the titles of users’ chat histories. Both cases raise concerns about the protection of sensitive user data.
In response to the investigation, OpenAI CEO Sam Altman stated that the company would cooperate with the FTC. Altman emphasized that OpenAI prioritizes user privacy and designs its systems to learn about the world rather than individuals’ private lives.
The FTC has already imposed substantial fines on companies like Meta, Amazon, and Twitter for alleged violations of consumer protection laws. If OpenAI is found to have committed similar violations, it could face similar consequences—a potentially nerve-wracking prospect for other generative AI companies.
Earlier this month, Google revised its privacy policy to explicitly state its right to collect and analyze user-shared web content for AI training purposes. This move reflects the increased focus on AI-driven advancements and the need for clear guidelines and regulations to protect consumer data.