ChatGPT, an artificial intelligence (AI) chatbot developed by OpenAI, is now under investigation by the Federal Trade Commission (FTC). The FTC is examining whether the popular service has published false information about individuals, potentially posing a legal risk. ChatGPT is known for generating human-like text using AI, but there are concerns about its accuracy and the harm that inaccurate output may cause.
The FTC’s civil subpoena to OpenAI, which was made public on Thursday, focuses on whether the company has engaged in unfair or misleading practices that could harm users’ reputations. One specific question asks OpenAI to provide details on the steps it has taken to address or mitigate the risks of generating false, misleading, or disparaging statements about real individuals.
This investigation by the FTC, under the leadership of Chair Lina Khan, marks a significant escalation in the government’s involvement in regulating emerging technologies. However, it also raises questions about the FTC’s jurisdiction in matters related to speech regulation and reputational harm. Some argue that these issues fall more within the realm of free speech and may be beyond the FTC’s authority.
OpenAI has not yet responded to requests for comment regarding the investigation. The FTC has broad authority to regulate unfair and deceptive business practices that harm consumers and to ensure fair competition. However, critics argue that the agency has occasionally overstepped that authority, as exemplified by a recent federal judge's decision to reject the FTC's attempt to block Microsoft's acquisition of Activision.
Chair Lina Khan has faced criticism for the FTC’s investigation into Twitter’s privacy protections. Republicans argue that the probe was politically motivated, driven by progressives’ dissatisfaction with Elon Musk’s influence on Twitter and his relaxation of content moderation policies. Khan maintains that the agency’s primary goal is to protect user privacy.
In its subpoena to OpenAI, the FTC also requested information on the company’s data security practices, marketing efforts, training of AI models, and handling of user data. These inquiries are part of the FTC’s broader investigation into ChatGPT and its potential risks.
Last March, the Center for Artificial Intelligence and Digital Policy filed a complaint with the FTC, alleging that ChatGPT is biased, misleading, and poses threats to privacy and public safety. The think tank argued that the software does not meet the FTC’s guidelines for AI use.
The Biden administration has initiated discussions on potential regulations for AI tools like ChatGPT. The Department of Commerce sought public input on accountability measures in April. Lawmakers, led by Senate Majority Leader Chuck Schumer, have also prioritized regulating AI in the current Congress.
Besides concerns about reputational risks, lawmakers are worried that AI tools could be misused for voter manipulation, discrimination, financial crimes, job displacement, or other forms of harm. Deepfake videos, which use AI to depict real people engaging in embarrassing actions or making false statements, have been a particular cause for concern.
However, any significant action, such as enacting new legislation, will likely take months or longer. Lawmakers must also consider the potential impact on U.S. innovation, as the competition with China to dominate the AI market intensifies.
Even the creators of ChatGPT have acknowledged the need for more government oversight in AI development. OpenAI CEO Sam Altman testified before Congress in May, urging lawmakers to establish licensing and safety standards for advanced AI systems. Altman acknowledged the potential risks associated with AI and the necessity of responsible regulation.
As the investigation into ChatGPT unfolds, the key question is how the FTC will navigate the complex landscape of AI regulation and balance the need to protect consumers against the promotion of innovation and free speech.