The U.S. Federal Trade Commission (FTC) has launched an investigation into OpenAI’s ChatGPT, a conversational AI tool, to determine whether it harms consumers. The probe will specifically examine whether ChatGPT violates consumer protection laws by inadequately safeguarding users’ data. The investigation marks the first regulatory scrutiny of a generative AI tool.
ChatGPT, which is trained on vast amounts of data drawn from the internet, has gained rapid popularity since its launch eight months ago. However, concerns have been mounting over its potential to generate false information. The FTC’s interest in the Microsoft Corp.-backed startup reflects a broader push for oversight in the field of artificial intelligence.
FTC Chair Lina Khan, who recently testified before Congress, has been a vocal critic of AI chatbots and emphasizes the importance of early vigilance in regulating artificial intelligence. While the FTC declined to comment on the ongoing investigation, OpenAI’s CEO, Sam Altman, expressed disappointment over the leak of the FTC’s document request but affirmed the company’s commitment to consumer safety and compliance with the law.
This FTC investigation comes after a series of hearings in May, during which Altman called for increased regulation and independent audits of AI systems. Altman has even suggested the establishment of a separate agency to oversee AI regulation, emphasizing the potential risks associated with the technology.
In March, the Center for Artificial Intelligence and Digital Policy urged the FTC to investigate and impose a six-month moratorium on the release of AI models like GPT, requesting the implementation of necessary safeguards to protect consumers, businesses, and the commercial marketplace.
Federal regulation of artificial intelligence has struggled to keep pace with the technology’s rapid development. Khan has raised concerns about the harm AI tools can cause consumers and about whether leading tech companies engaged in generative AI are leveraging their data advantages to disadvantage competitors.
In a recent op-ed, Khan highlighted the disruptive potential of AI while cautioning against the concentration of power in tech companies that track and sell users’ personal data. She identified collusion, monopolization, mergers, and value favoritism as areas of concern in the deployment of generative AI.
As part of its investigation, the FTC has requested documents from OpenAI related to complaints that ChatGPT has made false, misleading, or harmful statements about individuals. The regulator aims to determine whether OpenAI engaged in unfair or deceptive practices that caused reputational harm to consumers.
Leading Silicon Valley tech companies have also called for AI regulation. Google, the creator of rival chatbot Bard, has advocated a multi-layered, multi-stakeholder approach to AI governance. Microsoft, echoing Altman’s sentiments, has supported the establishment of a centralized government agency to oversee AI regulation.
The Senate Judiciary Committee recently held its first hearing on AI and copyright issues, where representatives from the music and tech industries discussed topics such as fair use and intellectual property protection.
During a congressional hearing on Thursday, Khan faced criticism from Republican lawmakers who accused her of being overly aggressive in her antitrust enforcement. Despite that opposition, the investigation into OpenAI’s ChatGPT reflects growing recognition of the need for robust regulation of artificial intelligence.