OpenAI, the artificial intelligence startup behind ChatGPT, is under investigation by the US Federal Trade Commission (FTC). The probe centers on allegations that the company's data collection practices and its products' dissemination of false information have harmed consumers.
The FTC has sent a detailed 20-page letter to OpenAI requesting information on its AI training models, data privacy practices, and security measures. The probe aims to determine whether OpenAI has engaged in unfair or deceptive practices regarding privacy, data security, and risks of harm to consumers.
This investigation marks OpenAI’s first major regulatory hurdle in the United States and reflects the growing attention being paid to AI technologies as they become more widely used across industries.
OpenAI’s CEO, Sam Altman, has previously acknowledged the need for regulatory oversight in the fast-growing AI industry. Altman has actively participated in Congressional hearings and collaborated with lawmakers to shape AI policy.
Concerns regarding accuracy and privacy have been raised not only about OpenAI’s ChatGPT but also about other AI systems like Microsoft’s Bing, Google’s Bard, and Baidu’s Ernie. These issues have come to the forefront of regulatory discussions.
OpenAI has also faced regulatory scrutiny overseas. In March, Italy's data protection authority banned ChatGPT, citing unlawful collection of personal data and the lack of an age-verification system. Access was restored after OpenAI implemented the required changes.
FTC Chair Lina Khan has advocated regulating tech companies early in their development to address concerns about the kind of data being fed into AI systems. Khan has cited reports of sensitive information appearing in AI outputs.
The investigation by the FTC may prompt OpenAI to disclose more information about its AI training and data sourcing methodologies. While OpenAI has previously been transparent about this information, recent concerns about competition and legal disputes have led to reduced disclosure.
Chatbots like ChatGPT have significantly changed how software is developed and used. These AI systems possess impressive capabilities, such as answering complex questions and generating creative content. However, their outputs can blend fact and fiction, a phenomenon known as hallucination, which has raised concerns.
The Center for AI and Digital Policy, an advocacy group for ethical technology use, has previously urged the FTC to stop OpenAI from releasing new commercial versions of ChatGPT, citing bias, disinformation, and security concerns. OpenAI has been working to improve ChatGPT and reduce instances of biased or harmful output through reinforcement learning.
The FTC investigation is expected to last several months and may involve depositions of OpenAI executives. However, some experts, like former FTC staff member Megan Gray, are skeptical of the agency’s technical capacity to fully evaluate OpenAI’s practices.
The outcome of the investigation and its consequences remain to be seen. As the AI landscape continues to expand, regulatory scrutiny will play a crucial role in shaping the industry and protecting consumer interests. OpenAI's case highlights the need for responsible AI development and deployment to mitigate potential harm and ensure transparency.