Title: OpenAI Faces FTC Investigation Over Defamation Concerns Surrounding AI-generated Content
OpenAI, the renowned developer of the ChatGPT AI assistant, is currently under investigation by the US Federal Trade Commission (FTC) for potential violations of consumer protection laws. The probe follows allegations that OpenAI’s AI models have led to reputational harm and put personal data at risk, according to reports from The Washington Post and Reuters.
The FTC recently sent OpenAI a detailed 20-page information request focused primarily on the company's risk management practices for its AI models. Specifically, the investigation seeks to determine whether OpenAI has engaged in deceptive or unfair practices that resulted in reputational harm to consumers.
The FTC is also keen on understanding how OpenAI has addressed the possibility of its products generating false, misleading, or disparaging statements about real individuals. Within the AI industry, such instances of fabrication are commonly referred to as hallucinations or confabulations.
The Washington Post speculates that the FTC's emphasis on misleading or false statements is a direct response to recent incidents involving ChatGPT. Notably, Mark Walters, a radio talk show host from Georgia, filed a defamation lawsuit against OpenAI after the AI assistant falsely accused him of embezzlement and fraud connected to the Second Amendment Foundation. In another incident, the model falsely claimed that a lawyer had made sexually suggestive comments toward a student on a class trip to Alaska, an event that never took place.
This FTC investigation presents a substantial regulatory challenge for OpenAI, a company that has generated considerable excitement, apprehension, and buzz within the tech industry since the launch of ChatGPT in November 2022. While OpenAI has dazzled the tech world with AI-powered products that many believed were years or even decades away, concerns about the risks posed by its AI models have grown.
As demand for more advanced AI models intensifies across the industry, government agencies worldwide have begun scrutinizing what happens behind the scenes. Confronted with rapidly evolving technology, regulators such as the FTC are working to apply existing rules to AI models, covering areas such as copyright, data privacy, the data used to train these models, and the content they generate.
In a bid to oversee the progress of AI technology and ensure the establishment of necessary safeguards, US Senate Majority Leader Chuck Schumer called for comprehensive legislation in June. The senator plans to hold forums dedicated to this subject later in the year.
OpenAI has two weeks from receiving the information request to schedule a call with the FTC, during which it can discuss potential modifications to the request or raise any compliance concerns.
This isn’t the first regulatory hurdle that OpenAI has faced. In March, the company encountered resistance in Italy when regulators temporarily barred ChatGPT due to allegations of breaching the European Union’s GDPR privacy regulations. OpenAI managed to reinstate the ChatGPT service by implementing age-verification features and offering European users the option to block their data from being utilized for training the AI model.
As OpenAI navigates the FTC investigation, its outcome could shape the future direction of AI regulation. Consumer protection and the mitigation of risks from AI-generated content are becoming increasingly important as AI capabilities advance at an unprecedented pace.