OpenAI, the company behind the ChatGPT language model, is facing a defamation lawsuit filed by radio host Mark Walters. The incident occurred when a journalist, Fred Riehl, asked ChatGPT to summarize a real federal court case by providing a link to an online PDF. The response the program generated was false and misleading: it claimed that Mark Walters had defrauded and embezzled funds from a non-profit organization called the Second Amendment Foundation, and that he had pocketed $5 million, none of which was true. Riehl chose not to publish the information and instead tried to verify it with another source.
It is still unclear how Walters discovered that ChatGPT had generated these false statements about him. Language models sometimes produce false or misleading responses, commonly referred to as hallucinations. Users who are aware of these glitches often shrug them off because they rarely cause harm. In this case, however, ChatGPT's response allegedly caused real harm to Mark Walters, prompting the lawsuit.
This is not the first time such incidents have occurred. ChatGPT's incorrect responses have already led to serious consequences: in one case, a professor threatened to fail his entire class after ChatGPT claimed the students had used AI to write their essays; in another, a lawyer faced possible disbarment after relying on the program for legal research and citing cases it had fabricated.
In light of these issues, OpenAI displays a small disclaimer on ChatGPT's homepage warning users that the AI can occasionally generate false information. It is crucial that OpenAI's engineers work to curb these faulty responses, since further incidents could mean more trouble for professionals who rely on the tool, and for the company itself.
The lawsuit was filed on June 5th in the Superior Court of Gwinnett County, Georgia, with Mark Walters seeking monetary damages from OpenAI; the amount has not yet been disclosed. It remains to be seen how the case will play out, but it underscores the importance of ensuring that language models generate accurate and reliable information.