Mark Walters, a radio host from Georgia, is suing OpenAI, the developer of ChatGPT, after its chatbot published false information accusing him of embezzling money from a non-profit. The incident occurred when Fred Riehl, the editor-in-chief of a gun website, asked OpenAI's AI-powered chatbot to summarize the court case The Second Amendment Foundation v. Robert Ferguson. The chatbot allegedly produced a summary that named Walters directly, accusing him of embezzling money from the non-profit foundation. Walters says he has never worked for the foundation and has never been affiliated with it in any way.
Walters filed a lawsuit on June 5th in Gwinnett County Superior Court, claiming that OpenAI was negligent in publishing libelous material about him. OpenAI's chatbot not only implicated Walters in the case but also fabricated legal details of a complaint supposedly lodged against him; none of those details turned out to be accurate. Walters is seeking damages for harm to his reputation, but UCLA Law School professor Eugene Volokh believes there is little evidence of actual harm, and that the plaintiff would also need to prove that OpenAI acted with actual malice.
This suit highlights a well-known pitfall of AI technology, the tendency of chatbots to "hallucinate" false information, and underscores the pressure on their makers to fix it. While the case raises questions about who is liable for AI-generated content, it also suggests that the proliferation of chatbots and other language models is likely to lead to further legal action. Whether or not Walters prevails, lawsuits like this signal an emerging expectation that AI-generated material must be treated with care, as the consequences can be far-reaching.