OpenAI, the company behind the popular AI chatbot ChatGPT, is facing some serious issues. The company has been accused of a lack of transparency in how it uses personal data and is under scrutiny in Europe ahead of the EU's upcoming AI Act. On top of that, it has now been hit with its first defamation lawsuit.
The lawsuit was filed by Mark Walters, a radio host from Georgia, who alleges that ChatGPT falsely claimed he had been accused of fraud and misappropriation of funds from a non-profit organization. The fabricated allegations were generated in response to a request from journalist Fred Riehl, who had asked ChatGPT to summarize an ongoing federal court case that did not involve Walters.
In its response, the AI mixed real details from the case with invented ones, including the false accusations against Walters. Walters' complaint states that the accusations were false and malicious, made with the intent to harm his reputation and expose him to public hatred, contempt, or ridicule.
It is currently unclear whether the case can proceed under U.S. law. Section 230 of the Communications Decency Act shields internet companies from liability for content produced by third parties and hosted on their platforms, but it remains an open question whether text generated by an AI model counts as third-party content or as the company's own speech.
It is important to note that AI language models like the one behind ChatGPT are, in essence, sophisticated parrots: they generate responses by predicting plausible-sounding continuations based on patterns in their training data. When they lack relevant information or misinterpret a request, they can confidently produce false statements, a failure mode often called "hallucination," which is exactly what appears to have happened here.
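To see why this happens, consider a toy sketch of next-token sampling, the basic mechanism behind these models. The vocabulary and probabilities below are invented for illustration; this is not OpenAI's implementation, only the general idea that the model samples a statistically plausible continuation with no built-in fact-checking step:

```python
import random

def sample_next_token(distribution):
    # A language model assigns a probability to each candidate
    # continuation and samples one at random, weighted by probability.
    tokens, weights = zip(*distribution.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# After a prompt like "...was accused of", the model favors whatever
# continuations were common in its training data, regardless of whether
# they are true of the specific person being asked about.
# These tokens and weights are made up for the example.
distribution = {"fraud": 0.40, "negligence": 0.25, "theft": 0.20, "nothing": 0.15}
print(sample_next_token(distribution))
```

Nothing in that loop consults a source of truth: a fluent but false accusation and an accurate summary are produced by exactly the same process.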
If chatbots are employed for real tasks, the information they provide must be verified against primary sources before it is used or published. This incident serves as a reminder to exercise caution with AI applications and to confirm that their output is accurate.
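One low-tech safeguard is to cross-check every concrete claim in a model's output against the underlying document before relying on it. The sketch below uses a crude substring match and invented example data; it is a hypothetical illustration of the principle, not a production-ready fact-checker, and real verification still requires a human reading the primary source:

```python
def flag_unsupported_claims(claims, source_text):
    # Flag any claim whose text never appears in the primary source.
    # A substring check is deliberately crude, but it captures the rule:
    # never publish model output that cannot be traced back to a source.
    return [c for c in claims if c.lower() not in source_text.lower()]

# Illustrative data only; not the actual court filing or ChatGPT output.
source_text = "Complaint concerning a non-profit's financial records."
chatbot_claims = ["Walters misappropriated funds", "financial records"]
print(flag_unsupported_claims(chatbot_claims, source_text))
# -> ['Walters misappropriated funds']
```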
In conclusion, the lawsuit serves as a warning that AI language models like ChatGPT are not infallible and that their mistakes can have serious consequences. As AI becomes more capable and more widely used, holding these systems to higher standards and regulating their use will be essential to prevent incidents like this one.