ChatGPT, an AI chatbot created by OpenAI, has caused problems for a US lawyer who now faces possible sanctions in a case. Lawyer Steven A Schwartz was representing a man suing an airline when he used the AI chatbot for his research and cited six bogus judicial decisions. The opposing counsel flagged the citations, and Judge Kevin Castel confirmed that the decisions did not exist. Schwartz admitted that he had believed the citations were legitimate because, when questioned, ChatGPT apologized for its earlier confusion and insisted the case in question was real. The chatbot also maintained that the other cases it had supplied were genuine.
OpenAI is a research laboratory based in San Francisco, California, devoted to advancing machine learning research, with a range of projects aimed at improving the AI capabilities of machines and robotics. ChatGPT, one of OpenAI's products, has caused problems before: it falsely named an innocent US law professor, Jonathan Turley, among legal scholars who had allegedly sexually harassed students.
Schwartz expressed deep regret for using ChatGPT to supplement his research. He promised that he would never again use the AI chatbot without verifying the authenticity of its sources. As Judge Castel considers sanctions, the case serves as an important reminder for lawyers and other legal professionals to thoroughly check all sources when drafting documents. Furthermore, ethical considerations should be taken into account when using any AI technology in legal practice.