A New York personal injury attorney could face sanctions for citing fabricated cases generated by the artificial intelligence tool ChatGPT. Steven Schwartz filed a federal court brief on behalf of his client that cited six nonexistent cases, including Varghese v. China Southern Airlines Co., Ltd., purportedly issued by the United States Court of Appeals for the Eleventh Circuit in 2019. None of the cases were real, and the quotes and internal citations they contained were fabricated.
Schwartz claimed he used ChatGPT to assist with legal research and find relevant cases. In response to a series of questions, the tool generated a lengthy analysis of whether his client's claim was untimely, along with a complete set of fake citations in support. Schwartz never independently verified that the cases existed and now faces potential sanctions, including paying a penalty to the court, covering the defendant's attorney's fees and costs, or being barred from practicing before the United States District Court for the Southern District of New York.
This is not the first time ChatGPT's fabrications have caused legal trouble. Mark Walters sued OpenAI, L.L.C., the company behind ChatGPT, after the tool named him as a defendant in its summary of a lawsuit pending in federal court in the Western District of Washington. The allegations attributed to Walters in that summary were fabricated, and he was never a party to the lawsuit.
The Schwartz and Walters cases may serve as cautionary tales for anyone using AI tools to do their homework, and they highlight the danger of relying on such technology without independently verifying its output. A hearing in Schwartz's case is scheduled for June 8.