A New York lawyer, Steven Schwartz, is facing a sanctions hearing on June 8 after it was revealed that the AI program ChatGPT had been used to prepare an affidavit he submitted. The affidavit cited fabricated court decisions, complete with false quotes and false internal citations. In response, Judge Kevin Castel stated that “[t]he incident presents the court with an unprecedented situation.” Schwartz has since apologized for using the AI program, admitting that he was unaware of its potential for inaccuracy.
The rise in popularity of ChatGPT has generated both optimism and skepticism, with some questioning the program's reliability. Meanwhile, the law firm where Schwartz works, Levidow, Levidow & Oberman, said in a court filing that Peter LoDuca, another attorney at the firm, is also facing sanctions in connection with the affidavit. Bart Banino, a lawyer at Condon & Forsyth, the firm representing Avianca, told the New York Times that he noticed the fabricated cases in the affidavit and was initially skeptical that a chatbot was to blame.
Avianca is the airline being sued in the case, brought by a man who claims he was injured by a serving cart aboard one of its flights. Avianca, LoDuca, and Schwartz all declined to comment to Insider about the situation.
ChatGPT is a generative AI program whose popularity has surged in recent months. It generates text in real time from a given prompt, and its natural-language capabilities allow it to quickly produce sentences that read as if a human wrote them. Despite this potential, caution must be taken when using the program to draft legal documents, as its output may contain false information.