Two lawyers and their firm have been fined a joint total of $5,000 and ordered to notify the judges who were falsely identified as authors of fake cases generated by the artificial intelligence (AI) tool ChatGPT. The ruling is among the first major sanctions imposed for the misuse of AI in the legal field. The New York-based law firm, Levidow, Levidow & Oberman, P.C., said it intends to comply with the court’s order but that it “respectfully disagrees” that anyone at the firm “acted in bad faith.” In a statement to Forbes, the firm continued, “We made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth.”
The lawyers in question, Steven Schwartz and Peter LoDuca, had cited six cases, which they believed to be genuine precedents, in support of a lawsuit against Avianca Airlines. After opposing counsel questioned the authenticity of the case documents, Schwartz went back to ChatGPT, which produced fabricated excerpts of the purported decisions that were then submitted to the court. P. Kevin Castel, the presiding judge, imposed sanctions on the lawyers and separately threw out the lawsuit against Avianca Airlines, ruling that it had been filed too late.
The use of AI in law remains a complex issue, and this is one of the first cases in which lawyers have been sanctioned so heavily for misusing the technology. Schwartz admitted to using ChatGPT but maintained that he had acted in good faith and had no intention of deceiving the court. The court found that the cited cases did not exist and contained fabricated judicial decisions, quotes, and internal citations.
As technology continues to evolve and reshape industries, ethical and responsible use of it remains essential. This case serves as a reminder that AI cannot be relied on to conduct legal research unchecked. Lawyers remain responsible for ensuring that their research and evidence are genuine, or they risk serious consequences, as this case shows.