The New York Times has reported on a lawsuit in which a lawyer relied on the artificial intelligence program ChatGPT to write a legal brief, only to discover that the program was unreliable. The brief cited non-existent court cases and included fabricated quotes, leading Judge P. Kevin Castel to schedule a hearing to determine whether to impose sanctions on the lawyer. The case, which the judge described as an unprecedented circumstance, highlights the importance of validating everything generated by artificial intelligence systems. It will now become a permanent talking point in the legal profession's continuing legal education on the impact of AI.
Levidow, Levidow & Oberman is the law firm that encountered these problems when using ChatGPT. The firm's lawyers were among the attendees at a webinar on artificial intelligence in the legal profession that stressed the importance of validating AI output.
Steven A. Schwartz is the lawyer who relied on ChatGPT. Schwartz threw himself on the mercy of the court, admitting that he had used the AI program without being aware that its content could be false. His colleague Peter LoDuca, whose name appeared on the brief, said he had not been involved in the research in question and had no reason to doubt the sincerity of Schwartz's work or the authenticity of the opinions cited in the problematic filing.