Artificially Unintelligent: Attorneys Penalized for Improper Use of ChatGPT in Personal Injury Case
Using AI platforms for legal work may seem like a smart move, but recent events highlight the need for caution. In a notable case, Judge P. Kevin Castel of the Southern District of New York sanctioned two attorneys for submitting a brief written by ChatGPT, an artificial intelligence tool. The submission was riddled with fabricated judicial opinions, quotes, and citations.
The case that led to Judge Castel’s order involved a personal injury claim. Roberto Mata, represented by the sanctioned lawyers, sought to hold Avianca, an airline, liable for injuries he sustained from a metal serving cart during a 2019 flight. Avianca moved to dismiss, arguing that the statute of limitations had expired. To rebut this argument, Mr. Mata’s lawyers submitted a 10-page brief urging that the case proceed. Avianca’s legal team, however, discovered that the cases cited in the brief did not exist and raised the issue with the Court. Rather than correcting the record, the attorneys stood by the fabricated authorities and admitted the truth only later, much to their detriment.
During the sanctions hearing, one of the lawyers claimed he had operated under the false belief that ChatGPT could not fabricate cases on its own, even though he had been unable to locate some of the cases it generated. It turns out that the platform did reference real cases and the names of actual judges, but interspersed them with fabricated content. For instance, Judge Castel scrutinized the Varghese v. China Southern Airlines Co., Ltd., 925 F.3d 1339 (11th Cir. 2019) decision presented by the lawyers. Although the decision showed flaws inconsistent with genuine appellate opinions and featured nonsensical legal analysis, it also included references to legitimate cases. When questioned about the case’s authenticity, the AI platform even asserted that it could be found in reputable legal databases such as Westlaw and LexisNexis.
In response to the attorneys’ misconduct, Judge Castel invoked Rule 11 and issued a sanctions order, fining the lawyers $5,000 jointly. The penalty was intended to deter similar conduct, but the Judge declined to mandate an apology, reasoning that a compelled apology would lack sincerity.
The incident serves as a warning to the legal profession about overreliance on technology, particularly AI. It may be the first scandal of its kind, but it won’t be the last. Judge Brantley Starr, a federal district judge in Texas, recently issued a standing order cautioning lawyers against using any form of artificial intelligence, including ChatGPT, Harvey.AI, or Google Bard, to draft legal briefs. Judge Starr noted that while attorneys swear an oath to uphold the law and represent their clients impartially, AI systems are bound by no such ethical obligations. He therefore requires attorneys appearing before him to submit a certificate attesting either that their briefs were drafted without generative artificial intelligence or that any AI-generated language was verified for accuracy by a human.
While there is no denying that advanced AI may someday surpass human capabilities, the current reality is that these systems lack a sense of duty, honor, or justice, as Judge Starr aptly points out. Lawyers must exercise caution before relying fully on such technology; human judgment and discernment may always be irreplaceable.