Two lawyers in Manhattan, Steven A. Schwartz and Peter LoDuca, have been fined $5,000 for submitting a legal brief filled with fictitious cases and citations. Remarkably, the document had been generated with the help of ChatGPT, an artificial intelligence chatbot. The judge who presided over the case, P. Kevin Castel, not only imposed the fine but also ordered the lawyers to send a copy of his opinion to each of the real-life judges whose names appeared in the fabricated filing.
While Judge Castel did not require an apology from Schwartz and LoDuca, he criticized their actions, stating that submitting fake opinions causes harm by wasting time and money for the opposing party and taking up the court’s valuable time. Moreover, it promotes cynicism about the legal profession and undermines the authenticity of judicial rulings.
The case revolved around a lawsuit filed by their client, Roberto Mata, who sought to hold the airline Avianca accountable for an injury he allegedly sustained during a flight. The fake cases and citations were used to argue that the lawsuit should be allowed to proceed despite the statute of limitations having expired.
The judge scrutinized the fabricated decisions and revealed glaring inconsistencies and flaws, highlighting the lawyers’ violation of their responsibilities and their subsequent attempt to conceal their actions. He emphasized that had they come clean earlier, the consequences would have been different.
The use of artificial intelligence programs like ChatGPT in the legal profession has ignited a debate within the tech community about the potential dangers of relying too heavily on AI. This case serves as a reminder of the need for caution and ethical responsibility when utilizing such technology.
In response to the judge’s order, the lawyers’ firm, Levidow, Levidow & Oberman, stated that they would comply with the ruling but disagreed with the finding that anyone at the firm acted in bad faith. They acknowledged their mistake, expressing embarrassment and remorse.
Although Judge Castel did not refer the lawyers for disciplinary action, bar disciplinary authorities may independently investigate the matter, which could lead to private reprimands or public sanctions, including suspension or disbarment.
From an ethical standpoint, legal experts believe that the extensive publicity surrounding this case has served as a form of sanction against Schwartz and LoDuca. Moving forward, it is essential for legal professionals to exercise caution and not be swayed by the allure of generative AI.
In a separate ruling, Judge Castel dismissed Mata’s lawsuit against Avianca based on the airline’s argument that the statute of limitations had expired. Avianca’s lawyer, Bart Banino, expressed satisfaction with the court’s decision and believed it was the right outcome.
Ultimately, this case underscores the importance of upholding the integrity of the legal profession and maintaining ethical standards in the face of technological advancements. It serves as a warning to legal practitioners to exercise prudence and critical judgment when using AI in their practice.