Lawyers Use AI-Generated Fake References, Face Criticism in Johannesburg Court
Lawyers arguing a case in the Johannesburg regional court have come under fire for using fabricated references generated by an AI language model called ChatGPT. The court ruling revealed that the lawyers presented entirely fictitious names, citations, facts, and decisions, leading to the imposition of punitive costs on their client.
Magistrate Arvin Chaitram stressed the continued importance of independent reading in legal research. While acknowledging the efficiency of modern technology, he emphasized that it must be balanced with good old-fashioned independent reading.
The case in question involved a woman suing her body corporate for defamation. The body corporate trustees’ counsel argued that a body corporate could not be sued for defamation. In response, the plaintiff’s counsel, Michelle Parker, referred to previous judgments addressing this question but said she had not had time to access them. The court granted a postponement to allow both parties to gather the authorities needed to support their arguments.
During the two-month postponement, the lawyers attempted to locate the judgments ChatGPT had supplied, only to discover that while the citations were real, they referred to cases unrelated to the names attached to them. None of the cases had any bearing on defamation suits involving body corporates and individuals. It then emerged that the judgments had been sourced through ChatGPT.
Magistrate Chaitram ruled that the lawyers hadn’t intentionally misled the court, attributing their conduct to overzealousness and carelessness. Consequently, no further action was taken against the lawyers, apart from the punitive costs order. Chaitram deemed the embarrassment associated with the incident to be a sufficient punishment for the plaintiff’s attorneys.
This reliance on ChatGPT’s fabricated content is not limited to South Africa. In the United States, lawyers were recently fined for submitting a court brief containing non-existent judicial opinions, complete with fabricated quotes and citations, generated by ChatGPT.
These incidents serve as cautionary tales, underscoring the danger of relying on AI-generated content without verifying its accuracy. Both the Johannesburg case and the US incident highlight how critical it is to evaluate AI-generated material diligently, especially within the legal field.
While AI tools can provide valuable assistance, legal professionals must exercise caution and verify the authenticity and relevance of the information provided. Maintaining a balance between technological efficiency and independent reading remains crucial for accurate and reliable legal research.