Title: Lawyers Criticized for Using AI-Generated References in Johannesburg Court
Lawyers arguing a case in the Johannesburg regional court have come under fire for relying on fake references generated by ChatGPT, an AI language model. The judgment revealed that the information presented by the lawyers, including names, citations, facts, and decisions, was entirely fictitious. As a consequence, the lawyers’ client was hit with punitive costs.
Magistrate Arvin Chaitram emphasized the need for a balanced approach to legal research, pointing out that while modern technology can be efficient, it should be accompanied by traditional independent reading.
This incident arose during a defamation case where a woman was suing her body corporate. The body corporate trustees’ counsel argued against the possibility of suing a body corporate for defamation.
In response, Michelle Parker, the plaintiff’s counsel, said that past judgments had already addressed the issue. However, since she had not had time to retrieve them, the court granted a postponement to give both parties an opportunity to gather the information needed to support their arguments.
During the two-month postponement, the lawyers attempted to locate the judgments ChatGPT had mentioned. To their dismay, they found that although some of the citations pointed to real cases, those cases were unrelated to the matter at hand and had no bearing on defamation suits between body corporates and individuals. It eventually emerged that the judgments had been sourced entirely through ChatGPT.
Magistrate Chaitram ruled that the lawyers did not intentionally mislead the court but displayed an excessive eagerness and carelessness. Consequently, no further action was taken against the lawyers, except for the imposition of punitive costs. Chaitram deemed the embarrassment associated with the incident to be a sufficient punishment for the plaintiff’s attorneys.
Reliance on fictitious content generated by ChatGPT is not unique to South Africa. In the United States, lawyers were recently fined for submitting a court brief filled with false case citations produced by ChatGPT. The lawyers and their firm faced sanctions for presenting non-existent judicial opinions complete with fabricated quotes and citations.
These incidents serve as a warning about the dangers of relying on AI-generated content without verifying its accuracy. Both the Johannesburg case and the US incident underscore the need to critically evaluate such material, especially in the legal field.
While AI tools can provide valuable assistance, legal professionals must exercise caution and confirm the authenticity and relevance of the information provided. Maintaining a balance between technological efficiency and independent reading is crucial for accurate and reliable legal research.