Title: South African Lawyers Fined for Using ChatGPT’s Fake Info During Court Case
Lawyers at the Johannesburg regional court in South Africa found themselves in hot water after citing fake references generated by ChatGPT in a recent case. The judgement not only rebuked the lawyers for the misleading citations but also imposed a punitive costs order on their client.
The judgement, which found that the names, citations, facts, and decisions presented were fictitious, stressed the need to balance modern technology with traditional independent reading in legal research. Magistrate Arvin Chaitram emphasized that lawyers must verify their sources rather than depend solely on technology.
The case that sparked this controversy involved a woman suing her body corporate for defamation. The trustees’ counsel argued that a body corporate cannot be accused of defamation, while the plaintiff’s counsel, Michelle Parker, claimed that there were previous judgements that had already addressed this issue, but she hadn’t had the opportunity to access them.
To give both parties time to gather information supporting their cases, Magistrate Chaitram postponed the proceedings to late May. Over the following two months, the lawyers attempted to track down the cases and citations ChatGPT had produced, only to discover that while the cases and citations were real, they were unrelated to the matter at hand.
Ultimately, it was revealed that the lawyers had sourced the references from ChatGPT. Magistrate Chaitram concluded that the misleading references were the product of overzealousness and carelessness rather than an intentional attempt to deceive the court, so no further action was taken against the lawyers beyond the punitive costs order.
Magistrate Chaitram stated that the embarrassment experienced by the plaintiff’s attorneys likely served as punishment enough. This incident echoes a similar case in the United States, where lawyers were fined for including incorrect case citations from ChatGPT in a court brief.
The lawyers involved in the US case, Steven Schwartz and Peter LoDuca, were ordered to pay a $5,000 fine for neglecting their responsibilities by submitting non-existent judicial opinions containing fabricated quotes and citations generated by ChatGPT. Even after judicial orders called the filings into question, they continued to stand by the fake opinions. The court also ordered them to send a transcript of the hearing and the judge's opinion to each judge ChatGPT had falsely identified as an author of the fabricated opinions.
South Africa is not the only country where lawyers have blindly trusted information from ChatGPT, underestimating its capacity for misinformation. In the legal field especially, supplementing research with AI technology demands a cautious and diligent approach.
In conclusion, these incidents serve as a reminder that lawyers must carefully scrutinize any information obtained through AI tools and cross-reference it with reliable sources. Balancing the power of technology and human judgment is crucial to maintaining the integrity of legal proceedings and ensuring that accurate information is presented in court.