Fake cases and real consequences: the risks of relying on AI in litigation
A recent case in the First-tier Tribunal (Tax Chamber) (FTT) has brought the use of generative AI in legal proceedings into question. The case, Harber v HMRC, has generated significant commentary and highlighted the potential pitfalls of relying on AI in litigation.
Mrs Harber, who represented herself, submitted references to nine supposed FTT decisions that she claimed supported her argument that she had a reasonable excuse. However, neither HMRC nor the Tribunal could find any trace of these cases in their records. When asked whether the cases had been sourced from an AI system, Mrs Harber responded only that her submissions had been prepared by a friend in a solicitor's office.
The Tribunal was then tasked with determining whether the cases were genuine FTT judgments or AI-generated fabrications. It reviewed the FTT's published decisions and the British and Irish Legal Information Institute (BAILII) database to investigate the matter. It was assisted in its analysis by a US case, Mata v Avianca, in which fictitious ChatGPT-generated cases had been cited by the lawyers involved. The judge in that case identified stylistic and reasoning flaws of a kind not typically found in US Court of Appeals decisions.
This case raises important questions about the role of AI in the courtroom. The Tribunal in Harber endorsed the observations of the US judge, noting that reliance on fabricated authorities promotes cynicism about the legal profession and threatens the authoritative value of genuine case precedent. As AI becomes increasingly accessible, these issues are likely to become even more prevalent.
The Solicitors Regulation Authority (SRA) has also expressed concerns about the use of AI in the legal market. In its 2023 Risk Outlook Report, the SRA highlighted incidents in which AI-drafted legal arguments cited non-existent cases, warning that such errors could lead to miscarriages of justice. It emphasized that AI must be used responsibly, with all outputs carefully checked for accuracy before being relied upon.
While AI can be a valuable tool for supporting lawyers with time-consuming tasks, it is not a replacement for human judgment and discretion. The legal process depends on the constructive engagement of all parties involved, and AI should augment, not replace, lawyers. The courts themselves are exploring the use of AI, but they stress the need for human oversight, accountability, and responsibility.
In conclusion, the Harber case serves as a reminder that AI must be used responsibly and with caution. Reliance on AI-generated material can undermine the integrity of legal proceedings and waste valuable time and resources. As AI becomes more widely accessible, parties should verify that the cases cited in support of an argument are genuine. Human discretion and judgment remain essential to upholding the fairness and justice of legal proceedings. The future use of AI in litigation may be inevitable, but it must be approached with care to protect the integrity of the legal system.