Generative AI tools such as ChatGPT have raised concerns about potential misuse and the legal and workplace risks that come with them. A recent case in the Tax Tribunal shed light on these dangers, highlighting how difficult it can be for judges and employers to identify when AI has been misused or has produced incorrect information.
The case, Harber v The Commissioners for His Majesty’s Revenue and Customs, involved an appellant who presented summaries of non-existent cases to support her appeal. The summaries had been provided by a friend with a legal background, but the Tribunal found that they were likely created by generative AI. The appellant was unaware that the cases were not real, underlining how hard it can be for individuals to discern AI-generated content.
Similar concerns were raised in the US case of Mata v Avianca, where an attorney relied on summaries of AI-fabricated cases. The court in Mata identified stylistic and reasoning flaws in the fake judgments that cast doubt on their authenticity, and the Tribunal judge in Harber likewise noted stylistic points that helped the Tribunal conclude the cases were not genuine. Both cases show that AI-generated content can be plausible yet entirely wrong, sounding alarms for judges and employers alike.
The legal dangers extend to Employment Tribunal claims, where the high volume of cases and the routine citation of first-instance decisions make fabricated authorities harder to spot. Many litigants lack the means to verify the authenticity of cases themselves, increasing the risk that AI-generated materials are submitted. Harber therefore serves as a reminder for judges and lawyers to verify the genuineness and authority of all materials referenced in tribunals.
Workplaces are not exempt from these risks. In both Harber and Mata, the individuals relying on the fake cases were unaware of their AI origins. Employers should therefore prioritize transparency about the use of generative AI so that employees can properly scrutinize the materials they receive. Simply prohibiting generative AI, however, is likely to drive unsanctioned use without any appropriate safeguards.
For employers, it is crucial to have policies in place that address the material risks arising from what employees put into generative AI tools: in particular, the inappropriate input of confidential information or personal data, and unintended copyright breaches. Managers, supervisors, and other employees who may receive AI-generated material should also be trained on how to handle it and verify its accuracy.
Overreliance on AI can be equally dangerous, particularly the assumption that it always provides correct answers and unbiased information. Employers must build in the right level of human intervention and oversight when using AI technologies. Precautions should include thorough accuracy checks, human review of first drafts produced by the technology, and verification that any references or sources cited are authentic.
In conclusion, Harber underscores the legal dangers and workplace risks associated with generative AI. Judges, lawyers, and employers must navigate the challenges posed by its potential misuse, with transparency, verification, and human oversight at the forefront. Balancing the benefits and risks of the technology is crucial to mitigating the harm it may cause.