A US lawyer has found himself in a difficult situation after a court hearing over his legal team’s use of the AI chatbot ChatGPT for legal research. According to a recent BBC report, the judge found that a legal brief submitted on behalf of a man who sued an airline over a personal injury cited example legal cases that do not exist.
The lawyer behind the brief, Steven Schwartz, clarified that the lead lawyer on the case, Peter LoDuca, had no knowledge of the research being done with ChatGPT. Schwartz admitted that he “greatly regrets” relying on the chatbot and was unaware that its content could be inaccurate. He has since sworn never again to use AI for legal research without fully verifying its authenticity.
ChatGPT is an AI-powered chatbot that is used for a range of tasks, including creating original text. It is designed to mimic a human writing style and respond to questions in language that sounds natural. Despite its widespread use, concerns have been raised at a government level about the potential dangers of artificial intelligence, including the possibility of bias and false information spreading.
The Manhattan-based law firm handling the case, Levidow, Levidow & Oberman, is set to face a court hearing scheduled for 8 June, at which the lawyer and his legal team are expected to explain their conduct in the matter.
Whether AI chatbots have a role to play in legal research remains to be seen. For now, the case is a reminder of the need for human verification of AI-generated content, even when it seems authoritative and reliable.