Lawyers in a passenger lawsuit against Avianca relied on an AI chatbot for legal research, but the bot cited bogus legal cases. After noticing the citations were false, Judge P. Kevin Castel summoned the lawyers to a hearing. Chatbots are not always reliable sources of factual information.
This article examines Roberto Mata v. Avianca, a lawsuit over an injury caused by a metal serving cart. US lawyer Steven A. Schwartz wrote and submitted a legal brief containing quotes and citations fabricated by a generative AI. The case serves as a warning about the limits of AI technology and the danger of over-relying on it. Avianca is Colombia's largest airline and has numerous safety protocols in place; Schwartz is a lawyer with 30 years of litigation experience.
Avianca Inc., one of Colombia's largest airlines, is embroiled in litigation with Roberto Mata. Mata's lawyers submitted a 10-page brief full of citations, yet none could be verified. It later emerged that lawyer Steven A. Schwartz had used OpenAI's ChatGPT for the legal research. Judge Castel will now rule on potential repercussions. The case raises the question: are robots ready to replace human knowledge workers?
Lawyer Steven Schwartz of Levidow, Levidow & Oberman is facing consequences for using OpenAI's ChatGPT to supplement research for a legal brief in a case against the airline Avianca. When the court could not locate the cases cited in the submission, Judge P. Kevin Castel held a hearing. Schwartz admitted to using the AI program and assured the court he would never again rely on it without verifying its output.