A New York lawyer recently admitted to using an AI-powered tool to draft a legal filing that turned out to be full of made-up case law. He said he had not understood what the tool actually does and wished he had researched it before relying on it. Steven Schwartz, who has practiced law for 30 years, had heard about ChatGPT from his college-aged children and from various articles, but had never used it professionally.
During a court hearing, Schwartz admitted that he had not grasped ChatGPT's true nature and had believed it was a kind of super search engine with greater reach than standard legal databases. In reality, programs like ChatGPT generate responses by predicting, word fragment by word fragment, which text is statistically likely to follow the sequence so far, based on patterns learned from countless examples ingested from the internet. Most people who use these AI-powered tools do not understand this, and the misunderstanding can lead them badly astray.
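The idea can be seen in miniature with a toy sketch (everything below is illustrative, not how ChatGPT actually works; real systems use large neural networks over subword tokens rather than simple word counts): a tiny bigram model that tallies which word tends to follow which, then samples a continuation. The output can read fluently while being entirely ungrounded in fact, which is exactly the failure mode that produced the fake citations.

```python
import random
from collections import defaultdict, Counter

# A toy corpus standing in for the internet-scale text a real model trains on.
corpus = (
    "the court held that the motion was denied "
    "the court held that the appeal was granted "
    "the motion was filed and the motion was denied"
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Sample a continuation one word at a time, weighted by observed frequency."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # no observed continuation for this word
        words, counts = zip(*candidates.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))
# Prints something fluent-looking, e.g. "the court held that the motion was granted".
# The model only knows what tends to follow what, not whether any statement is true.
```

The same principle scales up: a larger model produces far more convincing text, but it is still choosing likely continuations, not checking facts against a database.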
In the Avianca case, the lawyer's reliance on the AI tool led him to file a motion citing nonexistent case law, which raised broader concerns about the use of AI in legal work. Critics warn lawyers about the limitations of such models and the danger of relying entirely on systems they do not understand. Irina Raicu, who directs the internet ethics program at Santa Clara University, has said the case illustrates how most people who use AI tools such as ChatGPT do not fully understand their capabilities and limitations.
In conclusion, this case raises concerns about the use of AI tools in the legal field. It is essential to research and understand a tool's limitations before using it professionally. Not all AI tools work the same way; some are more sophisticated than others, depending on the purpose for which they were designed. As the use of AI-powered tools continues to grow, it is paramount that people stay informed about how these systems work so they can avoid similar errors.