A US lawyer, Steven A Schwartz, is facing disciplinary action after using the AI-powered tool ChatGPT for legal research and citing fake cases in a personal injury lawsuit. When the opposing side questioned the filing, the judge wrote that six of the cases Schwartz presented appeared to be bogus judicial decisions with bogus quotes and citations. Schwartz admitted to using ChatGPT to look up relevant cases, not understanding that the tool can produce inaccurate information. Although AI tools have many uses, they cannot replace human judgment and interpretation, especially in legal research. Schwartz will appear at a disciplinary hearing alongside the case's lead lawyer, Peter LoDuca, who was unaware of how the research was conducted.
ChatGPT is an AI-powered tool that generates text based on a database of information derived from a snapshot of the internet in 2021. The tool is free and can summarize online information quickly and in fluent prose, but it is prone to behavior dubbed "hallucination", in which it gives confident answers that cite sources and details that do not exist. The tool is popular for generating content ideas, document outlines, headlines, and advertising copy, and it can produce multiple versions of a text in seconds. It can also be used for spelling and grammar checks.
Schwartz, who admitted to using the free public version of ChatGPT, said his inadequate understanding of the tool led him to rely on it for legal research. As a result, his side presented a legal argument that referred to several non-existent cases, prompting the court to consider sanctions. LoDuca, though named as lead lawyer on the case, was not involved in the research and was not aware of how it was conducted.