The use of artificial intelligence in the legal profession is a hot topic these days. At a recent technology law conference, several speakers praised ChatGPT, an AI tool that can help lawyers research and prepare briefs. One speaker, Browning, however, raised ethical concerns about using ChatGPT, including its tendency to fabricate information. Browning's concerns proved well-founded. A New York lawyer, Steven Schwartz, recently submitted a brief that cited six fictitious cases, including Varghese v. China Southern Airlines and Martinez v. Delta Airlines. The judge assigned to the case, troubled by this apparent ethical violation, has scheduled a sanctions hearing for June 8.
While ChatGPT has the potential to save lawyers time and increase efficiency, its limitations and drawbacks matter. Chief among them are hallucinations: the tool can produce confident-sounding but fabricated information, which can end up in a filed brief. That is exactly what happened with the brief Schwartz submitted, and it can amount to a serious ethical violation. Lawyers using ChatGPT should double-check any information the tool provides rather than relying solely on its output.
Schwartz had been representing plaintiff Roberto Mata in a personal injury lawsuit against Avianca Airlines when he filed the brief containing the six fake cases. That he cited non-existent cases as supporting authority underscores the need for lawyers to research diligently and to verify the accuracy of any technology, ChatGPT included, before relying on it. The case has drawn broad attention to the importance of maintaining ethical standards in the legal profession.