The use of artificial intelligence (AI) tools in the legal industry has become increasingly common in recent years. However, a recent incident involving the AI chatbot ChatGPT highlights the need for caution, as well as for proper guidance and regulation. New York lawyers Steven Schwartz and Peter LoDuca cited six nonexistent court cases in a brief they submitted, only to later discover that the citations had been fabricated by ChatGPT. While AI tools can be helpful for tasks like document discovery and contract review, legal professionals must not rely on them without proper safeguards. Professional conduct rules require lawyers to exercise competence and to take responsibility for work done by non-lawyers, a standard that extends to AI tools. The incident is a reminder of the importance of verifying one's work and of developing clear policies around AI use in the legal industry.
ChatGPT is an AI chatbot developed by OpenAI, an AI research and deployment company. It is one of many generative AI tools that can carry on human-like conversations and quickly produce fluent, authoritative-sounding text. Because these tools generate responses based on patterns in their training data rather than retrieving verified facts, they can confidently present information that is entirely fabricated.
Steven Schwartz and Peter LoDuca are New York lawyers who have practiced at the law firm Levidow, Levidow & Oberman for more than two decades. A sanctions hearing over the nonexistent citations in the brief submitted by Schwartz and LoDuca is set for June 8.