Attorneys face consequences after using AI to generate false citations in court documents. Two attorneys from the well-respected firm Levidow, Levidow & Oberman, P.C. have been ordered to explain their use of the AI model ChatGPT during legal research after a judge discovered fake case citations in a court filing. The attorneys had relied on the tool's human-like text generation to assist their research, but it produced fictitious court cases that were then cited as authority. The fabrication came to light when opposing counsel could not locate the cited cases in any legal database. The judge has ordered both attorneys to show cause and explain their unconventional research methods at a hearing in the Southern District of New York. The incident highlights the risks of over-relying on AI tools in professional settings: such tools can yield incorrect information, with severe consequences that may include stiff penalties and breaches of professional conduct rules.
Levidow, Levidow & Oberman, P.C. is a reputable law firm based in New York City that has served clients for over 30 years. It has earned respect in the legal community for its distinctive, aggressive approach to litigation and its tailored legal services. The firm is known for its expertise in handling complex commercial litigation, labor and employment disputes, and personal injury lawsuits, among other matters.
Peter LoDuca and Steven A. Schwartz are the two attorneys under scrutiny for the AI-generated citations. Both are members of Levidow, Levidow & Oberman, P.C. and have been ordered to appear in court to explain their research methods. If found to have violated professional conduct rules by using AI to create fake citations, they could face stiff penalties and lasting damage to their reputations.
This incident raises concerns about the level of reliance on AI tools in professional settings. However sophisticated their capabilities, language models like ChatGPT can produce plausible-sounding but fabricated information, and acting on that output can carry severe professional and legal consequences. The episode is a wake-up call for the legal community: proper safeguards, such as independently verifying any AI-generated citation against authoritative sources, must accompany the use of these tools in practice. AI is a tool, not a substitute for human expertise.