Lawyers using artificial intelligence (AI) for legal research should take note of a recent case in the Southern District of New York. In Mata v. Avianca, lawyers representing the plaintiff used the generative AI program ChatGPT to perform legal research, and the program fabricated citations and decisions that the lawyers unknowingly submitted to the court. In response, the court ordered plaintiff’s counsel to show cause why they should not be sanctioned for citing fake cases and scheduled a hearing for June 8, 2023. The incident highlights the need for lawyers and non-lawyers alike to double-check and independently verify AI output, as AI software can produce inaccurate responses that appear legitimate.
Although AI software has the potential to assist with sifting through voluminous data and drafting portions of legal documents, human supervision and review remain critical, particularly in legal contexts. Non-lawyers using AI to set up business structures or access legal information should exercise the same caution, because AI does not always provide accurate information, and its output should be verified against independent sources. ChatGPT frequently warns users asking legal questions that they should consult a lawyer, and many users nonetheless find the tool helpful; even so, they must understand the software’s limitations and not rely on its output alone.
In conclusion, the incident in the Southern District of New York underscores the risks of using AI in a legal context. Although AI promises to transform legal research and drafting, it cannot yet be trusted to answer legal questions reliably. Lawyers and non-lawyers alike should use AI cautiously and only as a tool, with a clear understanding of its limitations. Whatever its potential, independent review and verification remain essential in legal contexts to avoid relying on fabricated information.