A lawyer in Colorado Springs, Zachariah Crabill, could have avoided a legal nightmare had he heeded the high-profile mishap of Steven Schwartz and Peter LoDuca, who used ChatGPT to conduct legal research and cited phony cases in federal court. Instead, Crabill filed a motion to set aside a summary judgment that cited fake cases generated by ChatGPT. The young attorney had previously used the platform to answer simple questions accurately, so he trusted the results of his later searches even as the system returned dozens of nonexistent cases. Not until the day of the hearing did Crabill realize his mistake and inform the court.
By then, the judge had already identified the fake cases, and the attorney now faces the possibility of a disciplinary complaint. Lawyers must verify their work rather than rely solely on ChatGPT, Westlaw, Lexis, Fastcase, or any other research tool. There is nothing wrong with asking such a system to answer questions, but attorneys must confirm the results of any research before presenting it in court, or risk committing malpractice.
The use of AI in legal research is becoming more common, and it can save attorneys time and assist their work. But it can also lead to disastrous consequences when used blindly. Lawyers who want to stay out of legal trouble must double-check the answers these tools provide, uphold their professional ethics, and ensure that everything they file is accurate.