Lawyer Claims Being Tricked by ChatGPT into Citing Nonexistent Cases

A New York lawyer recently found himself in trouble after using an AI language model to draft an affidavit that contained fabricated legal cases. The attorney, Steven Schwartz of Levidow, Levidow & Oberman, admitted during a sanctions hearing before a New York judge that he had been duped by the AI tool, ChatGPT. The affidavit was filed in a lawsuit brought by an individual who claimed to have been injured by a serving cart on a flight, and it was later discovered to contain six bogus court cases complete with fabricated quotes and citations.

During the hearing, Judge P. Kevin Castel questioned how Schwartz had missed the fake cases, describing ChatGPT’s contributions as legal gibberish. Schwartz attempted to defend himself by claiming that he thought ChatGPT’s output consisted of mere excerpts and that he believed the cases simply could not be found on Google. The judge, however, pointed out inconsistencies in that explanation.

Peter LoDuca, the other lawyer involved in the case, distanced himself from the research used in the affidavit, acknowledging in court that he should have been more skeptical. LoDuca expressed regret over the incident and vowed that such a mistake would never happen again.

The sanctions hearing concluded without Judge Castel issuing a decision regarding possible penalties for the lawyers. The law firm’s attorneys were not immediately available for comment following the hearing.

This case highlights the potential pitfalls of relying solely on AI tools for complex legal work. While AI language models can offer valuable assistance in legal research, it is crucial for legal professionals to exercise due diligence and verify the accuracy and authenticity of the information they obtain. Maintaining human oversight and critical thinking remains essential to ensure the integrity and reliability of legal proceedings.


This incident serves as a cautionary tale for all legal professionals, emphasizing the importance of double-checking research for accuracy and authenticity. AI technology can assist lawyers with research, but it must be used cautiously and with a human element kept in the legal process. Such oversight protects the reputation and reliability of legal proceedings.

