Lawyer in Trouble After Using AI Tool ChatGPT for Research, Fake Cases Uncovered

Since its release in November 2022, ChatGPT has been put to a wide array of uses, from writing essays to writing code. Many users saw the AI chatbot as a positive step, and expectations of its reliability ran high. But nearly six months after its release, its drawbacks have also been exposed. Tech experts have found that the chatbot can ‘hallucinate’, delivering false information with such confidence that the recipient may never question its accuracy. This can spread misinformation and even land people in legal trouble.

Recently, a New York-based lawyer faced a court hearing after ChatGPT was used for legal research in his case. The court discovered that some of the legal cases cited in his filing were invented. The judge described the situation as an unprecedented circumstance. The lawyer, Peter LoDuca, was not involved in the research and was unaware of how it had been conducted.

The mistake originated with a colleague of LoDuca’s, Steven A. Schwartz, a veteran lawyer who used the AI chatbot to search for similar past cases. According to Schwartz, he was unaware that the AI could generate false information. In a statement, he apologized and promised never again to rely on an AI chatbot for legal research without thoroughly verifying its output.

The underlying case, filed about a month earlier, involved a man suing an airline. When his legal team submitted a brief citing past court decisions, the airline’s lawyers challenged the citations because they could not locate any of the referenced cases.


The broader problem has also been highlighted by Prabhakar Raghavan, senior vice-president at Google and head of Google Search. He has voiced concern that AI chatbots can produce fabricated information that sounds highly convincing.

It is therefore important to review and double-check any work produced with an AI chatbot so as not to be caught off guard by false information. Using AI chatbots for legal research, in particular, demands the utmost caution and rigorous verification.

