Two lawyers in Manhattan federal court may face punishment after using ChatGPT, an AI-powered chatbot, to support a case against Avianca Airlines. The chatbot produced fictitious legal research, raising concern among experts about the potential risks of AI. Microsoft has invested $1bn in OpenAI, the company behind ChatGPT. The judge has yet to decide on sanctions.
Personal injury lawyers were reprimanded by a New York judge for using an AI-powered search engine to populate a legal brief with completely fake cases. The case serves as a warning of AI's ethical dangers. Any action against Steven Schwartz will be determined by the judge.
Two lawyers appeared before a US District Judge to defend their use of an AI chatbot that generated false cases for a lawsuit. They claim it was a good-faith workaround because Westlaw and LexisNexis were unavailable to them. The product in question was OpenAI's ChatGPT, which drew some amusement. It remains uncertain whether one lawyer will be punished over the chatbot's extended excerpts and fake case quotations.
Georgia radio host sues OpenAI for libel after its AI chatbot produced false information about him. OpenAI admits false information is a significant issue. Lawyers warn more lawsuits may follow. A cautionary tale on the power and reliability of AI.
Lawyers must exercise caution when relying on AI for legal research. In a recent case in the Southern District of New York, lawyers submitted citations and decisions fabricated by a generative AI program, leading to sanctions from the court. While AI has transformed legal research and drafting, human supervision and independent verification remain critical to catch fabricated information. Non-lawyers using AI for legal questions should likewise double-check the output for accuracy. The incident highlights the need to understand AI's limitations and to treat it only as a tool in legal contexts.