US lawyer apologizes for submitting a chatbot-generated false brief in a civil case involving Avianca. Steven Schwartz cited non-existent cases to support his litigation claim, blaming his first-time use of ChatGPT.
New York lawyer Steven Schwartz faces sanctions after using an AI model to create a fabricated legal brief, citing non-existent court decisions. Schwartz claimed he was duped by the tool, but his reputation has been significantly damaged. As AI and natural language processing advance, it is crucial to understand their limitations. The judge has yet to make a ruling on the sanctions.
A New York lawyer used an AI tool to file a court brief with phony legal precedents. The tool admitted to providing inaccurate information. The lawyer has requested leniency, stating he had no intention to fool anyone. The judge's decision on possible sanctions remains unknown.
Two lawyers are appearing before a US District Judge to defend their use of an AI chatbot that generated fake cases for a lawsuit. They claim it was a good-faith workaround because Westlaw and LexisNexis were unavailable. The product in question was OpenAI's ChatGPT, a fact that drew some amusement in court. It remains uncertain whether one of the lawyers will be sanctioned over the chatbot's extended excerpts and fabricated case quotations.