ChatGPT can expedite legal research and brief preparation, but it carries a risk of fabricated information. Steven Schwartz, a New York lawyer, was caught citing fake legal cases generated by the tool, leading to a sanctions hearing. Lawyers should verify any information ChatGPT produces and maintain ethical standards.
A lawyer was reprimanded for submitting fake ChatGPT-generated cases in a personal injury case against an airline. ChatGPT is not designed to detect AI-generated content or verify its own citations, underscoring the importance of understanding the technology before relying on it.
Lawyer Steven Schwartz of Levidow, Levidow & Oberman is facing consequences for using OpenAI's ChatGPT to supplement research for a legal brief in a case against the airline Avianca. When the court could not locate the cases cited in the submission, Judge P. Kevin Castel ordered a hearing. Schwartz admitted to using the AI program and assured the court he would not use it again without verifying its output.
Attorney Steven Schwartz of the law firm Levidow, Levidow & Oberman faced consequences for using ChatGPT to prepare a brief containing fabricated citations in a case against the Colombian airline Avianca. A hearing was scheduled to consider sanctions, and the episode sparked fear among legal professionals about OpenAI's program. Read on to learn more about this legal case.
Lawyer Steven Schwartz of New York's Levidow, Levidow & Oberman recently admitted to using the AI software ChatGPT for legal research after it cited fabricated court cases. Schwartz represents Roberto Mata in his lawsuit against Avianca Airlines over injuries Mata suffered during a 2019 flight from El Salvador to JFK. Schwartz and his team are arguing that the lawsuit should not be dismissed.