Lawyers who used AI for case research are in trouble after citing fictitious cases fabricated entirely by the AI. Steven A. Schwartz and Peter LoDuca, both lawyers at Levidow, Levidow & Oberman, used OpenAI's ChatGPT to research cases involving aviation mishaps against the Colombian airline Avianca. They later apologised to a Manhattan federal court judge after citing non-existent cases.
The two lawyers now face possible sanctions, which could end their careers, for including references to past court cases that did not exist. The judge noticed that some of the cited details did not match the court papers, and the airline's lawyers also wrote to the judge saying they could not find several of the cases referenced in the brief.
According to BBC News, Schwartz apologised for relying on the AI chatbot and said he will not use it for legal research again. The legal team has been ordered to explain why they should not be disciplined at a hearing set for June 8.
OpenAI's ChatGPT is a powerful tool that can generate content for users in seconds. However, experts and professionals have warned against relying on it because it can present incorrect information as fact. While many people are impressed by what AI can do, it is essential to fact-check and verify any information obtained through AI chatbots.