Two personal injury attorneys have been sanctioned by a New York federal judge for using artificial intelligence to write a brief that contained nonexistent case law. The lawyers were accused of abandoning their responsibility to check their work, and their conduct was deemed to be in bad faith because they waited several weeks before coming clean about the embarrassing episode, according to Law360.
The incident came to light when the court issued an order to show cause demanding that the lawyers explain the errors in their brief. It emerged that they had used an AI tool to generate the document but had failed to review it properly before submitting it to the court.
The judge ruled that the lawyers’ conduct was unacceptable and had caused unnecessary delays in the case. They were ordered to pay a fine and barred from using AI to draft legal documents in the future. The judge said the attorneys’ conduct was, at the very least, a reckless disregard for their duties and an egregious dereliction of professional responsibility.
The use of artificial intelligence has become more prevalent in the legal field, but experts warn that it must be used responsibly, with human oversight and review. Some lawyers have expressed concern that machine-generated documents may not be held to the same ethical and professional standards as those written by humans.
The case highlights the need for lawyers to take responsibility for the work they produce and to ensure that it is accurate and ethical. It also raises broader questions about the role of artificial intelligence in legal practice and the risks that come with it.
These sanctions serve as a warning to the legal community that the use of AI in legal proceedings is not a substitute for human oversight. Attorneys are responsible for the documents they submit, and their conduct must be in good faith at all times.