A Colorado lawyer has been suspended after using artificial intelligence (AI) to generate fake case citations in a legal brief. Zachariah C. Crabill was sanctioned for filing the fabricated citations and for subsequently lying about how they got there. The case is believed to be the first in Colorado involving the improper use of AI in the legal profession.
The head of Colorado’s attorney regulation office, Jessica E. Yates, stated that she was unaware of any similar cases in the state. While the state Supreme Court has not yet provided guidance on the legal and professional implications of AI, Justice Melissa Hart has acknowledged the need for the court to educate itself about emerging technology.
Crabill’s suspension was imposed by the presiding disciplinary judge, Bryon M. Large. The stipulated facts agreed to by Crabill and attorney regulators describe how his law firm took on a client who had previously represented themselves. Crabill’s task was to draft a motion to set aside a judgment issued by an El Paso County judge in a civil case.
Hoping to bolster his legal citations, Crabill turned to ChatGPT, an AI chatbot. He prompted it for cases that appeared to support his client’s position, then added the citations it produced to his brief without verifying that the cases actually existed, believing the shortcut would save time and reduce stress.
On the day of the hearing, Crabill realized that the citations in his brief were false. But when El Paso County District Court Judge Eric Bentley questioned him about them, Crabill did not own up to using AI; instead, he falsely blamed the errors on a legal intern. He later filed a corrected motion, which was denied on grounds unrelated to the fictitious citations.
Crabill and the Office of Attorney Regulation Counsel stipulated that he had violated his professional duties to act competently, diligently, and honestly. In light of his lack of prior discipline, his acceptance of responsibility, and the personal struggles he faced at the time, the parties agreed to a two-year suspension, of which Crabill would serve only 90 days provided he successfully completed a probationary period.
The case highlights the risks of relying on AI-generated information without independent verification. Mistakes such as Crabill’s can have serious consequences, wasting the time and resources of the opposing party and undermining a client’s case.
As technology continues to advance, legal professionals and the courts must adapt and educate themselves to navigate these challenges. By weighing AI’s impact and developing rules to govern its use, they can help preserve the integrity and credibility of the legal system.
In a similar case earlier this year, a federal judge in New York sanctioned two lawyers who had submitted a brief containing fake case citations produced by an AI chatbot. Such misconduct undermines the legal system and risks eroding trust in judicial rulings.
The use of AI technology in the legal field is a complex issue that requires careful consideration and adaptation of existing rules and regulations. As the legal community and courts grapple with the ethical and practical dilemmas of AI, it is crucial to strike a balance between efficiency and accuracy while upholding the core principles of the legal profession.