Lawyers have long trusted legal research providers to help them build cases for their clients. Recently, however, a Manhattan lawyer, Steven A. Schwartz, made the mistake of relying on an AI chatbot, ChatGPT, to back up an argument in a personal injury lawsuit. The resulting chaos has landed Schwartz in hot water with the court, which is holding a hearing on the matter on June 8th.
ChatGPT is part of a new generation of technologies dubbed “generative AI,” which use natural language processing to hold conversations. Though it can be remarkably convincing, ChatGPT has serious accuracy problems. In Schwartz’s case, the chatbot spun a web of nonexistent cases and sources. Only after the opposing lawyers presented evidence that none of the cited cases actually existed did Schwartz realize his mistake.
Billionaire entrepreneur Elon Musk believes AI development poses too great a risk to humanity and has gone so far as to call for a six-month pause on its advancement. Google’s ChatGPT rival, Bard, has its own accuracy struggles, and both generative AI systems share the same critical flaw: they will often try to answer a question even when the answer isn’t accurate. This is especially alarming given the number of people who have had ChatGPT write essays for them or consulted it as a factual source.
It’s clear that AI chatbots like ChatGPT cannot be trusted as reliable sources of information. They have no real concept of truth and will invent whatever answer seems plausible. Traditional research sources like Google and Wikipedia are generally more trustworthy because their results and articles are continually reviewed and corrected by people, whereas a chatbot cares only about sounding impressive. It is therefore up to us as human beings to take the time to fact-check any questionable information an AI chatbot provides.
OpenAI, the company behind ChatGPT, is an American research laboratory founded by Elon Musk, Sam Altman, Greg Brockman, and others. It focuses on developing artificial general intelligence and other AI technologies with potentially large-scale implications for humanity, aiming to use its research to benefit humanity and reduce existential risk.
P. Kevin Castel is the judge presiding over the case. He was appointed as a United States District Judge for the Southern District of New York by President George W. Bush in 2003. He has made significant decisions across a range of areas, including immigration, civil rights, and criminal justice reform, and has spoken out against human rights violations affecting vulnerable immigrants, LGBTQ individuals, and people with disabilities.