A New York-based law firm is in hot water after it was discovered that ChatGPT, the OpenAI chatbot its lawyers had used for legal research, generated false information. The firm used ChatGPT to search for historical cases to support the argument that a client’s personal injury case should proceed on the strength of precedent. However, the tool generated citations for cases that did not exist, which led to accusations of submitting false information to the court.
The tool’s user, Steven A. Schwartz, claimed to have been unaware that ChatGPT could produce false content. In a written statement, he expressed regret for relying on the AI-powered tool, which he had never used before, and pledged never to use it again without verifying the authenticity of its output.
The plaintiff’s lawyer of record, Peter LoDuca, was not involved in the research and had no knowledge of how it was carried out. Both lawyers have been ordered to explain themselves and show why they should not be penalized.
OpenAI’s ChatGPT is an AI-powered chatbot that generates original text on demand in a variety of writing styles. Its training data extends only to 2021, and it has been used by millions of people since its launch in November 2022. However, concerns have been raised about the possible hazards of artificial intelligence, including the spread of misinformation and bias.
The case highlights the importance of verifying the authenticity of research, especially in legal settings. It also underscores the need for caution when using AI-powered tools, whose output can be inaccurate or biased. Legal professionals must be aware of the risks of relying on AI-generated research and take the precautions necessary to ensure their work is accurate and well founded.