The credibility of ChatGPT, a popular AI language model, has come under scrutiny following a recent incident in a US court. A lawyer used the technology to find cases similar to his own to support his argument, but several of the cases ChatGPT supplied were fabricated and did not exist. When the lawyer asked ChatGPT to confirm that the cases were genuine, the model falsely asserted that they were real. The incident raises concerns about the reliability of ChatGPT and similar AI chatbots as sources of information and references, and it highlights the need for legal professionals to exercise greater caution when using AI to research precedent. The article recommends treating ChatGPT's summaries with caution and manually verifying any citations it produces.
Chat Generative Pre-trained Transformer (ChatGPT) is a conversation engine built on GPT-3.5, a large language model reported to have roughly 175 billion parameters and trained on large text corpora, including books, journal articles, and websites. The model is fine-tuned with reinforcement learning from human feedback, which makes it adept at conversational tasks and able to answer a wide range of user questions and enquiries. However, because its training data includes unverified web content and it generates replies by predicting plausible text rather than retrieving verified facts, it can also produce false information. The article identifies this as a significant limitation of AI chatbots such as ChatGPT.
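For context, interacting with such a model programmatically amounts to sending a prompt and receiving generated text, with no built-in guarantee that factual claims in the reply are true. Below is a minimal sketch, assuming the `openai` Python package (v1-style client), an `OPENAI_API_KEY` environment variable, and the illustrative model name `gpt-3.5-turbo`:

```python
# Minimal sketch: querying a GPT-3.5-class model for case law.
# Assumes the `openai` Python package (v1 client) and an
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "List court cases where an airline was sued over "
                       "an injury caused by a serving cart.",
        }
    ],
)

# The reply is fluent, confident prose, but nothing in the API
# guarantees that any citation it contains refers to a real case.
print(response.choices[0].message.content)
```

The point of the sketch is that the API returns text, not verified records: a fabricated citation and a genuine one arrive in exactly the same form.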
Steven A. Schwartz is the lawyer at the centre of the ChatGPT controversy. Schwartz used the AI language model to find previous cases similar to his own to support his client's argument in court. However, he was unaware of ChatGPT's propensity to fabricate information and did not verify the authenticity of the citations it supplied. He expressed regret at relying solely on the AI model and pledged to vet AI-generated information more rigorously in the future.
Overall, the article demonstrates how using AI models to generate legal arguments and research case law can lead to inaccurate references and citations. To guard against this, users should exercise caution when employing AI chatbots and manually verify any citations or case references they produce, for example by searching a trusted case-law database, as sketched below.
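As one illustration of such manual verification, the hedged sketch below checks whether a cited case name appears in a public case-law database. The CourtListener endpoint, query parameters, and response fields used here are assumptions and should be confirmed against the service's documentation; any other authoritative database would serve the same purpose:

```python
# Hedged sketch: checking whether a cited case exists in a public
# case-law database before relying on it. The CourtListener endpoint
# and response fields below are assumptions; verify them against the
# service's documentation before use.
import requests

def case_exists(case_name: str) -> bool:
    """Return True if a search for `case_name` yields at least one hit."""
    resp = requests.get(
        "https://www.courtlistener.com/api/rest/v4/search/",
        params={"q": case_name, "type": "o"},  # "o" = opinions (assumed)
        timeout=10,
    )
    resp.raise_for_status()
    # Assumes a paginated JSON response with a "count" field.
    return resp.json().get("count", 0) > 0

# Example with a hypothetical citation returned by a chatbot:
print(case_exists("Example Plaintiff v. Nonexistent Airlines"))
```

A check like this takes seconds per citation, which is a small cost compared with the consequences of filing fabricated authorities in court.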