Lawyer uses phony ChatGPT cases leading to courtroom chaos

The credibility of ChatGPT, a popular AI language model, has come under scrutiny following a recent incident in a US court. A lawyer used the technology to look up similar cases to support his argument, but several of the cases ChatGPT supplied were entirely fabricated. When the lawyer asked ChatGPT to confirm that these were genuine cases, the AI model falsely assured him they were real. The incident raises concerns about the reliability of ChatGPT and similar AI chatbots as sources of information and references, and highlights the need for legal professionals to exercise greater caution when using AI to research case law. The article recommends treating ChatGPT's summaries with caution and manually verifying any citations it produces.

ChatGPT (Chat Generative Pre-trained Transformer) is a conversation engine built on GPT-3.5. The underlying GPT-3 family has roughly 175 billion parameters and was trained on large web-based corpora including books, journals, and websites. The technology is fine-tuned for conversational tasks using reinforcement learning from human feedback, allowing it to answer a wide range of user questions and enquiries. However, because it generates text by predicting plausible continuations rather than retrieving verified facts, it can also produce false but convincing-sounding information. The article identifies this tendency, often called hallucination, as a significant limitation of AI chatbots such as ChatGPT.

Steven A. Schwartz is the lawyer at the centre of the ChatGPT controversy. Schwartz used the AI language model to look up previous cases similar to his own to support his client's argument in court. However, Schwartz was unaware of ChatGPT's propensity to fabricate information and did not verify the authenticity of the cases it cited. The lawyer expressed regret at relying solely on the AI model and pledged to verify its output more rigorously in the future.

Overall, the article demonstrates how using AI models to generate legal arguments and research case law can lead to inaccurate references and citations. To ensure the reliability of such sources, users should exercise caution when employing AI chatbots and manually verify their findings against authoritative records.
