Combatting ‘AI Hallucinations’: How OpenAI Addresses ChatGPT’s Fabrication Tendency

OpenAI, the research organization behind ChatGPT and the GPT family of language models, has proposed a training strategy called process supervision to combat the tendency of AI models to hallucinate, that is, to fabricate claims when they lack the information needed to arrive at a correct answer. These hallucinations are a particular problem in domains that require multi-step reasoning, where even a small logical error can cascade into numerous false conclusions.

The researchers propose using process supervision instead of outcome supervision: rather than rewarding a model only for reaching a correct final conclusion, the training process rewards each correct reasoning step taken on the way to the answer. In their report, OpenAI researchers wrote, "These hallucinations are particularly problematic in domains that require multi-step reasoning, since a single logical error is enough to derail a much larger solution."
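To make the distinction concrete, the toy Python sketch below contrasts the two reward schemes on a short chain of reasoning. The names used here (ReasoningStep, score_outcome, score_process) are hypothetical illustrations, not OpenAI's actual training code; in practice the step-level judgments would come from a learned reward model trained on human feedback rather than hand-labeled flags.

```python
# Illustrative sketch: outcome supervision vs. process supervision.
# All names are hypothetical and not part of any OpenAI API.

from dataclasses import dataclass
from typing import List


@dataclass
class ReasoningStep:
    text: str         # one intermediate step produced by the model
    is_correct: bool  # judgment of this step (human or reward model)


def score_outcome(steps: List[ReasoningStep], final_answer_correct: bool) -> List[float]:
    """Outcome supervision: one reward based only on the final answer.

    Every step receives the same signal, so an early logical error is
    never penalized directly.
    """
    reward = 1.0 if final_answer_correct else 0.0
    return [reward] * len(steps)


def score_process(steps: List[ReasoningStep]) -> List[float]:
    """Process supervision: each reasoning step is rewarded on its own merit.

    A single wrong step earns zero reward even if later steps recover,
    which encourages chains of thought that are correct end to end.
    """
    return [1.0 if step.is_correct else 0.0 for step in steps]


if __name__ == "__main__":
    solution = [
        ReasoningStep("48 / 2 = 24", True),
        ReasoningStep("24 * 3 = 62", False),   # arithmetic slip mid-solution
        ReasoningStep("Therefore the answer is 62", False),
    ]
    print("outcome supervision:", score_outcome(solution, final_answer_correct=False))
    print("process supervision:", score_process(solution))
```

Under process supervision the faulty middle step receives no reward, so the model is pushed toward correcting that specific step rather than simply gambling on the final answer.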

Google’s chatbot, Bard, produced an untrue statement in a promotional video in February 2023. More recently, attorneys in a New York federal court case submitted a filing that cited bogus cases fabricated by OpenAI’s ChatGPT, and they now face possible legal sanctions over the faulty information. The researchers at OpenAI claim that their proposed strategy could also lead to more explainable AI, since models would follow a more human-like chain-of-thought approach.

OpenAI’s proposed strategy rewards AI models for accurate reasoning rather than only for the final conclusion. This could improve the accuracy and reliability of AI systems in domains that require multi-step reasoning and help prevent them from drawing false conclusions.

