OpenAI’s New Chatbot Training Method Is Not a Breakthrough, But a Setback


OpenAI has released a research paper and accompanying blog post describing what it presents as a step forward in the development of chatbots. These systems, known as “instruction-following large language models” (OpenAI’s ChatGPT and rivals such as Google’s Bard and Anthropic’s Claude), have the potential to transform businesses, according to the AI engineers who build them. However, they are often unreliable and prone to errors, and they can pose risks such as outputting toxic language or encouraging unsafe or illegal behaviour, leading many firms to search for ways to mitigate these problems.

OpenAI’s latest research centres on a process known as “reinforcement learning from human feedback”, intended to tame the incorrect responses these models often produce. The idea is that humans select, from several responses the model generates, the one that best answers the inquiry, and the model’s parameters are then adjusted to favour answers of that kind. OpenAI has refined this approach by asking the models to work through problems step by step, with engineers rating each intermediate step of the reasoning rather than only the final result, which is meant to teach the models the process of logical problem-solving and not just its outcome. Ultimately, however, such machines will still only output what humans have taught them, leading to questions as to whether any such system can genuinely exhibit creativity.
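To make the distinction between the two forms of feedback concrete, here is a minimal, schematic Python sketch. It is not OpenAI’s code, and every function name and rating in it is a hypothetical stand-in for what human annotators would actually provide: outcome-based feedback scores only the final answer, while step-by-step (process) feedback scores each intermediate reasoning step and favours the chain whose steps are rated most highly.

```python
from typing import Callable, List

# Schematic sketch only: the ratings below stand in for human annotator labels.

def outcome_feedback(candidates: List[str], rate_answer: Callable[[str], float]) -> str:
    """Outcome supervision: a human rates only the final answers,
    and the highest-rated candidate is the one reinforced."""
    return max(candidates, key=rate_answer)

def process_feedback(chains: List[List[str]], rate_step: Callable[[str], float]) -> List[str]:
    """Process supervision: a human rates every intermediate reasoning step,
    and the chain with the best average step rating is reinforced."""
    return max(chains, key=lambda steps: sum(rate_step(s) for s in steps) / len(steps))

if __name__ == "__main__":
    # Hypothetical ratings a human reviewer might assign.
    answer_ratings = {"42": 1.0, "41": 0.0}
    step_ratings = {"add the numbers": 1.0, "double-check the sum": 0.9, "guess": 0.2}

    best_answer = outcome_feedback(["42", "41"], lambda a: answer_ratings.get(a, 0.0))
    best_chain = process_feedback(
        [["add the numbers", "double-check the sum"], ["guess"]],
        lambda s: step_ratings.get(s, 0.0),
    )
    print(best_answer)  # "42"
    print(best_chain)   # ["add the numbers", "double-check the sum"]
```

In the real systems these human ratings would be used to train a reward model that in turn steers the language model’s parameters; the selection logic above is only meant to illustrate the difference between rating final answers and rating each step.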

The difficulty of aligning these models with human values adds credibility to the argument that we have yet to see a truly intelligent machine capable of exhibiting creativity.

OpenAI’s findings could therefore be a step backwards in the pursuit of genuinely intelligent machines rather than the breakthrough the company presents them as.
