ChatGPT, an AI language model, is capable of producing convincing yet false statements. Recently, UC Berkeley alumnus John Schulman, a lead developer of ChatGPT at OpenAI, discussed the issue during a public lecture hosted by Berkeley. He offered ways to tackle the problem and argued that reinforcement learning could give AI models the ability to recognize gaps in their own knowledge and answer questions more truthfully.
In Berkeley Talks episode 166, Schulman explained that an AI language model is trained to maximize the likelihood of its responses given the text it has seen. When its knowledge is incomplete, however, a model will sometimes fill the gap with fluent but inaccurate information, or make confident guesses from imperfect knowledge. He called these occurrences "hallucinations" and noted that giving the model permission to respond with "I don't know," or to otherwise express uncertainty, could help reduce the problem.
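To make the training objective concrete, here is a toy sketch (not ChatGPT's actual training code): a language model assigns a probability to each possible next token, and training adjusts the model to maximize the log-likelihood of the tokens that actually follow. In this illustration the "model" is just a fixed lookup table standing in for a neural network's output.

```python
import math

# Hypothetical conditional probabilities P(next token | context),
# a stand-in for what a trained language model would predict.
next_token_probs = {
    ("the",): {"cat": 0.5, "dog": 0.3, "answer": 0.2},
    ("the", "cat"): {"sat": 0.6, "ran": 0.4},
}

def sequence_log_likelihood(tokens):
    """Sum of log P(token | preceding context) over the sequence."""
    total = 0.0
    for i in range(1, len(tokens)):
        context = tuple(tokens[:i])
        prob = next_token_probs[context][tokens[i]]
        total += math.log(prob)
    return total

# Training pushes these log-probabilities upward for observed text;
# here the sequence "the cat sat" scores log(0.5) + log(0.6).
print(round(sequence_log_likelihood(["the", "cat", "sat"]), 3))  # -1.204
```

Because the objective rewards producing the most plausible continuation, a model with incomplete knowledge can produce a fluent response that happens to be false, which is the failure mode Schulman described.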
Schulman believes the process can be made more reliable and predictable. He proposed using reinforcement learning to teach AI models to recognize when they are uncertain and to respond accordingly rather than guess.
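One way to see how reinforcement learning could encourage this behavior is through reward shaping. The sketch below uses illustrative numbers of my own choosing, not figures from the lecture: if a confident wrong answer is penalized more heavily than an honest "I don't know," then guessing only maximizes expected reward when the model is sufficiently sure of itself.

```python
# Illustrative reward scheme (hypothetical values): a wrong answer
# costs more than abstaining, so honesty about uncertainty pays off.
REWARDS = {"correct": 1.0, "i_dont_know": 0.0, "wrong": -4.0}

def expected_reward(p_correct, action):
    """Expected reward for guessing vs. abstaining.

    p_correct: the model's probability of guessing the right answer.
    action: "guess" or "abstain".
    """
    if action == "abstain":
        return REWARDS["i_dont_know"]
    return p_correct * REWARDS["correct"] + (1 - p_correct) * REWARDS["wrong"]

# Under these numbers, guessing beats abstaining only when
# p*1 - (1-p)*4 > 0, i.e. when p_correct exceeds 0.8.
print(expected_reward(0.9, "guess"))  # 0.5: confident guess is worthwhile
print(expected_reward(0.5, "guess"))  # -1.5: better to say "I don't know"
```

A policy trained against this kind of reward would learn to abstain below its confidence threshold, which matches the behavior Schulman described wanting from the model.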
John Schulman is a UC Berkeley alumnus who has built a career in artificial intelligence and robotics. After earning his Ph.D. at UC Berkeley, he co-founded the AI research company OpenAI, where he became a lead developer of ChatGPT, a conversational natural language system. Schulman believes artificial intelligence has the potential for incredible breakthroughs in the near future.
OpenAI is an AI research company that Schulman co-founded in 2015 along with several other researchers and entrepreneurs. The company aims to develop artificial intelligence that is safe and broadly beneficial. Its ChatGPT system, released to the public in late 2022, is designed to hold natural conversations and answer questions across a wide range of topics.