The artificial intelligence company OpenAI has announced a potential solution to one of the biggest limitations of its popular chatbot, ChatGPT. The program is known for its tendency to hallucinate, citing made-up facts, but OpenAI plans to reduce these errors by introducing process supervision. Currently, most models are evaluated only on their final output, whereas process supervision evaluates each step of the AI’s reasoning process as it unfolds. This could enable ChatGPT to solve more complex tasks, reduce factual errors, and produce more interpretable reasoning. However, some experts doubt the effectiveness of the proposal and have called for more evidence of its success. OpenAI has not specified when it will apply process supervision to ChatGPT, but the paper describing the new technique may undergo peer review in the future.
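To illustrate the difference, here is a minimal sketch in Python of the two approaches: outcome supervision scores only a model’s final answer, while process supervision scores every intermediate reasoning step. The function names here (score_final_answer, score_step) are hypothetical stand-ins for a trained reward model and do not reflect OpenAI’s actual implementation.

```python
# Conceptual sketch of outcome vs. process supervision.
# score_final_answer and score_step are hypothetical stand-ins
# for a trained reward model, not OpenAI's actual API.

from typing import Callable, List

def outcome_supervision(steps: List[str],
                        score_final_answer: Callable[[str], float]) -> float:
    """Reward only the final answer, ignoring how the model got there."""
    return score_final_answer(steps[-1])

def process_supervision(steps: List[str],
                        score_step: Callable[[str], float]) -> List[float]:
    """Reward each intermediate reasoning step individually,
    so an error is flagged at the point where it first appears."""
    return [score_step(step) for step in steps]

# Example: a chain-of-thought solution to a simple math problem.
solution_steps = [
    "Let x be the unknown quantity.",
    "2x + 3 = 11, so 2x = 8.",
    "Therefore x = 4.",
]
```

Under process supervision, a low score on the second step would pinpoint exactly where the reasoning went wrong, rather than only penalizing an incorrect final answer.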
OpenAI is an artificial intelligence company known for its development of some of the world’s most advanced AI technologies, including ChatGPT. The company is dedicated to creating safe artificial intelligence that benefits humanity while reducing associated risks.
Ben Winters is the Senior Counsel at the Electronic Privacy Information Center. He has expressed concerns regarding the effectiveness of OpenAI’s solution and has called for more evidence demonstrating its ability to reduce misinformation and incorrect results.