Researchers are attempting to solve the problem of hallucinations in AI chatbots, which often provide unreliable information because they lack an understanding of the real world. For example, OpenAI's ChatGPT and Microsoft's Bing have been found to give users false information and often make up answers rather than admitting they don't know. MIT researchers have proposed having multiple chatbots generate answers and debate one another until they converge on an answer, while other companies use human trainers to feed better answers back to the bots. However, no solution has yet been found to prevent hallucinations, which remain a key focus of the AI community.
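The article does not describe an implementation, but the multi-chatbot debate idea can be sketched in a few lines. The snippet below is a minimal, self-contained illustration of the general approach under stated assumptions: several agents answer a question, see one another's answers, optionally revise, and a majority answer is kept. The `ask_model` function is a hypothetical stub (it returns canned answers so the example runs offline) and would be replaced by real chatbot API calls in practice.

```python
from collections import Counter

def ask_model(agent_id: int, prompt: str) -> str:
    """Hypothetical stand-in for a chatbot call; returns canned answers
    so the sketch runs without any external API."""
    canned = ["Paris", "Paris", "Lyon"]
    return canned[agent_id % len(canned)]

def debate(question: str, num_agents: int = 3, rounds: int = 2) -> str:
    """Each agent answers, is shown the other agents' answers, and may revise;
    after the final round the most common answer is returned."""
    answers = [ask_model(i, question) for i in range(num_agents)]
    for _ in range(rounds):
        revised = []
        for i in range(num_agents):
            others = [a for j, a in enumerate(answers) if j != i]
            prompt = (f"{question}\nOther agents answered: {others}. "
                      "Reconsider and give your final answer.")
            revised.append(ask_model(i, prompt))
        answers = revised
    # Majority vote as a simple stand-in for "debating until they converge".
    return Counter(answers).most_common(1)[0][0]

if __name__ == "__main__":
    print(debate("What is the capital of France?"))
```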
OpenAI, Google, and Microsoft are among the companies the article mentions as working to address hallucinations in AI chatbots. OpenAI CEO Sam Altman, who testified before Congress that AI could spread disinformation, is also featured. The article additionally cites MIT researcher Yilun Du and Microsoft senior research scientist Yis Kamal.
The article does not list specific steps, but one proposed method involves a system called SelfCheckGPT, in which the same chatbot is asked the same question multiple times; if the answers differ, they are flagged as possibly containing fabricated information.
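As a rough illustration of that consistency check, the sketch below samples several answers to the same question and flags the output when pairwise agreement is low. This is a simplification for illustration only: the actual SelfCheckGPT work uses stronger scoring methods, the `ask` callable is a hypothetical stand-in for a real chatbot, and the `threshold` value is an assumed parameter, not one taken from the article.

```python
from difflib import SequenceMatcher

def sample_answers(ask, question: str, n: int = 5) -> list[str]:
    """Ask the same chatbot the same question n times (with sampling enabled)."""
    return [ask(question) for _ in range(n)]

def consistency_score(answers: list[str]) -> float:
    """Average pairwise string similarity; low scores mean the answers disagree."""
    pairs = [(a, b) for i, a in enumerate(answers) for b in answers[i + 1:]]
    if not pairs:
        return 1.0
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

def flag_if_inconsistent(ask, question: str, threshold: float = 0.6) -> bool:
    """Return True when sampled answers diverge enough to suspect fabrication."""
    return consistency_score(sample_answers(ask, question)) < threshold

if __name__ == "__main__":
    # Stubbed chatbot (hypothetical): varying answers mimic an unsure model.
    import itertools
    fake_replies = itertools.cycle(["It was founded in 1952.",
                                    "It was founded in 1947.",
                                    "It was founded in 1952."])
    print(flag_if_inconsistent(lambda q: next(fake_replies),
                               "When was the lab founded?"))
```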