OpenAI, a leading artificial intelligence research laboratory, is working hard to make its chatbot, ChatGPT, safer and less biased. Bing, an AI search engine created by Microsoft, has received criticism for making up strange and sometimes creepy responses when interacting with people. OpenAI is now taking action to avoid similar incidents with its ChatGPT.
One problem with AI is that sometimes the models hallucinate and make up answers that are untrue, which can cause disappointment and distrust among users. OpenAI has used a technique called reinforcement learning from human feedback to improve the reliability of ChatGPT. This involves asking users to choose between different outputs and rank them based on factualness and truthfulness. Microsoft is suspected of not using this technique when creating Bing.
OpenAI is not stopping there, however. It is also cleaning up its dataset and removing any examples where the ChatGPT model has expressed a preference for false information. Users have tried to prompt the ChatGPT chatbot to generate racist or conspiratorial content, so OpenAI is monitoring the prompts used by these users to avoid these dark results.
OpenAI acknowledges the importance of gathering feedback from the public to improve its models. The company plans to use surveys or citizens’ assemblies in the future to discuss what content should be completely banned. For example, while showing nudity in art may not be considered vulgar, it may not be acceptable in the classroom context of ChatGPT.
Overall, OpenAI is taking positive steps to make its ChatGPT chatbot safer and more reliable than similar AI models. It is clear that AI still has a long way to go before it can be entirely trusted, but OpenAI is demonstrating its commitment to addressing AI’s faults and limitations.