Data scientists have long struggled to understand why large language models (LLMs) like OpenAI's ChatGPT behave the way they do, for instance why they sometimes invent facts. This morning, OpenAI released a new tool that identifies which parts of an LLM are responsible for particular behaviors, and its code is open source on GitHub.
The tool uses one language model to explain the inner workings of simpler models, such as GPT-2. It operates by running text inputs through the model and recording when individual neurons activate. GPT-4, OpenAI's state-of-the-art text-generating AI, is then used to produce a natural-language explanation of each neuron's behavior and to score how well that explanation matches the neuron's actual activity.
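The loop described above can be sketched in a few lines. This is a toy illustration, not OpenAI's actual code: `neuron_activation` stands in for a real activation read out of GPT-2, and the prompt-building step stands in for the call to GPT-4; both names and the money-word behavior are assumptions made for the example.

```python
# Hypothetical sketch of the "record activations, then ask for an
# explanation" step. Real activations come from a model like GPT-2;
# here a toy neuron fires on money-related tokens instead.

def neuron_activation(token: str) -> float:
    """Toy stand-in for a neuron read from the model."""
    money_words = {"dollar", "euro", "price", "cost", "pay"}
    return 1.0 if token.lower() in money_words else 0.0

def record_activations(text: str):
    """Run text 'through the model' and record per-token activations."""
    return [(tok, neuron_activation(tok)) for tok in text.split()]

def build_explainer_prompt(activations):
    """Format (token, activation) pairs for the explainer model (GPT-4 in
    OpenAI's tool); the prompt wording here is invented for illustration."""
    lines = [f"{tok}\t{act:.1f}" for tok, act in activations]
    return "Explain this neuron's behavior:\n" + "\n".join(lines)

acts = record_activations("The price of the ticket is ten dollars")
print(build_explainer_prompt(acts))
```

In the real tool, the explainer model would return a short description such as "fires on references to money", which is then scored against the neuron's measured behavior.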
Using this technique, the OpenAI researchers produced explanations for about 1,000 neurons in GPT-2. They acknowledged, however, that this number is small relative to the model's total neuron count and that the explanations are far from providing much practical value yet. Nonetheless, OpenAI believes the tool may eventually help reduce bias and toxicity in language models.
OpenAI's scalable alignment team leader, Jeff Wu, stressed that the use of GPT-4 was incidental and, if anything, illustrative of GPT-4's weaknesses in this area. He noted that other language models could be used instead, and that the tool could be tailored to explain what sources, such as particular search engines or websites, a neuron appears to draw on.
OpenAI was founded in December 2015 by high-profile investors including Elon Musk, Peter Thiel, Reid Hoffman, and Jessica Livingston, and has become a leader in artificial intelligence. Co-founder and CTO Greg Brockman leads the company's Turing Fellowship and OpenAI-Commons software team, which develops a range of AI products.

William Saunders, manager of OpenAI's interpretability team, believes the new tool offers a way to build trust in AI and to anticipate a system's problems before they surface. Wu, for his part, says the tool can be run automatically over every single neuron, generating an explanation of each neuron's behavior and scoring how well that explanation matches the real behavior. He hopes this approach will lead to a better understanding not only of what individual neurons respond to, but of model behavior more broadly: the circuits models compute and how neurons affect one another.
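The scoring step Wu describes can be sketched as follows. The idea is that an explanation is used to *simulate* what the neuron's activations should be, and the score measures how well the simulation tracks the measured activations. Using Pearson correlation as the scorer is an assumption made for this example, not a statement of OpenAI's exact metric; the sample values are invented.

```python
# Hedged sketch of explanation scoring: compare activations predicted
# from an explanation against the neuron's measured activations.

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0
    return cov / (sx * sy)

def score_explanation(real, simulated):
    """Score in [-1, 1]: 1.0 means the explanation predicts the
    neuron's activity perfectly on this text."""
    return pearson(real, simulated)

real      = [0.9, 0.1, 0.8, 0.0, 0.7]  # measured per-token activations
simulated = [1.0, 0.0, 1.0, 0.0, 1.0]  # activations predicted from the explanation
print(score_explanation(real, simulated))
```

A high score suggests the explanation captures what the neuron responds to; a low score flags it for revision, which is what lets the loop run unattended across every neuron.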