Contrary to alarming headlines, OpenAI’s secretive project known as Q* is unlikely to pose a threat to humanity. Recent reports suggested that the company had developed an AI technology that could endanger humankind, but closer examination indicates the project may be neither as groundbreaking nor as dangerous as it first appeared.
According to initial reports from Reuters and The Information, OpenAI staff members wrote a letter to the company’s board warning about the potential danger of an internal research project called Q*. The project was said to be able to solve certain math problems, albeit only at a grade-school level, and the researchers behind the letter reportedly believed it could represent a significant technical breakthrough. It is now disputed, however, whether the board ever received the letter.
Despite the attention surrounding Q*, experts in the field are skeptical about the project’s significance. Many researchers, including Yann LeCun, Chief AI Scientist at Meta, believe that Q* is simply an extension of existing work at OpenAI and other AI research labs. Indeed, a lecture given by OpenAI co-founder John Schulman seven years ago mentioned a mathematical function called Q*, suggesting that the name, at least, is not new.
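For background on the notation itself: in reinforcement learning textbooks, Q* denotes the optimal action-value function, the expected return from taking action a in state s and acting optimally thereafter. Its standard definition via the Bellman optimality equation (general theory, nothing specific to OpenAI’s project) is

$$Q^*(s, a) = \mathbb{E}\!\left[\, r + \gamma \max_{a'} Q^*(s', a') \,\middle|\, s, a \,\right],$$

where r is the immediate reward, s' is the resulting state, and γ ∈ [0, 1) is the discount factor.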
Moreover, the Q in the name Q* is believed to refer to Q-learning, a well-established reinforcement learning technique in which an agent improves at a task by trial and error, learning from rewards which action is best in each situation. The asterisk, meanwhile, may be a nod to A*, a classic search algorithm that finds least-cost routes between nodes in a graph. Both techniques have been around for decades; Google DeepMind used Q-learning in 2014 to build an AI system capable of playing Atari 2600 games at a human level.
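To make the technique concrete, here is a minimal sketch of tabular Q-learning in Python on a toy five-state environment. Everything here is illustrative textbook material, not code from OpenAI’s project; the core idea is the update rule that nudges Q(s, a) toward the observed reward plus the discounted value of the best next action:

```python
import random

# Toy chain environment: states 0..4; action 0 moves left, action 1 moves right.
# Reaching state 4 gives reward 1 and ends the episode.
N_STATES, N_ACTIONS = 5, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = next_state == N_STATES - 1
    reward = 1.0 if done else 0.0
    return next_state, reward, done

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

for _ in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: usually exploit the current estimate, occasionally explore.
        if random.random() < EPSILON:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
        target = reward + GAMMA * max(Q[next_state])
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = next_state

print(Q)  # estimated action values; "go right" wins in states near the goal
```

A*, by contrast, is not a learning method at all but a graph-search algorithm that uses a heuristic to find least-cost paths; much of the speculation about Q* amounts to guessing that it combines learned value estimates like these with A*-style search.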
Although the exact nature of Q* remains uncertain, researchers speculate that it could be connected to existing approaches for solving grade-school and high-school math problems with AI. Nathan Lambert, a research scientist at the Allen Institute for AI, points out that OpenAI has previously published work on improving the mathematical reasoning of language models, and suggests that Q* is more likely to yield incremental gains, such as better code assistance in ChatGPT, than a dangerous leap.
The media narrative surrounding OpenAI’s pursuit of artificial general intelligence (AGI) has also been called into question. Though Reuters implied that Q* might be a step toward AGI, leading AI researchers, including Mark Riedl, a computer science professor at Georgia Tech, dispute this framing. There is no evidence that OpenAI’s language models, or any other technology under development at the company, are on a path toward AGI, let alone a doomsday scenario.
Riedl notes that OpenAI has mainly been a fast follower in the field, scaling up existing ideas rather than inventing new ones, and that many of the concepts it explores are also being pursued by researchers at other organizations. Pursuing Q-learning and A*, or some combination of the two, would align with current trends in AI research across academia and industry.
That said, Q*, which reportedly involved OpenAI chief scientist Ilya Sutskever, may still contribute to real advances. In May, OpenAI researchers published a paper on improving the step-by-step reasoning of language models, and if Q* employs similar techniques, it could meaningfully enhance their capabilities. By rewarding or steering the reasoning chains these models produce, OpenAI might guide them toward more logical outcomes, reducing the risk of arriving at incorrect or even harmful conclusions.
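As a rough illustration of what steering reasoning chains can look like, here is a hedged sketch of best-of-n reranking with a step-level scorer, in the spirit of the process-supervision idea in that May paper. The function names (pick_best_chain, score_step) and the toy scorer are hypothetical stand-ins, not OpenAI’s actual method or code:

```python
from typing import Callable

def pick_best_chain(
    chains: list[list[str]],             # candidate reasoning chains, each a list of steps
    score_step: Callable[[str], float],  # hypothetical step-level reward model, scores in [0, 1]
) -> list[str]:
    """Rerank candidate chains by the product of their per-step scores.

    Scoring every step, rather than only the final answer, penalizes any
    chain that contains even one bad step -- the intuition behind process
    supervision as opposed to outcome supervision.
    """
    def chain_score(chain: list[str]) -> float:
        score = 1.0
        for step in chain:
            score *= score_step(step)
        return score

    return max(chains, key=chain_score)

# Toy usage: a stand-in scorer that penalizes steps containing an obvious error.
demo_chains = [
    ["2 + 2 = 5", "therefore x = 5"],  # chain with a faulty step
    ["2 + 2 = 4", "therefore x = 4"],
]

def toy_scorer(step: str) -> float:
    return 0.1 if "5" in step else 0.9

print(pick_best_chain(demo_chains, toy_scorer))  # selects the second, correct chain
```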
In conclusion, despite the speculation surrounding its potential impact, OpenAI’s Q* project is unlikely to threaten humanity. It may well contribute to advances in language models, but it is built on existing AI techniques that many other organizations are also pursuing. OpenAI’s work on mathematical reasoning appears aimed at making its language models more capable and reliable, not at creating an existential risk.