A source with close ties to the company disclosed that the superintelligence breakthrough may have fueled the board's firing of Altman.
OpenAI has gained a great deal of traction over the past few years, predominantly because of its contributions to the AI space. Uncertainty has hung over the San Francisco-based firm since the board of directors abruptly fired CEO Sam Altman, citing a lack of confidence in his leadership.
However, Altman was reinstated as OpenAI CEO after more than 500 employees threatened to leave the company if the board didn't give him back his job, arguing that the move undermined the company's mission and vision.
And if you thought that was the biggest news to hit the headlines this week, think again. Staffers reportedly wrote a letter to the company's board describing a potential breakthrough in artificial intelligence before the weekend-long OpenAI fiasco transpired. According to The Information, the breakthrough could lead to the development of superintelligence within this decade or sooner, with OpenAI Chief Scientist Ilya Sutskever at the forefront.
For those unfamiliar with the concept, superintelligence is an AI system that is far more capable than chatbots like ChatGPT or Copilot and surpasses the cognitive abilities of humans. While that would be an incredible feat, it could also cause serious damage if robust measures and guardrails aren't in place to prevent it from spiraling out of control.
According to sources familiar with the matter, the potential superintelligence breakthrough fueled the board's decision to strip Altman of his CEO title. The source further disclosed that one of the company's top executives, CTO Mira Murati, spoke to employees on Wednesday, briefing them about the breakthrough under the project name Q* (Q-Star). Murati also told employees that a letter spotlighting the breakthrough had been sent to the board before Altman's firing.
A source with close ties to the Q-Star project indicated that the model can solve grade-school math problems reliably. While such tasks may seem trivial given the vast computing resources the model leverages, the result still shows a lot of promise because the breakthrough is so recent. OpenAI is likely to invest further in this line of research and push it to greater heights in the future.
Other models perform exceptionally well when generating text but struggle with more complex tasks like solving math problems, since those require deep understanding and reasoning, which in turn demand rigorous training.
According to Reuters, the researchers behind the project warned that the breakthrough could threaten humanity if adequate measures aren't in place to govern it. This is believed to be the reason behind Altman's firing, as the board members leaned toward a more cautious approach to the technology. Generative AI has already been under scrutiny, with many privacy and safety concerns surrounding it.
OpenAI's CEO, Sam Altman, has been particularly vocal about his belief that the AGI superintelligence benchmark could be hit this year. However, Microsoft's top executive, Satya Nadella, shared a different train of thought. When asked about the topic in an interview earlier this year, the CEO brushed the question aside, stating:
I’m much more focused on the benefits to all of us. I am haunted by the fact that the industrial revolution didn’t touch the parts of the world where I grew up until much later. So I am looking for the thing that may be even bigger than the industrial revolution, and really doing what the industrial revolution did for the West, for everyone in the world. So I’m not at all worried about AGI showing up, or showing up fast. Great, right? That means 8 billion people have abundance. That’s a fantastic world to live in.
Do you think the systems in place to govern such breakthroughs are effective? Share your thoughts with us in the comments.