Recent rumors have been put to rest: OpenAI is not currently developing GPT-5, the anticipated successor to its GPT-4 language model. At an MIT event, OpenAI CEO and co-founder Sam Altman said the company has no plans to develop the model at this time. Altman stressed the importance of AI safety, referencing the Future of Life Institute’s open letter, signed by his former OpenAI co-founder Elon Musk, which calls for giving society more control over AI development.
Altman noted that an earlier draft of the open letter had falsely claimed OpenAI was already training GPT-5, which he denied, saying the company is “not and won’t for some time”. His objection is not a dismissal of the letter’s concerns; rather, he sees it as missing technical nuance.
OpenAI is an advanced machine-learning research lab, known for popular applications such as GPT-4, the model that powers Microsoft’s Bing chatbot. To keep its AI models safe and reliable, OpenAI follows best practices for responsible AI development, including a bug bounty program for its GPT-4-powered chatbot, which offers rewards of up to $20,000 to those who identify potential issues.
Although Altman has asserted OpenAI’s commitment to developing safe AI, many governments remain cautious, with Italy ordering OpenAI to stop offering its chatbot in the country and Germany reportedly considering doing the same. In the U.S., the Commerce Department has urged caution and sought public comment on whether AI models should be certified before being launched in the states.
OpenAI started as a non-profit research lab but has since adopted a capped-profit structure, backed by a multi-billion-dollar investment from Microsoft, which reportedly holds a 49% stake. Microsoft has leveraged OpenAI’s GPT-4 engine to power its services, including Bing Chat, Bing Image Creator, Microsoft 365 Copilot, Azure OpenAI Service, and GitHub Copilot X. Despite OpenAI’s success, Elon Musk has been critical of how it is run, claiming it has become a “closed source, maximum profit company effectively controlled by Microsoft.”
OpenAI advocates for regulated, secure AI models. The company maintains that “powerful AI systems should be subject to rigorous safety evaluations,” and says it will actively engage with governments on the best forms of regulation. With that commitment to responsible development, OpenAI intends to keep working to unlock the potential of AI.