OpenAI has announced the establishment of a Safety and Security Committee to oversee its artificial intelligence (AI) projects. The nine-member committee, led by CEO Sam Altman and board chair Bret Taylor, is tasked with ensuring that the company's AI research and development are conducted safely.
The committee's formation comes shortly after OpenAI disbanded its internal Superalignment team, which was dedicated to AI safety. The new Safety and Security Committee will focus on strengthening the company's AI risk-mitigation processes and will submit its recommendations to the OpenAI board within 90 days.
Alongside the committee news, OpenAI has revealed that its engineers are training the company's next frontier model, expected to advance its capabilities on the path to Artificial General Intelligence (AGI). The model is anticipated to be a significant step beyond the recently unveiled GPT-4o, and Chief Technology Officer Mira Murati has hinted that a major update to GPT-4 is slated for later this year.
OpenAI has reportedly relied on Microsoft's public cloud infrastructure for much of its AI research, and the upcoming large language model (LLM) is expected to have even more parameters for enhanced data processing. There is speculation that a prototype of the next-generation model, potentially named GPT-5, has already been tested by select users, who have described it as a significant improvement over GPT-4.
As OpenAI continues to push the boundaries of AI research and development, safety and security remain a central focus. These strategic initiatives, coupled with advances in its AI models, underscore OpenAI's stated commitment to responsible and innovative progress in the field of artificial intelligence.