OpenAI has taken a significant step toward strengthening its safety and security practices by establishing a new oversight committee to guide critical decisions in this area. The company, best known for its advances in artificial intelligence and machine learning, is forming the committee as it begins training its next AI model.
The committee will be led by CEO Sam Altman and board members Adam D’Angelo, Nicole Seligman, and Bret Taylor. Drawing on technical and policy leads within the company as well as external experts, the committee will evaluate existing processes, recommend improvements, and help ensure that safety and security remain top priorities at OpenAI.
The move comes after reports that OpenAI disbanded its Superalignment team, which focused on preventing AI systems from behaving unpredictably. The committee’s formation also responds to concerns raised by the team’s former leaders, co-founder Ilya Sutskever and Jan Leike, who left the company over disagreements about its approach to safety.
By consulting internal and external experts, OpenAI aims to strengthen its safety culture and processes and to address the long-term risks associated with AI technology. The company says it is committed to adopting recommendations that prioritize safety and security, taking a balanced and proactive approach to managing potential risks.
The establishment of the oversight committee underscores OpenAI’s stated commitment to high safety standards in AI development and signals a move toward a more robust and comprehensive security framework.