OpenAI, a company backed by Microsoft Corp (NASDAQ:MSFT), has announced the formation of a new governance committee dedicated to assessing the safety and security of its artificial intelligence models. The decision follows the resignation of key executives and the subsequent disbanding of the internal team they led.
The committee will spend 90 days evaluating the safeguards in OpenAI's technology and will then present a report detailing its findings. The company plans to make the resulting recommendations public after a review by the full board. Amid these changes, OpenAI has also begun training its next AI model.
The reshuffling follows the departures of co-founder and chief scientist Ilya Sutskever and his key deputy, Jan Leike. Leike, who led the superalignment team focused on long-term AI risks, cited struggles over computing resources as a reason for leaving. With Sutskever's exit, OpenAI dissolved the team and redistributed its responsibilities to the research unit led by co-founder John Schulman, who now heads Alignment Science.
OpenAI has also revised a policy that penalized former employees' equity if they publicly criticized the company. Together, these developments aim to address concerns about the safety and security of the company's AI technologies.
In conclusion, OpenAI's new governance committee and restructuring efforts mark a pivotal moment for the company as it navigates challenges in the rapidly evolving field of artificial intelligence. Stay tuned for updates on how these changes will shape OpenAI's future developments.