OpenAI, the maker of the popular chatbot ChatGPT, has released a Preparedness Framework to guide the company’s safe development of increasingly powerful artificial intelligence systems. The framework, described as a living document, aims to track, evaluate, forecast, and protect against catastrophic risks associated with AI breakthroughs. The effort aligns with OpenAI’s voluntary commitments to the White House and other stakeholders to develop AI safely and transparently.
Anton Dahbura, an AI expert and co-director of the Johns Hopkins Institute for Assured Autonomy, commended OpenAI’s framework as a good first step. He also emphasized how difficult AI systems are to control, likening the document to preparation for a formidable challenge. The framework outlines OpenAI’s approach to evaluating emerging systems and deciding whether they are safe enough to proceed. It defines risk level labels and criteria for ranking risks in areas such as cybersecurity, CBRN threats (chemical, biological, radiological, and nuclear), and persuasion.
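To make the ranking scheme concrete, here is a minimal Python sketch of a per-category risk scorecard. It assumes the framework’s ordered low/medium/high/critical scale; the category names follow the article, the example values are invented, and the roll-up rule (taking the worst category level) is an assumption for illustration, not a quote from OpenAI’s document.

```python
from enum import IntEnum

# Ordered risk levels, assuming the framework's low/medium/high/critical scale.
class RiskLevel(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

# Hypothetical scorecard for one model across the tracked categories named
# in the article; the values here are invented for illustration only.
scorecard = {
    "cybersecurity": RiskLevel.LOW,
    "cbrn": RiskLevel.MEDIUM,
    "persuasion": RiskLevel.MEDIUM,
}

# One plausible roll-up: the model's overall level is its worst category.
overall = max(scorecard.values())
print(overall.name)  # MEDIUM
```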
OpenAI’s framework sets strict deployment and development criteria for AI models: only models with a post-mitigation score of medium or below can be deployed, and only models with a post-mitigation score of high or below can be developed further. The company also commits to seeking out “unknown unknowns,” working to proactively identify categories of catastrophic risk that have not yet been recognized.
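Expressed as code, those two gates reduce to threshold checks against the ordered scale sketched above. The snippet below states the rule exactly as the article describes it; the function names are hypothetical, not OpenAI’s.

```python
from enum import IntEnum

class RiskLevel(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

def may_deploy(post_mitigation: RiskLevel) -> bool:
    # Deployment gate: post-mitigation score of medium or below.
    return post_mitigation <= RiskLevel.MEDIUM

def may_develop_further(post_mitigation: RiskLevel) -> bool:
    # Development gate: post-mitigation score of high or below.
    return post_mitigation <= RiskLevel.HIGH

# A model scored HIGH after mitigations may still be developed, but not deployed.
assert may_deploy(RiskLevel.MEDIUM) and not may_deploy(RiskLevel.HIGH)
assert may_develop_further(RiskLevel.HIGH) and not may_develop_further(RiskLevel.CRITICAL)
```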
OpenAI CEO Sam Altman has acknowledged both the benefits and the risks of powerful AI models. While he believes the benefits outweigh the risks for the systems already rolled out to the public, he has emphasized the need for government regulation to mitigate potential risks. In the absence of comprehensive regulation, companies like OpenAI develop internal guidelines such as the recently released framework.
The Biden administration has also addressed responsible AI development by releasing a Blueprint for an AI Bill of Rights, with the president underlining the seriousness of the responsibility and the importance of getting it right. Dahbura highlighted how hard it is to control problems such as bias, along with the inherent imperfections of generative AI models, including hallucinations and misleading responses. Even so, OpenAI’s framework drew praise for being more detailed and less abstract than other high-level documents from AI developers.
Dahbura stressed that OpenAI’s proactive approach should not be treated as a superficial exercise. He emphasized the need for a sustained, continually maintained effort, with realistic expectations shared among all stakeholders, including users. OpenAI’s framework sets an example for safe AI development, but continued vigilance and collaboration remain crucial.
In the rapidly advancing field of AI, OpenAI’s framework represents a significant step toward responsible and safe development. By establishing evaluation processes, forming a safety advisory group, and setting strict criteria for deployment and development, OpenAI aims to protect against potential risks associated with AI breakthroughs. While challenges remain, the company’s proactive approach underscores the importance of ongoing efforts to ensure that AI’s benefits outweigh its potential hazards.