Last week, Sam Altman, CEO of OpenAI, presented to the US Congress his plan for the government to regulate advanced artificial intelligence (AI) companies such as his own. Beyond calling for the US to create a new agency to oversee AI, Altman proposed that safety standards and audits be enacted for the technology. OpenAI co-founders Altman, Greg Brockman, and Ilya Sutskever elaborated on this idea in a blog post, in which they recommended the International Atomic Energy Agency (IAEA) as a model for regulating superintelligent AI.
The IAEA, an organization headquartered in Vienna, works with over 170 member countries to promote the peaceful use of nuclear energy, create a framework to strengthen international nuclear safety, and uphold the Nuclear Non-Proliferation Treaty of 1968. While historically successful, however, the IAEA has also encountered difficulties. North Korea left the organization in 1994 after rejecting inspections and denuclearization. In more recent years, inspections have lagged due to budget cuts and the growing accessibility of nuclear technologies. Altman and his co-authors acknowledge these problems but argue they must be weighed against the risks of AI, which they see as mirroring the risks associated with nuclear technologies.
As for safety standards for AI, OpenAI seeks to avoid governing smaller models, focusing instead on the most capable ones. Initial proposals call for required international regulations aimed at decreasing existential risks, while leaving other specifics, such as defining what AI is permitted to communicate, to individual countries. This mirrors the IAEA's approach of concentrating on existential risks and deferring other matters to national governments.
OpenAI is a research laboratory founded in 2015 with the twin goals of researching and developing AI technology, and ensuring that this is done safely. Over the years, OpenAI has released many advances in AI, such as ChatGPT. Through this research, the founders of OpenAI have developed a deep understanding of the potential of advanced artificial intelligence and its safety implications. The OpenAI blog post urging international regulation of AI systems development is evidence of their commitment to responsible and proactive leadership.