OpenAI View on IAEA as AI Regulatory Blueprint


Last week, Sam Altman, CEO of OpenAI, presented to the US Congress his plan for the government to regulate advanced artificial intelligence (AI) companies, including his own. Beyond calling for the US to create a new agency to oversee AI, Altman proposed that safety standards and audits be enacted for the technology. OpenAI co-founders Altman, Greg Brockman, and Ilya Sutskever elaborated on this idea in a blog post, in which they recommended the International Atomic Energy Agency (IAEA) as a model for regulating superintelligent AI.

The IAEA, an organization headquartered in Vienna, works with over 170 member countries to promote the peaceful use of nuclear energy, create a framework for strengthening international nuclear safety, and uphold the Nuclear Non-Proliferation Treaty of 1968. While historically successful, however, the IAEA has also encountered difficulties. North Korea left the organization in 1994 after rejecting inspections and denuclearization. In more recent years, budget cuts and the growing accessibility of nuclear technologies have weakened its ability to conduct inspections. Altman and his co-authors acknowledged these problems but argued they must be weighed against the risks of AI, which they see as mirroring the risks associated with nuclear technologies.

As for safety standards, OpenAI seeks to exempt smaller models from governance and instead focus oversight on the most capable models. Its initial proposal outlines that an international body should set required regulations aimed at reducing existential risks, while leaving other specifics, such as defining what AI systems may communicate, to individual countries. This mirrors the IAEA's approach of concentrating on the gravest risks and deferring other matters to national governments.


OpenAI is a research laboratory founded in 2015 with the twin goals of researching and developing AI technology and ensuring that it is done safely. Over the years, OpenAI has released many advances in AI technology, such as ChatGPT. Through their years of research, the founders of OpenAI have developed a deep understanding of the potential of advanced artificial intelligence and its safety implications. The OpenAI blog post urging international regulation of AI systems development is evidence of their commitment to responsible and proactive leadership.

