OpenAI View on IAEA as AI Regulatory Blueprint

Last week, Sam Altman, CEO of OpenAI, presented to the US Congress his plan for government regulation of advanced artificial intelligence (AI) companies, including his own. Beyond calling for a new US agency to oversee AI, Altman proposed that safety standards and audits be enacted for the technology. OpenAI co-founders Altman, Greg Brockman, and Ilya Sutskever elaborated on this idea in a blog post recommending the International Atomic Energy Agency (IAEA) as a model for regulating superintelligent AI.

The IAEA, headquartered in Vienna, works with over 170 member countries to promote the peaceful use of nuclear energy, provide a framework for strengthening international nuclear safety, and uphold the 1968 Nuclear Non-Proliferation Treaty. While historically successful, the IAEA has also encountered difficulties. North Korea withdrew from the organization in 1994 after rejecting inspections and denuclearization, and in more recent years inspections have lagged owing to budget cuts and the growing accessibility of nuclear technologies. Altman and his colleagues acknowledge these shortcomings but argue that they must be weighed against the risks of AI, which mirror those associated with nuclear technologies.

As for safety standards, OpenAI seeks to exempt smaller models from governance, focusing instead on the most capable ones. Its initial proposal calls for countries to set out required regulations aimed at reducing existential risks, while leaving other specifics, such as defining what AI systems may communicate, to individual countries. This mirrors the IAEA's focus on existential risks while leaving other matters to national governments.


OpenAI is a research laboratory founded in 2015 with the twin goals of advancing AI technology and ensuring it is developed safely. Over the years, OpenAI has released many advances in AI, such as ChatGPT. Through years of research, its founders have developed a deep understanding of the potential of advanced artificial intelligence and its safety implications. The blog post urging international regulation of AI systems development is evidence of their commitment to responsible and proactive leadership.

