OpenAI, a San Francisco-based artificial intelligence company, recently addressed the need for increased regulation of AI, warning of the risks the technology poses. In a blog post, the team discussed the possibility of what they referred to as “existential risk” — risks severe enough to threaten humanity itself. As a point of comparison, they drew on nuclear energy, another technology that carries a high degree of risk, and argued that an authority similar to the International Atomic Energy Agency (IAEA) should be established to monitor and properly manage the advancement of AI technology.
OpenAI went on to discuss the potential upsides and downsides of AI. They pointed out that within a decade, AI systems could far exceed the skill level of experts in most domains and could carry out as much productive activity as one of today’s largest corporations. To take advantage of the immense potential this technology offers, they argued, the risks must be managed appropriately.
OpenAI, founded in 2015 by Tesla and SpaceX CEO Elon Musk along with Sam Altman, Greg Brockman, and Ilya Sutskever, is a research laboratory dedicated to using AI to solve the most difficult problems facing the world today. Their mission is to surpass “human intelligence, safely,” and they are developing tools to advance the research and development of artificial intelligence. The company has grown impressively since its founding and has rapidly become a major player in the conversation surrounding the regulation of AI technology.
In Elon Musk, OpenAI has a powerful motivating force behind its mission to ensure safety in the development of artificial intelligence. As CEO of Tesla and SpaceX, Musk has been a vocal advocate for emerging technology and for regulations to guide its advancement. By actively engaging in the debate over how AI should be regulated, Musk and OpenAI are having a significant impact on the conversation.