Artificial intelligence is advancing so quickly that urgent action is needed to regulate it and prevent potentially catastrophic outcomes. OpenAI co-founders Sam Altman, Greg Brockman, and Ilya Sutskever recently published a blog post on the need to govern AI systems before they significantly exceed AGI in capability. The post outlines three points as the basis for effective strategic planning: striking a balance between control of and innovation in AI systems, establishing an international authority to ensure safety standards are met, and building the technical capability to maintain control over superintelligent AI systems.
OpenAI is a global leader in artificial intelligence research, which makes its perspective on the topic especially informative. The company works to ensure the safety, reliability, and effectiveness of its research, and to use AI for the benefit of the world's citizens, businesses, and government agencies. OpenAI anticipates that within a decade, AI systems could exceed expert skill level in most domains. The company supports the development of these new tools while advocating for the governance of superintelligence to ensure their safety.
Sam Altman, the most prominent author of OpenAI's blog post, is the CEO of OpenAI and a technology entrepreneur. His involvement with the tech community extends beyond his role at OpenAI: he co-founded Loopt, an early location-based mobile social networking company, and later served as president of Y Combinator, the startup accelerator. In his testimony before Congress, Altman struck a cautious tone on the future of AI and expressed openness to regulation where needed. By advocating for an AI regulatory body, Altman aims to protect citizens from the potential ramifications of advanced AI technology while still encouraging the creativity of its developers.
As AI systems approach the productivity of today's largest corporations, OpenAI seeks to lead proactive initiatives on AI governance. Its three cornerstones are balancing innovation and control, creating a global regulatory agency, and building the technical capability to keep superintelligent AI safe. Such regulation and guidance are crucial both for the safety of society and for the creativity of AI developers. With its experts and advocates at the forefront of the discussion, OpenAI offers an informed opinion on AI regulation.