Humans Will Be Unable to Control ‘Superintelligent’ AI, Warns OpenAI Co-Founder
OpenAI co-founder Ilya Sutskever has warned that humans will have no means of reliably supervising superintelligent artificial intelligence (AI) systems. In a recent blog post co-authored with Jan Leike, OpenAI’s head of alignment, Sutskever cautioned that the immense power of superintelligence could lead to the disempowerment of humanity or even human extinction.
Sutskever and Leike emphasized that they are focused on the challenges posed by superintelligence, which would exceed even artificial general intelligence (AGI) in capability. They wrote that superintelligence could arrive this decade, underscoring how difficult technological progress is to predict.
The blog post highlighted that no solution currently exists for steering or controlling a potentially rogue superintelligent AI. Current techniques, such as reinforcement learning from human feedback (RLHF), depend on humans’ ability to supervise AI systems. Once those systems become much smarter than humans, that kind of supervision will no longer be effective. The authors stressed the need for new scientific and technical breakthroughs to align AI systems with human values.
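To see why RLHF depends on human judgment, it helps to look at where the training signal comes from. A minimal sketch, not OpenAI's implementation: in a common RLHF setup, a reward model is fit to human pairwise preferences using a Bradley-Terry-style loss, so every gradient step traces back to a human labeler's choice between two responses. The linear model, feature vectors, and learning rate below are all illustrative assumptions.

```python
import math

# Toy reward model: a linear score over hand-made feature vectors.
# The only training signal is a human's choice between two responses,
# which is exactly the supervision step that breaks down once models
# outstrip their supervisors.

def score(weights, features):
    """Linear reward model: r(x) = w . x."""
    return sum(w * f for w, f in zip(weights, features))

def preference_loss(weights, preferred, rejected):
    """Bradley-Terry loss: -log sigmoid(r(preferred) - r(rejected))."""
    margin = score(weights, preferred) - score(weights, rejected)
    return math.log(1 + math.exp(-margin))

def train_step(weights, preferred, rejected, lr=0.1):
    """One gradient step that pushes r(preferred) above r(rejected)."""
    margin = score(weights, preferred) - score(weights, rejected)
    grad_coeff = -1 / (1 + math.exp(margin))  # d(loss)/d(margin)
    return [w - lr * grad_coeff * (p - r)
            for w, p, r in zip(weights, preferred, rejected)]

# A human labeler preferred response A (features [1.0, 0.2])
# over response B (features [0.3, 0.9]).
weights = [0.0, 0.0]
for _ in range(100):
    weights = train_step(weights, [1.0, 0.2], [0.3, 0.9])

# After training, the reward model ranks the human-preferred response higher.
```

The point of the sketch is the dependency, not the arithmetic: if the human labeler can no longer tell which response is actually better, the reward model has nothing correct to learn from.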
OpenAI’s objective is to build a roughly human-level automated alignment researcher, then use vast amounts of compute to scale its efforts to align superintelligence. Concretely, the team aims to develop a scalable training method, validate the resulting model, and stress-test the entire pipeline by deliberately training misaligned models to confirm that their techniques detect the worst forms of misalignment.
Over the next four years, OpenAI plans to dedicate 20% of its computing power to the problem of superintelligence alignment, a commitment that reflects the urgency and importance the company assigns to the issue.
Earlier this year, Goldman Sachs estimated that widespread adoption of generative AI, if it delivers on its promised capabilities, could expose as many as 300 million jobs worldwide to loss or reduction.
OpenAI’s warning is a reminder that powerful AI systems require careful oversight and alignment to prevent risks to humanity. With superintelligent AI potentially on the horizon, scientists, researchers, and policymakers will need to make significant strides in developing effective control mechanisms and in aligning AI systems with human values.