Title: OpenAI Co-founder Warns of Controlling ‘Superintelligent’ AI to Prevent Human Extinction
Artificial intelligence leader OpenAI’s co-founder emphasizes the need to control superintelligence in order to safeguard humanity from potential extinction, warning that this groundbreaking technology could be extremely dangerous if left unchecked. Ilya Sutskever and Jan Leike, head of alignment at OpenAI, wrote in a blog post on Tuesday that superintelligence has the potential to address crucial global issues but could also lead to the disempowerment or extinction of mankind. They predict that such advancements could be realized within this decade.
To effectively manage these risks, the co-founders argue for the establishment of new governance institutions and the resolution of the problem of superintelligence alignment. This involves ensuring that artificial intelligence systems smarter than humans remain obedient to human intent. However, they concede that current techniques for aligning AI, such as reinforcement learning from human feedback, are insufficient for the scale of superintelligence. Therefore, they stress the urgent need for scientific and technical breakthroughs to develop effective control mechanisms.
To tackle these challenges, OpenAI’s new team has been dedicated to dedicating 20% of its secured computing power and four years toward finding solutions. They acknowledge the ambitious nature of their goal, but remain optimistic that through focused and concerted efforts, they can make progress towards controlling superintelligent AI.
In addition to their efforts in enhancing existing OpenAI models like ChatGPT and mitigating risks, the new team will focus on the machine learning aspects of aligning superintelligent AI systems with human intent. Their aim is to develop an automated alignment researcher that functions at a level comparable to humans. This will involve utilizing substantial computing resources to scale the model and iteratively align superintelligence.
OpenAI plans to devise a scalable training method, validate the resulting model, and stress test its alignment pipeline. By doing so, they hope to pave the way for effective control over superintelligent AI.
As it stands, preventing a potentially rogue, superintelligent AI remains a considerable challenge. OpenAI believes that addressing this issue requires a collaborative effort involving advancements in multiple fields.
While OpenAI acknowledges the uncertainty surrounding their ultimate success, they are committed to working diligently towards controlling superintelligent AI. By devoting significant resources and expertise to this endeavor, OpenAI hopes to safeguard humanity from the existential risks posed by this groundbreaking technology.
In conclusion, OpenAI’s co-founders emphasize the urgency of addressing the risks associated with superintelligent AI. They call for the development of new institutions for governing AI and ensuring alignment with human intent. OpenAI is poised to tackle these challenges head-on, dedicating substantial computing power and forming an expert team dedicated to controlling superintelligent AI. While success is not guaranteed, OpenAI remains optimistic that through focused efforts, significant progress can be made in managing the risks associated with superintelligent AI.