OpenAI, the creator of ChatGPT, is taking proactive steps to address the potential dangers of superintelligent artificial intelligence (AI). By forming a new team dedicated to mitigating these risks, OpenAI aims to safeguard humanity from the negative consequences superintelligent machines could bring.
Recognizing that scientific and technical breakthroughs are needed to control AI systems far smarter than humans, OpenAI plans to dedicate 20% of its compute to this effort over the next four years. The company believes that superintelligence could be the most impactful technology ever invented, with the potential to help solve many of the world's most pressing problems. At the same time, it acknowledges that the vast power of superintelligence carries significant risks, up to and including the disempowerment or even extinction of humanity.
Although the development of superintelligent AI may seem distant, OpenAI believes it could emerge within the next decade. That urgency has led the company to form the new team around a single challenge: aligning AI systems with human values and intentions. The team's goal is to build a roughly human-level automated alignment researcher using machine learning techniques. The plan proceeds in stages: train AI systems on human feedback, develop AI that can help evaluate other AI systems, and finally build AI that can conduct alignment research faster and more effectively than humans can. A sketch of the first stage follows below.
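The first of those stages, learning from human feedback, is typically implemented by training a reward model on pairs of responses that human labelers have ranked. The sketch below shows that standard recipe in PyTorch under illustrative assumptions: the RewardModel class, the embedding dimension, and the random tensors standing in for labeled data are all hypothetical, not OpenAI's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Scores a response representation; higher means more human-preferred."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

def preference_loss(model: RewardModel,
                    chosen: torch.Tensor,
                    rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry pairwise loss: push the score of the human-chosen
    # response above the score of the rejected one.
    return -F.logsigmoid(model(chosen) - model(rejected)).mean()

# Toy training loop: random tensors stand in for embeddings of labeled
# response pairs; a real pipeline would embed actual model outputs.
model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    chosen, rejected = torch.randn(32, 128), torch.randn(32, 128)
    loss = preference_loss(model, chosen, rejected)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In a full reinforcement-learning-from-human-feedback pipeline, the trained reward model would then guide the optimization of a language model; here it only illustrates the preference-learning step named in the paragraph above.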
While OpenAI acknowledges that solving the core technical challenges of superintelligence alignment within four years is an ambitious goal with no guarantee of success, it remains optimistic. The company is actively recruiting researchers and engineers for the team, stressing that machine learning expertise is central to the problem. OpenAI intends to share its progress and insights broadly, and it treats the alignment and safety of non-OpenAI models as an essential part of its work.
The broader concerns surrounding AI's advance are well documented. Geoffrey Hinton, a pioneer of the field, has cautioned that increasingly sophisticated AI systems pose growing dangers. OpenAI CEO Sam Altman has warned that AI could cause human extinction, comparing the risk to that of nuclear war and pandemics. Survey data shows that a significant majority of Americans worry AI could destroy civilization, and Warren Buffett has likened the creation of AI to the development of the atomic bomb.
While some experts, such as Meta's chief AI scientist Yann LeCun, dismiss these worries as unfounded, OpenAI is actively confronting the potential risks of superintelligence. By prioritizing alignment research and safety measures, the company hopes to lead the way in ensuring that AI systems remain beneficial and aligned with human values.