OpenAI, the company behind the popular AI chatbot ChatGPT, has announced an ambitious goal: solving the core technical challenges of superintelligence alignment within four years. In a blog post, the company revealed that it is forming a team of skilled machine learning researchers and engineers dedicated to this challenge, and it has set aside 20% of its computing resources over the next four years to support the effort.
While superintelligence may still seem like a distant concept, OpenAI believes it could become a reality within this decade. The company has therefore assembled a team, co-led by Ilya Sutskever (co-founder and Chief Scientist) and Jan Leike (Head of Alignment), whose primary objective is to solve the fundamental technical problems of superintelligence alignment in just four years. The team is composed of researchers and engineers from OpenAI's previous alignment team, along with experts from other departments across the organization.
OpenAI is committed to sharing the outcomes of this work broadly and considers contributing to the alignment and safety of non-OpenAI models an important part of its mission. The new team's work complements OpenAI's ongoing efforts to improve the safety of its current models, such as ChatGPT, and to address other risks associated with AI, including misuse, economic disruption, disinformation, bias and discrimination, addiction, and overreliance. Although the new team focuses on the machine learning challenges of aligning superintelligent AI systems with human intent, it actively collaborates with interdisciplinary experts to ensure its technical solutions account for broader human and societal concerns.
Superintelligence has the potential to tackle global challenges, but it also carries significant risks, up to and including human disempowerment or even extinction. OpenAI has acknowledged that its current alignment techniques, which rely on human supervision, will not scale to systems far smarter than their overseers. Determined to pursue its mission, the company is expanding the team and hiring research engineers, research scientists, and research managers. The positions call for individuals who align with OpenAI's mission, possess strong engineering skills, and thrive in a fast-paced research environment; desired skills include ML algorithm implementation, data visualization, and techniques for keeping AI systems under human control.
OpenAI's announcement comes at a time when AI regulation has become a pressing topic worldwide, with some commentators comparing the risks to those posed by nuclear weapons. OpenAI CEO Sam Altman recently testified before the US Senate on these concerns. Beyond its superintelligence alignment work, OpenAI has also launched a grant program to fund experiments in democratic processes for governing AI, committing $1 million to the teams with the most promising proposals.
OpenAI’s commitment to advancing the field of AI and preparing for the potential challenges posed by superintelligence highlights its dedication to ensuring the responsible and beneficial development of AI technologies. As the company continues to make significant strides, the broader scientific community eagerly awaits the outcomes of OpenAI’s research and its impact on the future of AI and humanity as a whole.