OpenAI, the creator of ChatGPT, has announced the formation of a new team called Superalignment, dedicated to the challenge of aligning superintelligence. Led by Ilya Sutskever, OpenAI's co-founder and Chief Scientist, and Jan Leike, the company's Head of Alignment, the team aims to solve the core technical problems of aligning superintelligent AI systems with human intent within four years.
To support this initiative, OpenAI is dedicating 20% of the compute it has secured to date to the effort over the next four years. The team brings together machine learning researchers and engineers from OpenAI's previous alignment team, along with experts from other parts of the company.
OpenAI is committed to sharing the outcomes of this work broadly, and it considers contributing to the alignment and safety of non-OpenAI models an essential part of its mission. The new team's efforts complement OpenAI's ongoing work to improve the safety of current models like ChatGPT and to address other AI-related risks, including misuse, economic disruption, disinformation, bias and discrimination, addiction, and overreliance.
Superintelligence has the potential to solve global challenges, but it also poses risks such as human disempowerment or even extinction. Therefore, it is crucial to develop effective methods of controlling and aligning superintelligent AI systems.
To staff the effort, OpenAI is actively hiring research engineers, research scientists, and research managers. Research engineers will write efficient code for machine learning training, run experiments, and collaborate within a small team. They will also explore oversight techniques, study generalization, manage datasets, investigate reward signals, predict model behaviors, and help design approaches for alignment research.
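As a flavor of what "investigating reward signals" can involve in practice, the sketch below trains a toy reward model on pairwise human preferences using the Bradley-Terry objective common in RLHF-style work. It is a minimal illustration, not OpenAI's code; the model, data, and names are all invented for the example.

```python
# Illustrative toy: learning a scalar reward signal from pairwise
# human preferences (Bradley-Terry loss, as used in RLHF-style work).
# All names and data here are hypothetical; this is not OpenAI's code.
import torch
import torch.nn as nn

class TinyRewardModel(nn.Module):
    """Maps a fixed-size feature vector for a response to a scalar reward."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # shape: (batch,)

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # -log sigmoid(r_chosen - r_rejected): pushes the preferred
    # response to score higher than the rejected one.
    return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

model = TinyRewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake "features" standing in for embeddings of paired responses;
# by construction the chosen ones have slightly larger feature values.
chosen = torch.randn(64, 16) + 0.5
rejected = torch.randn(64, 16)

for step in range(100):
    loss = preference_loss(model(chosen), model(rejected))
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final preference loss: {loss.item():.4f}")
```

Note the design choice: the model never sees absolute reward labels, only which of two responses was preferred; the loss simply pushes the preferred response's score above the rejected one's.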
Research scientists will develop new machine learning techniques, collaborate closely with colleagues, and help shape the company's research vision. Their responsibilities include designing experiments, studying generalization, managing datasets, exploring model behaviors, and devising novel approaches to alignment.
Research managers will oversee a team of research scientists and engineers working on alignment and generalization. They will be responsible for planning and executing research projects, mentoring team members, and fostering an inclusive culture. Leadership experience, alignment expertise, and a passion for OpenAI’s mission are desired for this role.
The announcement from OpenAI comes at a time when AI regulation is gaining significant attention worldwide, with concerns being raised about the potential risks and dangers associated with superintelligence. OpenAI’s CEO, Sam Altman, has even testified before the US Senate on this matter.
In addition to its alignment efforts, OpenAI has launched a program to fund experiments in democratic processes for deciding what rules AI systems should follow: ten grants of $100,000 each, $1 million in total, awarded to the teams with the most promising proposals.
OpenAI's ambitious plans and initiatives signal its commitment to advancing the field of AI while ensuring the technology's safe and beneficial use. With the Superalignment team and its ongoing research, OpenAI is positioned to make significant contributions to the development and responsible deployment of superintelligent AI systems.