OpenAI, the company behind the generative artificial intelligence platform ChatGPT, has recently announced the formation of a new research team dedicated to tackling the challenge of controlling superintelligent AI. Believing that superintelligent AI systems could arrive within this decade, OpenAI aims to achieve, within the next four years, the scientific and technical breakthroughs needed to steer and control AI systems far smarter than humans.
While superintelligence holds immense potential, OpenAI recognizes the grave risks it poses, including the disempowerment or even extinction of humanity. To address these concerns, OpenAI has assembled a team of machine learning experts, led by co-founder Ilya Sutskever and Head of Alignment Jan Leike. This team will consist of researchers from various OpenAI units and will be solely focused on the alignment of superintelligent AI.
OpenAI has committed to dedicating 20% of the compute it has secured to date to this effort and will build on its prior alignment research as a starting point. The company has devised a three-pronged strategy to create a roughly human-level automated alignment researcher capable of iteratively aligning superintelligence.
This strategy entails developing a scalable training method, validating the resulting model, and stress-testing the entire alignment pipeline with adversarial methods to confirm it can detect misalignment. OpenAI acknowledges that the goal is ambitious and success is not guaranteed, but maintains that the risks make tackling this problem worthwhile.
In an effort to foster innovation and collaboration, OpenAI recently launched a $1 million cybersecurity grant program for researchers working at the intersection of AI and cybersecurity. The program prioritizes defensive, practical applications of AI in cybersecurity and offers individual grants of up to $10,000.
However, OpenAI has also faced regulatory scrutiny following the launch of ChatGPT and its successor model, GPT-4. In the EU, the platform was temporarily banned in Italy until OpenAI addressed the regulator's data-protection concerns, and the company has faced opposition from consumer groups and critics who highlight the risks associated with the platform across sectors such as finance, Web3, security, news, and education.
Furthermore, OpenAI currently faces a class action lawsuit in the United States alleging that it illegally scraped the personal data of millions of individuals to train its AI models. The plaintiffs claim that OpenAI violated privacy and copyright laws by failing to obtain those individuals' consent.
To ease tensions with regulators, OpenAI CEO Sam Altman has engaged in discussions with EU authorities in Brussels, warning of the downsides of excessive regulation. Altman has also embarked on a global tour spanning more than 16 cities across three continents to address regulatory uncertainty.
As OpenAI forms its new research team to address the challenges of controlling superintelligent AI, the company remains dedicated to taking proactive measures to ensure the responsible development and deployment of AI technology for the benefit of humanity.