Top AI experts and CEOs, including Sam Altman of OpenAI, have issued a warning about the threat artificial intelligence (AI) poses to humanity. In a statement published by the Center for AI Safety, they called for mitigating the risk of extinction from AI to be treated as a global priority alongside other societal-scale risks such as pandemics and nuclear war. The statement was co-signed by Google DeepMind CEO Demis Hassabis, Geoffrey Hinton, and Yoshua Bengio. In March, Elon Musk and Steve Wozniak signed an open letter asking all AI labs to pause training of powerful AI systems for at least six months. Altman recently emphasized the need to think through the governance of superintelligence, which he said would require special treatment and coordination. The signatories said the statement aims to create common knowledge and open a discussion about the growing number of severe risks posed by advanced AI.
The Center for AI Safety is a US-based non-profit organization that aims to foster the development of AI in service of the common good. The center believes that AI has the potential to transform society positively if its risks and challenges are addressed and mitigated.
Sam Altman is the CEO of OpenAI, an AI research laboratory whose researchers and engineers aim to build safe AI systems that benefit humanity. Altman is a tech entrepreneur, investor, and the former president of Y Combinator, a start-up accelerator. He has been vocal about the need to govern AI, mitigate its risks, and advance its benefits for society.