Top AI researchers are challenging the prevailing doomer narrative of existential risk from runaway artificial general intelligence (AGI). A statement released yesterday, signed by hundreds of experts including the CEOs of OpenAI, DeepMind and Anthropic, argues that advanced AI poses a risk of extinction if its development is not managed safely. Critics of this ‘doomsday’ perspective counter that the focus on existential risk obscures other significant threats, including bias, misinformation, high-risk applications and cybersecurity. Many AI researchers do not regard AGI as a significant risk, and some worry that the x-risk perspective attracts outsized attention on social media and in the press, distorting the allocation of resources and attention away from more pressing AI harms.
OpenAI is an artificial intelligence research laboratory staffed by leading experts in the field. Its stated mission is to ensure that artificial general intelligence benefits all of humanity.
Eliezer Yudkowsky is an AI researcher, co-founder of the Machine Intelligence Research Institute and known for his foundational work on artificial intelligence alignment.