A group of leading AI researchers, engineers, and CEOs has sounded the alarm on the existential threat posed by AI to humanity. A statement released by the Center for AI Safety calls for mitigating the risks of AI to be a global priority alongside other societal-scale risks such as pandemics and nuclear war. Signatories include Demis Hassabis and Sam Altman, the CEOs of Google DeepMind and OpenAI respectively, as well as Geoffrey Hinton and Yoshua Bengio, two of the three recipients of the 2018 Turing Award. They warn that AI could become as dangerous as nuclear war and pandemics and must be treated accordingly. Critics argue that the risks posed by AI are exaggerated, but supporters counter that rapid progress in large language models, for instance, could lead to uncontrollable behavior once systems become highly sophisticated. Even without further advances, AI systems already pose threats such as facilitating mass surveillance and spreading misinformation and disinformation.
The Center for AI Safety is a San Francisco-based nonprofit that develops resources, tools, and safeguards for AI risk management and mitigation.
Sam Altman is a tech entrepreneur who co-founded Loopt and served as president of Y Combinator from 2014 to 2019. He is a co-founder and the CEO of OpenAI, the company behind ChatGPT.
Step 1: Understand the potential risks and threats posed by AI to humanity
Step 2: Join the call to make mitigating those risks a global priority, comparable to addressing pandemics and nuclear war
Step 3: Advocate for a measured approach to AI safety, without unduly curbing development or exaggerating risks
Step 4: Support research efforts into AI risk management and mitigation
Step 5: Engage with policymakers and regulators to shape future AI development in responsible and ethical ways