Tech industry leaders, academics, and other public figures have signed an open letter warning of the risks posed by artificial intelligence (AI), including the potential for an extinction-level event. The statement advocates international cooperation and regulation to ensure safety, likening the need to mitigate AI's risks to that of mitigating pandemics and nuclear war. Signatories include executives, engineers, and scientists from Microsoft and Google, as well as pioneering AI researchers who are now cautioning against the very technology they were instrumental in developing. While some believe the risks can be addressed through technical safeguards and guardrails, others argue that only broader cooperation and regulation can prevent the worst outcomes.
OpenAI, which is backed by Microsoft and whose CEO is a signatory to the open letter, is the creator of the popular AI chatbot ChatGPT, which the letter's supporters suggest requires greater controls before it continues operating at full capacity.
Dan Hendrycks, director of the Center for AI Safety, which released the letter, said that the sources of risk in AI development should be explored collectively, invoking Robert Oppenheimer and his reflections on the development of the atomic bomb. Meanwhile, Avivah Litan, a distinguished analyst at Gartner, warns that businesses already face short-term, imminent risks from AI, such as cyberattacks and societal manipulation.
It is now apparent that the risks posed by AI extend beyond immediate threats and could shape the future development of society. The calls for controls and regulation raise the question of how far AI is truly under control, and highlight the need for greater cooperation and investment to ensure it is developed safely.