AI, or artificial intelligence, is advancing at a pace that leading tech executives say poses a significant risk to society. Figures such as Microsoft CTO Kevin Scott and OpenAI CEO Sam Altman have warned that AI could be as dangerous to humanity as pandemics and nuclear war. A statement organized by the Center for AI Safety, signed by hundreds of executives and academics including researchers from Google's DeepMind, ChatGPT developer OpenAI, and AI start-up Anthropic, urged governments and organizations to prioritize AI regulation. The signatories pointed to risks including disruption of the job market, public health impacts, discrimination, impersonation, and the "weaponization of disinformation," while also calling for attention to the existential threats AI may pose to society.
ChatGPT, an AI-powered chatbot developed by OpenAI, is referenced in this article as an example of AI's rapid growth and proliferation.
Sam Altman is one of several tech leaders mentioned in this article warning of the potential dangers posed by artificial intelligence. Altman co-founded and leads OpenAI, the AI development organization behind ChatGPT.