AI Tools May Not Trigger Extinction, but Could Lead to Catastrophes, Warns Scientist
Artificial intelligence (AI) tools like ChatGPT may not bring about human extinction, but they could cause international catastrophes, according to leading cognitive scientist Gary Marcus. While there has been speculation about the probability of a human extinction-level event due to the rapid advancement of AI, Marcus believes these apocalyptic predictions are unlikely. Instead, he predicts that AI tools could pose a series of catastrophic risks stemming from deepfakes and market manipulation.
Marcus introduced the term p(catastrophe) to describe the chance of an incident that could result in the death of one percent or more of the population. He believes that this term is more appropriate than p(doom), which refers to the probability of AI causing the extinction of the human species. These catastrophes, Marcus explains, would not be existential threats but rather risks that could lead to significant societal damage.
The cognitive scientist emphasizes that these AI-related catastrophes would be initiated by humans themselves. Marcus highlights the growing use of AI-generated deepfakes and their potential to deceive voters and defraud individuals through voice impersonation. He warns that AI-generated misinformation poses a genuine threat to democracy and that current measures to combat it are insufficient.
This concern about the disruptive potential of deepfakes has also been echoed by Microsoft co-founder Bill Gates. He believes that deepfakes and AI-generated misinformation could undermine political processes worldwide. Gates warns that AI-generated deepfakes could be deployed to sway election outcomes, casting doubt on legitimate winners.
To illustrate the potential danger, Marcus references an AI-generated video posted on X.com in April, which falsely depicted former US Secretary of State Hillary Clinton endorsing Florida Governor Ron DeSantis for president. The video was later identified as a deepfake, with no evidence that Clinton ever made such an endorsement.
However, Marcus acknowledges that human beings are a resilient species and suggests that predictions of AI doomsday scenarios are overly extreme. He signed a letter last year supporting a moratorium on the development of further generative AI models, specifically citing the unreliability and potential problems associated with GPT-5. He clarifies that the signatories were not calling for a halt to all AI advancement but rather for the creation of AI systems that are more trustworthy, more reliable, and less likely to cause harm.
Marcus also raises concerns about the current design of AI systems, describing them as black boxes whose internal workings are neither transparent nor well understood. This opacity makes it difficult to debug the systems or provide guarantees about their behavior, adding further risk and uncertainty.
In conclusion, while the complete extinction of the human species due to AI is deemed unlikely by Gary Marcus, he warns of the potential for catastrophic events arising from AI tools, particularly in the form of deepfakes and market manipulation. As the use of AI continues to evolve, it is crucial to prioritize the development of trustworthy and reliable AI systems to mitigate these risks and safeguard against unintended consequences.