Elon Musk’s claims that artificial intelligence (AI) ‘will kill us all’ have ‘no proof – yet’, according to a former responsible AI program manager at Google. Musk made these assertions while attending the UK’s global AI Safety Summit in early November. Addressing the audience, he stated that although the chances are small, there is a possibility that AI could pose a threat to humanity. However, the ex-Google insider, who founded the organization Diverse AI to promote diversity in the AI sector, believes that these grandiose fears surrounding AI are currently unfounded.
Experts like Musk and the former Google insider, known as Duke, have expressed concerns about the potential dangers of AI. These dangers include the violation of human rights, the perpetuation of harmful stereotypes, privacy breaches, copyright infringement, the spread of misinformation, and cyber attacks. Some even worry about the use of AI in biological and nuclear warfare. Duke, however, maintains that there is currently no evidence any of these scenarios is materialising, though she acknowledges they could become real risks in the future.
Many of the exaggerated fears surrounding AI stem from generative AI and its alleged emergent properties. Generative AI refers to systems that produce new content, such as text or images, from what they have learned; emergent properties are capabilities such systems display despite never being explicitly trained for them, which raises questions about how far AI can advance beyond its initial programming. Duke suggests that these concerns rest on the idea that if AI continues to exhibit emergent properties, it could eventually exceed its creators’ intentions. She compares training AI to raising a child and emphasizes the importance of reinforcement learning, rewarding desirable behaviour and discouraging undesirable behaviour, to guide its development.
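To make the reinforcement-learning analogy concrete, here is a minimal sketch of the idea Duke describes: an agent tries candidate behaviours, and a feedback signal steers it toward the desirable ones over many repetitions, much as a parent rewards good conduct. The behaviour list and the `human_feedback` function are illustrative assumptions invented for this example; this is a toy bandit loop, not Duke’s or Google’s actual training setup.

```python
import random

# Illustrative behaviours an AI system might exhibit (hypothetical labels).
BEHAVIOURS = ["helpful_answer", "harmful_stereotype", "privacy_leak", "refuse_politely"]

def human_feedback(behaviour: str) -> float:
    """Stand-in for a human rater: +1 for desirable conduct, -1 otherwise."""
    return 1.0 if behaviour in ("helpful_answer", "refuse_politely") else -1.0

def train(episodes: int = 5000, epsilon: float = 0.1, lr: float = 0.1) -> dict:
    """Epsilon-greedy bandit: value estimates drift toward rewarded behaviours."""
    values = {b: 0.0 for b in BEHAVIOURS}
    for _ in range(episodes):
        if random.random() < epsilon:
            choice = random.choice(BEHAVIOURS)      # explore occasionally
        else:
            choice = max(values, key=values.get)    # otherwise exploit best estimate
        reward = human_feedback(choice)
        values[choice] += lr * (reward - values[choice])  # incremental update
    return values

if __name__ == "__main__":
    for behaviour, value in sorted(train().items(), key=lambda kv: -kv[1]):
        print(f"{behaviour:20s} {value:+.2f}")
```

Run repeatedly, the desirable behaviours converge toward a value of +1 and the undesirable ones toward -1, which is the cause-and-effect dynamic the parenting comparison points at: the system’s tendencies are shaped by the feedback its trainers choose to give.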
Duke underlines that humans are responsible for building and training AI, leaving little room for excuses if an intelligent machine were to go rogue. While AI development should follow the cause-and-effect principles of parenting, she argues that a global responsible AI framework is equally crucial. Such a framework, if established from the outset, could address many of these concerns and ensure the ethical use of AI. She emphasizes the need for proper regulation, particularly of governmental AI applications, to maximize the benefits while mitigating the risks.
In conclusion, although Elon Musk’s warnings about the potential dangers of AI have gained attention, Duke argues that there is presently no evidence to support them. While acknowledging the need for caution and responsible development, she emphasizes that humans have the power to shape AI’s trajectory and must take responsibility for training and guiding it. A global framework that prioritizes responsible AI practices can help mitigate the risks while harnessing AI’s potential for the greater good.