Former Google CEO Eric Schmidt recently spoke out about the dangers of artificial intelligence (AI) tools such as ChatGPT. He warned that AI could pose an “existential risk,” one capable of harming or even killing large numbers of people. The remarks have stirred concern across the tech world, with industry figures such as Elon Musk joining calls for a moratorium on AI development.
Google, which was led by Schmidt from 2001 to 2011, is known for its support of emerging technologies. The company has played an influential role in shaping the field of AI, developing tools such as Google Duplex that are capable of understanding natural language. Schmidt’s warning, however, could slow some of these advancements.
Schmidt’s concern centers on the misuse of AI in scenarios such as large-scale hacking or the discovery of dangerous new capabilities in biology. He explained that AI systems, if misused, could pose a serious threat to humanity, and argued that governments must act to prevent malicious use by establishing regulation and monitoring to ensure the technology’s safety.
Other tech leaders, including entrepreneur and engineer Elon Musk, have echoed Schmidt’s worries. They have raised similar concerns about AI-powered spread of false information and an increase in job losses due to automation.
The risks of advanced AI cannot be overstated, and regulators must take steps to ensure the technology is developed and deployed responsibly. Eric Schmidt’s warnings, and the support of other industry leaders, underscore the need for responsible governance and monitoring of AI-based technologies so that their potential can be harnessed for the benefit of humanity.