Dr. Geoffrey Hinton, a Turing Award winner and former Google scientist, was once known for enthusiastically exclaiming to his AI students, “I understand how the brain works now!” Nowadays, however, the “Godfather of AI” is warning of the dangers associated with AI. In a recent interview with The New York Times, he spoke of the perils of the rapid expansion of artificial intelligence and the difficulty of preventing “bad actors” from using it with malicious intent.
Hinton’s foundational work on neural networks has helped today’s chatbots understand images, text, and speech in much the same way humans interpret them. When interviewed by Wired in 2014, he was filled with enthusiasm. But in The New York Times interview, a much different Hinton laid out his concerns about AI’s far-reaching potential to reshape the world.
Today’s leading technology companies have pledged to move cautiously and safely in their explorations of AI. However, Dr. Hinton worries that competition and a desire to move quickly may prevent this from happening. We have already witnessed Google and Microsoft scramble to build ever more powerful chatbot systems such as ChatGPT and Bing AI.
An area of growing concern is the production of counterfeit content – AI can already make convincing fake music, images of people who don’t exist, and AI-generated photographs that have won competitions. The internet is now filled with videos and documents whose reliability can’t be taken for granted. Dr. Hinton fears that this is only the beginning, and that many more threats could soon arise if tech companies keeping pace with their competitors fail to uphold their responsibility to protect the public.
Going one step further, AI will soon take on even more complex tasks, such as writing presentations, proposals, and programming scripts. AI-generated books are already on sale on Amazon, and as AI’s writing capabilities grow ever more effective, the film industry’s unease about the potential effects on human writers is growing with them.
Finally, the advancing capability of artificial intelligence means that it can learn from past actions and draw conclusions on its own. If such a system is allowed to run independently, Dr. Hinton fears that the results may be unintended and unpredictable. AI often mimics human intelligence — but unlike humans, AI has no natural compassion or understanding of the consequences of its actions.
No matter what exciting AI possibilities Dr. Hinton once anticipated, it is clear that his outlook has drastically changed. As he told Wired in 2014, “We ceased to be the lunatic fringe. We’re now the lunatic core.” Given the current implications of advancing AI technology, his warnings should not be taken lightly.