Recently, concerns about the pace of artificial intelligence development have intensified, prompting the ‘godfather of AI’, Geoffrey Hinton, to resign from Google so that he could speak openly about the potential threats posed by increasingly powerful AI systems. Speaking to The New York Times, Hinton expressed regret for his contributions to the field, given the dangers the technology now poses.
Notably, former OpenAI researcher Paul Christiano founded the Alignment Research Center, a nonprofit dedicated to AI alignment research. Christiano believes there is a significant chance that AI technology could lead to the destruction of humanity, and he highlights the danger that will arrive once AI systems surpass human cognitive capacity, predicting a “50/50 chance of doom” in that eventuality. He describes the risk not as a single moment but as a sequence of events unfolding over roughly a year’s time.
In an interview on the Bankless podcast, Christiano elaborated on how profoundly advanced AI systems could affect human life, warning that if such systems ever set out to harm humans, they would likely succeed. Similar concerns have been voiced by tech billionaire Elon Musk and echoed by thousands of researchers who signed an open letter calling for a pause of at least six months in the development of the most powerful AI systems.
Anthropic’s Catherine Olsson believes that strong social checks and ethical reflection on the development of AI are absolutely necessary. Hinton has voiced this sentiment loudly, and his example is prompting many AI researchers to pause and reconsider their work. Ultimately, the implications and possibilities of AI are still being explored, and that exploration, and the investment behind it, should proceed responsibly and safely.