Paul Christiano was once a key researcher at OpenAI, a company dedicated to researching artificial intelligence and the risks that come with it. On the Bankless podcast, Christiano voiced his concerns about AI and the catastrophe it could bring: he believes there is a 10-20% chance of an AI takeover, with many, if not most, humans dead as a result.
Christiano currently leads the Alignment Research Center, a non-profit aimed at ensuring that AI and machine learning systems serve human interests. His biggest worry, he asserted, is the point at which AIs match humans in logical and creative capacity. He believes there is a 50/50 chance of catastrophe shortly after that point is reached.
It’s not only Christiano who has serious concerns about an AI takeover: many scientists around the world have signed an open letter urging OpenAI and other companies to hit pause on the development of faster, smarter AIs. Notable figures from Bill Gates to Elon Musk have voiced apprehension that unmonitored AI could pose a real, existential danger.
Anyone can become evil given the right life experiences and training. In much the same way, an artificial intelligence is fed mounds of data without necessarily knowing what to do with it, and it learns by trial and error: adjusting its behavior until it produces the “correct” results defined by its training, as the sketch below illustrates. Machine learning has enabled AIs to make huge leaps in responding to human queries, and the computer processing power that supports ML keeps increasing. Some scientists postulate that the combination of more capable AI and more processing power could produce human-like consciousness within the next decade if development continues unchecked.
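To make the trial-and-error idea concrete, here is a minimal sketch of a training loop. The toy data, the single-parameter “model,” and the learning rate are all made up for illustration; real systems tune millions or billions of parameters, but the principle of nudging parameters toward the “correct” answers is the same.

```python
# A minimal sketch of the trial-and-error learning described above.
# All numbers here are illustrative, not how production systems are built.

# Labeled examples: the model must discover that outputs are 3x the inputs.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0), (4.0, 12.0)]

weight = 0.0          # the model's single adjustable parameter, starting "blank"
learning_rate = 0.01  # how far to nudge the parameter after each mistake

for step in range(1000):
    for x, correct in data:
        prediction = weight * x              # the model's current guess
        error = prediction - correct         # how far off the "correct" result it is
        weight -= learning_rate * error * x  # nudge the parameter to shrink the error

print(f"learned weight: {weight:.3f}")  # converges toward 3.0
```

The model is never told the rule; it simply keeps adjusting itself until its outputs match the training data, which is the sense in which an AI’s behavior is shaped by what it is trained on.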
That is why many researchers believe that guardrails need to be established now, so that the behavior of AI can be monitored and controlled. Even one of OpenAI’s own founders has stated that if the coin lands on the wrong side, the result will be disastrous.
This conversation has been going on for many years. Eliezer Yudkowsky, an AI researcher, and the economist Robin Hanson famously debated the possibility of “Foom,” which stands for “Fast Onset of Overwhelming Mastery”: a hypothetical stage at which an AI becomes capable of improving itself and rapidly grows far more intelligent than humans. The sketch below shows why such self-improvement is argued to compound so quickly.
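As a toy illustration only, with purely hypothetical numbers: if each cycle of self-improvement boosts a system’s capability by a fraction of its current level, growth is exponential, which is the core intuition behind the Foom scenario.

```python
# A toy illustration of compounding self-improvement (hypothetical numbers,
# not a prediction): each cycle, the system boosts itself by a fixed fraction.

capability = 1.0        # arbitrary starting level
improvement_rate = 0.1  # assumed 10% gain per self-improvement cycle

for generation in range(50):
    capability *= 1 + improvement_rate

print(f"after 50 self-improvement cycles: {capability:.1f}x the original level")
# prints roughly 117.4x -- modest per-cycle gains compound into a huge jump
```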
Computer scientist Perry Metzger has questioned whether Foom is even possible, arguing that even when computer systems reach human-level intelligence, there would still be enough time to prevent any unpleasant outcomes. Yann LeCun also chimed in, calling an AI takeover “utterly impossible.”
OpenAI is a leading artificial intelligence research laboratory staffed by some of the world’s most renowned scientists and engineers. Founded in late 2015 by Elon Musk, Sam Altman, Greg Brockman, and others, its mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. OpenAI focuses on research in deep learning, reinforcement learning, unsupervised learning, and generative models, and also explores safety, policy, and related areas, such as its recent research on improving computer vision and object recognition. OpenAI further holds workshops, seminars, and hackathons to promote the use of AI around the world, and is committed to pushing the frontiers of AI forward while ensuring that the technology is developed in a safe, secure, and responsible manner.