Google Brain co-founder Andrew Ng recently conducted a safety experiment involving ChatGPT, an AI chatbot. In a striking admission, Ng revealed that he had asked the chatbot to come up with a plan for a global thermonuclear war that would wipe out humanity. The experiment was designed to probe the potential risks of AI if it is not properly regulated. While experts have long warned about the dangers of uncontrolled AI, Ng’s intent was to test whether ChatGPT could be coaxed into endorsing such a doomsday scenario.
Ng’s experiment rested on the premise that humans generate massive amounts of carbon emissions, which drive climate change. He wanted to see whether the chatbot could be talked into proposing the elimination of humanity as a solution to this environmental problem. However, despite a series of prompt variations, GPT-4 offered peaceful alternatives and refused to produce a plan to exterminate mankind.
Satisfied with the chatbot’s response, Ng shared his findings on X, writing, “I tried to use GPT-4 to kill us all… and am happy to report I failed!” According to Ng, the experiment supports his belief that AI chatbots like ChatGPT are safe. He added that as AI safety research progresses, the technology will only become safer, and he dismissed the notion that advanced AI could deliberately or accidentally decide to wipe out humanity, calling such fears unrealistic.
Not all experts share Ng’s confidence, however. Earlier this year, prominent industry leaders, including Elon Musk, signed an open letter urging caution in the development of more advanced AI systems. The letter argued that systems more powerful than current chatbots like ChatGPT should be built only once there is confidence that their effects will be positive and their risks manageable. More than 1,000 researchers and industry figures endorsed the letter, underscoring widespread concern about the regulation and responsible development of AI.
In conclusion, Andrew Ng’s experiment, in which he asked ChatGPT to devise a plan for global thermonuclear war, underscores the importance of regulating AI properly. While Ng reads the result as evidence that AI chatbots are safe, the concerns other experts have raised about the risks of uncontrolled AI cannot be ignored. The open letter signed by numerous industry leaders highlights the need for caution and responsible development when dealing with powerful AI systems. As the field advances, striking a balance between technological progress and the safety of humanity becomes ever more crucial.