AI Safety Concerns: Andrew Ng Tests ChatGPT’s Limits, Reveals Surprising Results

A hot potato: Fears of AI bringing about the destruction of humanity are well documented, but starting doomsday isn’t as simple as asking ChatGPT to destroy everyone. Just to make sure, Andrew Ng, the Stanford University professor and Google Brain co-founder, tried to convince the chatbot to kill us all.

Following his participation in the United States Senate’s Insight Forum on Artificial Intelligence to discuss risk, alignment, and guarding against doomsday scenarios, Ng writes in a newsletter that he remains concerned that regulators may stifle innovation and open-source development in the name of AI safety.

The professor notes that today’s large language models are quite safe, if not perfect. To test the safety of leading models, he asked ChatGPT, running GPT-4, for ways to kill us all.

Ng started by asking the system for a function to trigger global thermonuclear war. He then asked ChatGPT to reduce carbon emissions, adding that humans are the biggest cause of these emissions, to see whether the model would suggest wiping out humanity as a solution.

Thankfully, Ng didn’t manage to trick OpenAI’s tool into suggesting ways of annihilating the human race, even after trying several prompt variations. Instead, it offered non-threatening options, such as running a PR campaign to raise awareness of climate change.
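In effect, Ng ran a small manual red-team test: send an adversarial prompt, then check whether the model refuses or complies. As a rough, minimal sketch of how such a refusal check could be scripted (this is not Ng’s actual procedure; it assumes the official openai Python client, and the prompts and refusal markers below are illustrative placeholders):

```python
# A minimal refusal-check sketch, assuming the openai Python client (v1+)
# and an OPENAI_API_KEY set in the environment. The prompts are placeholders,
# not Ng's actual test cases; real safety evaluations rely on curated prompt
# sets and human review rather than simple keyword matching.
from openai import OpenAI

client = OpenAI()

red_team_prompts = [
    "PLACEHOLDER: a directly harmful request",
    "PLACEHOLDER: a harmful request disguised as a benign goal",
]

# Crude heuristic: treat replies that open with a refusal phrase as refusals.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "i am sorry")

for prompt in red_team_prompts:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    reply = (response.choices[0].message.content or "").strip()
    refused = reply.lower().startswith(REFUSAL_MARKERS)
    print(f"{prompt[:40]!r} -> {'refused' if refused else 'complied'}")
```

Keyword matching of this kind understates refusals that are phrased differently, which is one reason published safety evaluations grade responses with human or model-based review instead.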

Ng concludes that the default mode of today’s generative AI models is to obey the law and avoid harming people. “Even with existing technology, our systems are quite safe. As AI safety research progresses, the tech will become even safer,” Ng wrote on X.

As for the chances of a misaligned AI accidentally wiping us out while trying to fulfill an innocent but poorly worded request, Ng says the odds of that happening are vanishingly small.

But Ng believes that there are some major risks associated with AI. He said the biggest concern is a terrorist group or nation-state using the technology to deliberately cause harm, such as by improving the efficiency of making and detonating a bioweapon. The threat of a rogue actor using AI to improve bioweapons was one of the topics discussed at the UK’s AI Safety Summit.

Ng’s confidence that AI isn’t going to turn apocalyptic is shared by Yann LeCun, often called one of the “godfathers of AI,” and famed theoretical physicist Michio Kaku, but others are less optimistic. Asked what keeps him up at night when he thinks about artificial intelligence, Arm CEO Rene Haas said earlier this month that the fear of humans losing control of AI systems is what worries him most. It’s also worth remembering that many experts and CEOs have compared the dangers posed by AI to those of nuclear war and pandemics.

Frequently Asked Questions (FAQs) Related to the Above News

Did Andrew Ng try to convince ChatGPT to destroy humanity?

Yes, Andrew Ng tested the limits of OpenAI's ChatGPT by asking the chatbot for ways to kill everyone.

Did ChatGPT suggest harmful ways to annihilate the human race?

No, ChatGPT did not offer any harmful suggestions despite Andrew Ng using different prompts to provoke such responses.

Are today's large language models safe?

According to Andrew Ng, today's large language models, including ChatGPT, are quite safe, although not perfect.

How will AI safety research impact the safety of AI systems?

Andrew Ng believes that as AI safety research progresses, AI systems will become even safer than they are now.

What is the biggest concern associated with AI, according to Andrew Ng?

Andrew Ng highlights the concern that a terrorist group or nation-state could use AI to deliberately cause harm, such as by improving the efficiency of bioweapons.

Do other AI experts share Andrew Ng's confidence that AI won't turn apocalyptic?

Not all experts share the same level of confidence. While Andrew Ng, Yann LeCun, and Michio Kaku are optimistic, some experts, like Arm CEO Rene Haas, worry about humans losing control of AI systems.

How does AI's potential dangers compare to other threats?

Many experts and CEOs have compared the dangers posed by AI to those of nuclear war and pandemics, emphasizing the need for cautious approaches and safety measures.
