ChatGPT Maker to Dedicate Fifth of Compute Power to Preventing Rogue AI

OpenAI, the creator of ChatGPT, has announced plans to dedicate a fifth of its compute power to guarding against the potential dangers posed by rogue artificial intelligence (AI). The Microsoft-backed start-up aims to tackle the alignment problem: ensuring that the goals of AI systems remain beneficial to humans. In a recent blog post, Ilya Sutskever, OpenAI's co-founder and chief scientist, and Jan Leike, its head of alignment, expressed concern about the immense power of superintelligence and its potential to disempower humanity or even lead to human extinction. They emphasized the need for breakthroughs in alignment research to control superintelligent AI.

Sutskever and Leike predicted that superintelligent AI systems, ones more intelligent than humans, could emerge within this decade. They stressed that better techniques will be needed to control and steer such systems, hence the focus on alignment research. OpenAI plans to devote 20% of its compute power over the next four years to this challenge and will establish a new research group, the Superalignment team, to lead the effort.

The team's primary objective is to build a roughly human-level automated AI alignment researcher, and then to scale that capability using vast amounts of compute. OpenAI intends to train AI systems using human feedback, have them assist humans in evaluating other AI systems, and ultimately have AI systems conduct alignment research themselves.
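The "training with human feedback" step mentioned above is commonly implemented as preference-based reward modeling, in which a model learns to score outputs so that responses humans prefer receive higher scores. The article does not describe OpenAI's implementation; the following is a minimal, hypothetical sketch of the general idea, using toy feature vectors in place of real model representations and a pairwise (Bradley-Terry) loss.

```python
import torch
import torch.nn as nn

# Hypothetical toy example: learn a reward model from pairwise human preferences.
# Each response is represented by a random feature vector; in practice these
# would come from a language model's representation of a (prompt, response) pair.

torch.manual_seed(0)
NUM_PAIRS, DIM = 256, 16

# Simulated data: for each pair, annotators preferred "chosen" over "rejected".
chosen = torch.randn(NUM_PAIRS, DIM) + 0.5   # preferred responses (shifted so they are learnable)
rejected = torch.randn(NUM_PAIRS, DIM)       # dispreferred responses

# A tiny reward model: maps a response's features to a scalar score.
reward_model = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

for step in range(200):
    r_chosen = reward_model(chosen).squeeze(-1)
    r_rejected = reward_model(rejected).squeeze(-1)
    # Bradley-Terry pairwise loss: push chosen scores above rejected scores.
    loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# A trained reward model's scores can then be used (e.g., via reinforcement
# learning) to steer another system toward outputs humans prefer.
accuracy = (reward_model(chosen) > reward_model(rejected)).float().mean()
print(f"final loss {loss.item():.3f}, pairwise accuracy {accuracy.item():.2f}")
```

This sketch illustrates only the first stage of the pipeline the article describes; scaling human oversight with AI assistance and automating alignment research itself would build on top of mechanisms like this one.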

However, some AI safety advocates, such as Connor Leahy, caution that OpenAI's approach is flawed. Leahy argues that the alignment problem should be solved before human-level AI is pursued, because early human-level systems could behave unpredictably and wreak havoc if they are not properly controlled. He warns against relying on a plan that could lead to unintended and unsafe consequences.


Concerns about the risks associated with AI have been a prominent topic among AI researchers and the general public. In April, a group of industry leaders and experts in AI signed an open letter calling for a six-month pause in the development of AI systems more powerful than OpenAI’s GPT-4, citing potential societal risks. Additionally, a Reuters/Ipsos poll in May revealed that over two-thirds of Americans are worried about the potential negative effects of AI, with 61% believing it could pose a threat to civilization.

OpenAI’s commitment to dedicating compute power and resources to the alignment problem signifies the importance of addressing the potential dangers of superintelligent AI. Their efforts aim to ensure that AI remains beneficial and controllable for humanity, but critics argue for a more cautious and proactive approach to mitigate unintended consequences. The future of AI hinges on finding solutions that prioritize the alignment of AI goals with human values.

Frequently Asked Questions (FAQs) Related to the Above News

What is OpenAI's plan regarding compute power and rogue AI?

OpenAI plans to dedicate one-fifth of its compute power to prevent potential dangers posed by rogue artificial intelligence (AI).

What is the alignment problem that OpenAI aims to solve?

The alignment problem refers to ensuring that the goals of AI systems are beneficial to humans and preventing their potential negative impacts.

What concerns did Ilya Sutskever and Jan Leike express about superintelligent AI?

They expressed concerns about the immense power of superintelligence and its potential to disempower humanity or even lead to human extinction.

When do Sutskever and Leike predict that superintelligent AI systems could emerge?

They predict that superintelligent AI systems, which possess greater intelligence than humans, could emerge within this decade.

What steps will OpenAI take to tackle the alignment problem?

OpenAI plans to dedicate 20% of its compute power over the next four years to tackle the alignment problem. They will also establish a new research team, the Superalignment team, to lead this effort.

What is the primary objective of OpenAI's Superalignment team?

The primary objective of the Superalignment team is to build a roughly human-level automated AI alignment researcher, and then to scale that capability using vast amounts of compute power.

What concerns have some AI safety advocates raised about OpenAI's approach?

Some AI safety advocates caution that OpenAI's approach is flawed, as they believe solving the alignment problem should be a priority before pursuing human-level AI. They argue against relying on a plan that may lead to unintended and unsafe consequences.

What are the concerns of the general public regarding AI, according to recent polls?

According to polls, over two-thirds of Americans are worried about the potential negative effects of AI, with 61% believing it could pose a threat to civilization.

Why is OpenAI's commitment to dedicating compute power significant?

OpenAI's commitment signifies the importance of addressing the potential dangers of superintelligent AI. By dedicating compute power and resources, they aim to ensure that AI remains beneficial and controllable for humanity.

What should the future of AI prioritize, according to the article?

The future of AI should prioritize finding solutions that align AI goals with human values, to mitigate potential unintended consequences and ensure its benefits for humanity.


Aniket Patel
Aniket is a skilled writer at ChatGPT Global News, contributing to the ChatGPT News category. With a passion for exploring the diverse applications of ChatGPT, Aniket brings informative and engaging content to our readers. His articles cover a wide range of topics, showcasing the versatility and impact of ChatGPT in various domains.
