OpenAI’s Hired Professor Warns of Significant Risk if GPT-4 Used for Dangerous Chemistry

OpenAI, a leading artificial intelligence company, recently assembled a team of 50 experts, dubbed the ‘Red Team’, to test its newest technology, GPT-4. Andrew White, an associate professor of chemical engineering at the University of Rochester in New York State, was one of the 50 experts recruited for the testing. White told the Financial Times (FT) in an interview that there is a ‘significant risk’ of people using GPT-4 to research and create dangerous chemicals.

White used GPT-4’s “plug-ins” feature, which lets the model draw on outside sources such as scientific papers and directories of chemical manufacturers, and asked it to suggest a compound that could potentially act as a chemical weapon. GPT-4 was then able to find the compound online.

The 50 testers’ findings were documented in a technical paper, which showed that the AI model could also generate hate speech and help users find unlicensed guns online. With White’s help, OpenAI was able to address these issues before GPT-4 was made available for public use.

GPT-4 is OpenAI’s most advanced AI technology, capable of producing creative and accurate results quickly. Many experts and public figures, including Elon Musk, have called for a six-month pause on the development of AI technologies more powerful than GPT-4, out of concern for the risks such sophisticated systems pose if released to the public without safety precautions.

At the University of Rochester, White’s research focuses on using computational methods to understand and design catalytic materials. As one of the 50 experts tapped by OpenAI to test GPT-4, he quickly recognized the potential dangers of this powerful AI tool. After identifying GPT-4’s potential to help create dangerous chemicals, he worked with OpenAI to address those risks, helping make the technology safer before its public release.


