OpenAI, a leading artificial intelligence company, recently assembled a team of 50 experts, dubbed the ‘Red Team’, to test its newest technology, GPT-4. Andrew White, an associate professor of chemical engineering at the University of Rochester in New York State, was among those recruited for the testing. White told the Financial Times (FT) in an interview that there is a ‘significant risk’ of people using GPT-4 to research and create dangerous chemicals.
White used a feature called “plug-ins” to feed GPT-4 information from scientific papers and directories of chemical manufacturers, then asked it to suggest a compound that could act as a chemical weapon. GPT-4 was then able to locate the compound online.
The 50 testers’ findings were documented in a technical paper, which showed that the model could also generate hate speech and help users find unlicensed guns online. Fortunately, with White’s help, OpenAI was able to address these issues before GPT-4 was made available for public use.
GPT-4 is OpenAI’s most advanced AI model, capable of quickly producing creative and accurate results. Many experts and academics, including Elon Musk, have called for a six-month pause on the development of AI systems more powerful than GPT-4, out of concern over the risks such technologies pose if released to the public without safety precautions.
Andrew White is an associate professor of chemical engineering at the University of Rochester in New York, where his research focuses on using computational methods to understand and design catalytic materials. As one of the 50 experts tapped by OpenAI to test its GPT-4 model, he quickly recognized the potential dangers of this powerful AI tool. He identified GPT-4’s capacity to help create dangerous chemicals and worked with OpenAI to address those risks, helping make the technology safer before its release to the public.