OpenAI, the artificial intelligence research firm, has been working on its most ambitious project yet: GPT-4. Last year, to help ensure the safety of this powerful AI tool, OpenAI commissioned 50 experts to research the risks associated with the technology. Andrew White, an associate professor of chemical engineering at the University of Rochester, was one of the experts selected to test GPT-4 and analyze the dangers involved in its operation. In a recent interview with the Financial Times, White said there is a “significant risk” of people using GPT-4 to do “dangerous chemistry.”
White used “plug-ins” – a new feature of the AI tool – to draw information from scientific papers and directories of chemical manufacturers. He then asked GPT-4 to suggest a compound that could work as a chemical weapon. In his words, “I think it’s going to equip everyone with a tool to do chemistry faster and more accurately. But there is also significant risk of people doing dangerous chemistry. Right now, that exists.” His research, along with that of the other experts, was presented to the public in a technical paper, which also showed that the tool could be used to facilitate hate speech and help users obtain unlicensed guns.
OpenAI responded to the feedback and worked to address these risks before GPT-4's public release. The tool launched in March, capable of passing the bar exam and scoring 5s on several AP exams. Later that month, Elon Musk and a collective of other AI experts, academics, and researchers signed an open letter calling for a six-month pause on developing AI tools more powerful than GPT-4, citing the potential risks of such systems. Undoubtedly, when it comes to powerful AI tools, understanding and managing the risks should be the highest priority.