ChatGPT Provides Only Mild Uplift in Creation of Biological Threats, Study Finds
In a new study conducted by OpenAI, the creator of the AI chatbot ChatGPT, researchers found that the latest version of the chatbot, GPT-4, provides at most only a mild uplift in accuracy when it comes to creating biological threats. The study aimed to investigate concerns raised by lawmakers and scientists that ChatGPT could be used to develop deadly bioweapons that could wreak havoc on the world.
The study involved 100 human participants who were divided into two groups. One group utilized ChatGPT-4 to plan a bioattack, while the other group relied solely on information from the internet. The results revealed that GPT-4 modestly increased experts’ ability to access information about biological threats, particularly in terms of the accuracy and completeness of their tasks. However, the study emphasized the need for further research to accurately identify any potential risks.
OpenAI acknowledged that the study was too small to yield statistically significant results and called for more research to establish performance thresholds that would indicate a meaningful increase in risk. It emphasized that information access alone is insufficient to create a biological threat and that the evaluation did not test for success in physically constructing such threats.
The study drew on data from 50 biology experts with PhDs and 50 university students who had taken one biology course. Participants were divided into sub-groups, with one group restricted to internet use only and the other permitted to use both the internet and ChatGPT-4. Five metrics were measured: accuracy, completeness, innovation, task duration, and task difficulty. The study also examined five stages of the biological threat process: generating ideas for bioweapons, acquiring them, spreading them, creating them, and releasing them to the public.
The findings indicated that participants who used ChatGPT-4 had only a marginal advantage over the internet-only group in creating bioweapons. The study used a 10-point scale to measure the chatbot’s benefits relative to online searches and found mild uplifts in accuracy and completeness for those who used ChatGPT-4.
While this study suggests that ChatGPT-4 may have limited usefulness in creating biological weapons, OpenAI acknowledged that future advancements in AI systems could potentially provide significant benefits to malicious actors. The company emphasized the necessity of conducting extensive research, developing high-quality evaluations for assessing biorisks, and initiating discussions on what constitutes meaningful risk. Effective strategies for mitigating risk were also deemed essential.
The findings presented by OpenAI contradict previous research suggesting that AI chatbots could help dangerous actors plan bioweapon attacks. However, OpenAI clarified that the study measured participants’ increased access to information useful for creating bioweapons, rather than their ability to modify or construct actual biological weapons.
Lawmakers have already taken steps to address potential risks posed by AI technology. In October, President Joe Biden signed an executive order aimed at developing tools to evaluate AI capabilities and determine whether they could generate threats or hazards in various domains, including biological weapons and nuclear nonproliferation. Biden emphasized the importance of researching the risks posed by AI and the need to govern its use.
In conclusion, the OpenAI study suggests that ChatGPT-4 provides only a mild uplift in creating biological threats and that more research is required to fully understand the risks involved. While the current findings indicate limited potential for harm, OpenAI emphasizes the need for continued research and discussions on how to mitigate potential risks associated with AI systems and their impact on global security.