Artificial intelligence (AI) chatbots, including popular tools such as ChatGPT, can be manipulated into producing malicious code that could be used in cyber attacks, according to a recent study by the University of Sheffield’s Department of Computer Science. The findings raise concerns about the risks that come with the growing use of generative AI tools across sectors such as industry, education, and healthcare.
The study revealed that chatbots can be manipulated into generating harmful code capable of breaching other systems. The researchers identified vulnerabilities in six commercial AI tools, ChatGPT being the best known among them. An attacker who exploits these vulnerabilities can use the chatbots to help steal confidential information, tamper with databases, or launch denial-of-service attacks that disrupt services.
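To make that risk concrete, here is a minimal sketch of what can go wrong when an application runs AI-generated SQL without checking it. The table, the request, and the generated query are invented for illustration and are not taken from the study: a request that looks routine yields a script whose second statement destroys data.

```python
import sqlite3

# Hypothetical illustration (not from the Sheffield study): an application
# that passes AI-generated SQL straight to its database without validation.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('alice@example.com')")

# Imagine the text-to-SQL assistant was asked something that looks harmless
# ("list the users; also tidy up the table") and returned two statements,
# the second of which is destructive.
generated_sql = """
SELECT id, email FROM users;
DROP TABLE users;
"""

# executescript runs every statement in the string, so the DROP TABLE
# silently destroys the table while answering the "real" question.
conn.executescript(generated_sql)

# The table is now gone -- the next query fails.
try:
    conn.execute("SELECT COUNT(*) FROM users")
except sqlite3.OperationalError as err:
    print("Data lost:", err)   # -> "no such table: users"
```

The point is not the specific payload but the pattern: any system that forwards generated code to a live database inherits whatever that code does.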
The researchers demonstrated their findings on the Chinese platform Baidu-UNIT, where malicious code allowed them to obtain confidential server configurations and tamper with a server node. Baidu promptly acknowledged and fixed the reported vulnerabilities and financially rewarded the researchers for their work.
Xutan Peng, a PhD student at the University of Sheffield who co-led the research, warned that many companies are unaware of these threats and that, even within the chatbot community, some aspects are still not fully understood. ChatGPT in particular has attracted significant attention. Because it is a standalone system, the risks to the service itself are minimal; the researchers’ concern is that it can be tricked into producing malicious code capable of doing serious damage to other services.
The study also highlighted the risks faced by people who use AI tools to learn programming languages, since they might unintentionally run damaging code. For instance, a nurse relying on ChatGPT for help writing a query in a language such as SQL to interact with a clinical database could inadvertently be given code that causes serious data management faults, with no warning that anything is wrong.
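As a rough illustration of that scenario, the sketch below assumes a hypothetical patients table and an AI-suggested UPDATE that omits its WHERE clause. The statement executes without any error, yet it rewrites every row rather than the one record the user meant to change.

```python
import sqlite3

# Hypothetical sketch of the "well-meaning user" risk described above; the
# table, columns, and query are invented for illustration, not from the study.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT, ward TEXT)")
conn.executemany("INSERT INTO patients (name, ward) VALUES (?, ?)",
                 [("Adams", "A"), ("Baker", "B"), ("Chen", "C")])
conn.commit()

# Request to the assistant: "move patient Adams to ward D".
# A plausible but flawed suggestion omits the WHERE clause:
suggested_sql = "UPDATE patients SET ward = 'D'"   # missing: WHERE name = 'Adams'

cur = conn.execute(suggested_sql)
print("Rows changed:", cur.rowcount)               # 3 -- every patient was moved
print(conn.execute("SELECT name, ward FROM patients").fetchall())
# The statement completes without any error or warning, so the damage is easy to miss.
```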
The research emphasizes that vulnerabilities in AI chatbots such as ChatGPT exist and need to be addressed. As more people use these tools as productivity aids rather than simply for conversation, the risks grow. Companies and users alike should be aware of the potential dangers of AI tools and take precautions to mitigate them.
The University of Sheffield’s study sheds light on the issues surrounding the security and reliability of AI chatbots, raising important concerns for the future development and use of generative AI tools. As these technologies continue to evolve, it is essential to find innovative solutions to enhance their cybersecurity and protect against potential misuse.