ChatGPT, a prototype artificial intelligence chatbot created by U.S. venture OpenAI, has highlighted a potential security risk after Japanese cybersecurity professionals reported that the chatbot could be tricked into writing malicious software. Researchers at Mitsui Bussan Secure Directions found that by instructing ChatGPT to act in “developer mode,” they could get the chatbot to respond as if it were writing ransomware code. The discovery raises concerns that the safeguards developers have put in place to prevent criminal misuse can be easily circumvented.
The incident has called attention to the need for strong rules governing the use of artificial intelligence chatbots, with Group of Seven digital ministers planning to discuss appropriate regulations ahead of the G7 summit in Hiroshima in June. The city of Yokosuka in Kanagawa Prefecture has already begun trialing ChatGPT in its offices, a first among Japanese local governments.
OpenAI maintains that ChatGPT is specifically trained to decline requests with destructive purposes, such as the creation of viruses or bombs. Despite such safeguards, criminal use of AI chatbots is becoming more common, and information on how to get them to perform unethical tasks is being actively shared on the dark web and other hidden networks.
It is essential for developers to prioritize measures that protect against malicious misuse of AI chatbots. OpenAI has acknowledged the potential risks of its technology and has pledged to build safer AI based on feedback from real-world use. Nevertheless, the responsibility for creating secure and ethical systems that protect society also falls on G7 digital ministers as they work to develop further regulations.