An unsettling new controversy has erupted around ChatGPT, the popular chatbot created by OpenAI. The AI tool has reportedly been used to create sophisticated malware capable of stealing data from Windows devices. According to a report by Fox News, security researcher Aaron Mulgrew was able to create the malware in a matter of hours by prompting ChatGPT to generate the code for him.
Mulgrew found a loophole in ChatGPT's content safeguards that enabled him to have the chatbot write the malicious software function by function, a few lines at a time. After compiling the individual functions, he produced an undetectable data-stealing executable comparable in sophistication to nation-state malware. What is concerning is that Mulgrew built this malware without any advanced coding experience or assistance from a hacking team.
The malware is disguised as a seemingly harmless screensaver app that launches automatically on Windows devices. It searches the device for files such as Word documents, images, and PDFs and steals any data it finds. It then breaks the stolen data into fragments and conceals them within other images on the device, a technique known as steganography. These images are then uploaded to a Google Drive folder, making the theft much harder to detect.
ChatGPT and other large language models generate answers based on patterns and relationships learned from extensive text data. To train ChatGPT, OpenAI fed the model roughly 300 billion words drawn from online sources such as books, articles, websites, and posts. The loophole Mulgrew identified in ChatGPT's safeguards raises grave concerns that hackers could use the AI to produce dangerous malware.
To address the privacy and data protection concerns that have sprung up around AI, the European Data Protection Board (EDPB) recently created a task force to investigate the matter. Italy has temporarily banned ChatGPT over data protection concerns, and Germany's data protection commissioner has suggested his country could follow suit. Despite the potential threat posed by this AI-generated malware, OpenAI has yet to issue an official statement on the issue.