AI Chatbot ‘WormGPT’ Helping Hackers Plan Advanced Cyberattacks
The emergence of an AI chatbot named ‘WormGPT’ is raising concerns about its potential for enabling sophisticated cyberattacks. Cybersecurity firm SlashNext recently identified the chatbot being advertised on dark web cybercrime forums, highlighting its malicious intent. Unlike mainstream chatbots such as ChatGPT and Bard, WormGPT is designed specifically for illegal activities and operates without ethical boundaries or limitations.
WormGPT reportedly offers features such as unlimited character support and chat memory retention, and it will answer queries that mainstream chatbots refuse. It has been trained on a wide range of data sources, with a particular focus on malware-related information. This knowledge allows WormGPT to craft convincing and strategically cunning emails, demonstrating its potential for sophisticated phishing attacks.
Adrianus Warmenhoven, a cybersecurity expert at NordVPN, noted that when ChatGPT was introduced last year, discussions in the dark web revolved around exploiting it for criminal purposes. Hackers were keen on leveraging its humanlike language abilities to create more authentic phishing emails, as well as utilizing its programming capabilities to develop new malware.
NordVPN has also warned about the rise of ‘grandma exploits’, in which criminals extract illegal information indirectly by wrapping requests in innocent framing, such as letters to relatives. Warmenhoven emphasized the potential power of WormGPT in social engineering, particularly against businesses that can offer large paydays for ransomware gangs.
Moreover, experts are concerned that an AI chatbot without safeguards poses a significant threat across a wide range of crimes. Without the protections and content restrictions imposed on ChatGPT, a tool like WormGPT could facilitate cyberattacks at scale and become a production line for spreading fake news.
Warmenhoven said it is essential for international law enforcement authorities to respond swiftly and identify the creators of the malicious chatbot, stating that their intervention is needed to prevent this worm from turning the AI dream into a nightmare.
The apparent risks associated with WormGPT highlight the importance of addressing the potential dangers posed by AI tools in the wrong hands. Alongside the ongoing development of AI technology, efforts to implement safeguards and ethical boundaries become increasingly vital in ensuring that the AI revolution does not become a breeding ground for cybercriminals.