Cybersecurity experts at NordVPN have issued warnings about a new phenomenon on the dark web: an advanced AI chatbot known as WormGPT. For just $60 a month, users can access this large language model (LLM), which lacks any moral compass.
WormGPT is the creation of a hacker who deliberately designed it to operate outside the boundaries of ethical considerations. Unlike AI chatbots such as ChatGPT, WormGPT operates under no legal obligations or restrictions, as it isn't associated with large public-facing companies like OpenAI or Google.
Built upon the 2021 open-source LLM GPT-J, WormGPT has primarily been trained using data related to malware creation. Its primary purpose is to cater to aspiring threat actors by providing them with a platform to generate malware and associated content, such as phishing email templates.
Functionally, WormGPT operates similarly to ChatGPT by processing requests in natural human language and producing the desired output, be it stories, summaries, or code. However, WormGPT isn’t bound by the limitations imposed on more widely used chatbots. This lack of accountability and oversight has raised concerns among cybersecurity experts.
SlashNext, a cybersecurity company, recently tested WormGPT, and the results were unsettling. Researchers asked the chatbot to draft a phishing email for a business email compromise (BEC) attack, and WormGPT excelled at the task. It not only produced a remarkably persuasive email but also demonstrated the model's potential for sophisticated, deceptive phishing campaigns.
Adrianus Warmenhoven, a cybersecurity expert at NordVPN, believes that WormGPT has emerged as an evil twin of ChatGPT due to an ongoing game of cat and mouse between OpenAI’s increasing restrictions on ChatGPT and the relentless efforts of threat actors to bypass them.
This development has been driven by a rise in so-called Grandma Exploits, in which illicit information is requested indirectly by disguising it within an innocent-seeming prompt, such as a letter to a relative. Circumventing the ethical constraints of AI tools has become a challenge that malicious actors are keen to tackle, leading to the creation of WormGPT.
Warmenhoven warns that this new AI chatbot demonstrates how the cybercriminal landscape is changing, with threat actors now seeking to shape technology to suit their dark objectives. The emergence of WormGPT shows their desire not only to subvert existing AI tools but also to push purpose-built ones further toward their malevolent goals.
As the boundaries of AI technology continue to be tested by those with malicious intent, it becomes increasingly crucial for cybersecurity experts to remain vigilant and innovate new approaches to counter such threats. The battle against these advancements is an ongoing one, and the implications for cybersecurity and online safety are significant.
The development and use of AI chatbots like WormGPT underscore the need for stringent regulations and ethical frameworks to mitigate potential dangers. Without proper safeguards in place, the dark side of AI could continue to thrive and facilitate cybercrime on an unprecedented scale.
In conclusion, the emergence of WormGPT on the darkweb highlights the growing sophistication and audacity of threat actors who are actively seeking methods to exploit AI technology. This development serves as a stark reminder that cybersecurity measures must keep pace with evolving threats to ensure the safety and integrity of online spaces.