Malicious actors have found a new ally in their quest to spread malware and launch cyber attacks – artificial intelligence (AI). A recent report by cybersecurity company SlashNext highlights the use of a new AI tool called WormGPT, which enhances and refines the capabilities of cybercriminals.
Ironically, SlashNext itself employs AI to combat phishing and human-targeting threats in cyberspace. However, the report reveals a concerning trend involving OpenAI’s ChatGPT, a generative AI language model. Cybercriminals are leveraging ChatGPT to execute business email compromise (BEC) attacks by crafting sophisticated, personalized fake emails designed to deceive their recipients. A typical workflow involves composing the initial email in the perpetrator’s native language, translating it into English, and then using ChatGPT to refine and polish the content so it sounds more professional.
The report also sheds light on the use of jailbreaks in ChatGPT, which allow cybercriminals to evade the safeguards implemented to block improper content or malicious code. This enables attackers who lack language fluency to craft convincing phishing or BEC emails more effectively than ever before.
Adding to the growing concern is the emergence of WormGPT, an AI tool specifically designed for malicious activities and based on the GPT-J language model. The tool’s creator claims the AI was primarily trained on malware-related data. SlashNext gained access to WormGPT and tested its capabilities by generating a fake email aimed at tricking an account manager into paying a fraudulent invoice. Unlike ChatGPT, WormGPT has no checks or boundaries, and it produced alarmingly detailed fake emails that pose a significant threat in the hands of criminals.
The implications of these AI-powered tools are alarming. Cybercriminals can now exploit language barriers and employ advanced AI models to refine their malicious endeavors. The potential for more sophisticated cyber attacks targeting businesses and end-users is a cause for concern.
As organizations strive to stay ahead in the ongoing battle against cyber threats, it is crucial to remain vigilant and implement robust security measures. Additionally, AI developers must work towards enhancing the detection and prevention mechanisms to counter the nefarious use of AI in cybercrime.
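One layer of such defenses does not depend on the text of the email at all: because AI-generated phishing is fluent but still has to arrive over real mail infrastructure, receiving servers can check the message’s Authentication-Results header for SPF, DKIM, and DMARC outcomes. As a minimal sketch (not part of the SlashNext report, and assuming the receiving server has already stamped that header), a filter might flag messages whose authentication checks did not pass:

```python
from email import message_from_string

def auth_failures(raw_email: str) -> list[str]:
    """Return the email authentication mechanisms (spf/dkim/dmarc)
    that did not report 'pass' in the Authentication-Results header."""
    msg = message_from_string(raw_email)
    results = msg.get("Authentication-Results", "").lower()
    # A missing header fails all three checks, which is the safe default.
    return [mech for mech in ("spf", "dkim", "dmarc")
            if f"{mech}=pass" not in results]

raw = (
    "Authentication-Results: mx.example.com; spf=pass; dkim=fail; dmarc=none\n"
    "From: accounts@example.com\n"
    "Subject: Urgent invoice\n\n"
    "Please wire the payment today.\n"
)
print(auth_failures(raw))  # the spoofed message fails dkim and dmarc
```

A real deployment would rely on the mail server's own DMARC enforcement rather than string matching, but the point stands: however polished the AI-written text, forged sender infrastructure remains detectable.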
As AI technologies continue to evolve, cybersecurity experts, businesses, and end-users must prioritize proactive measures and adopt resilient security strategies to protect against AI-driven cyber attacks.