The use of chatbots and Artificial Intelligence (AI) is nothing new in the cybersecurity world. However, a recent report by the Egress Threat Intelligence team suggests that cybercriminals are now using AI to craft malicious phishing emails. The report highlights that chatbots, in particular, can be used to create well-written, grammatically correct, and humanised emails that are difficult to identify as fraudulent.
Earlier this year, 72% of cybersecurity leaders admitted to being concerned that AI could be misused to enhance phishing campaigns. One such AI tool in focus is ChatGPT. The Egress Threat Intelligence team tested the tool by asking it to write an email requesting personal details. The resulting email was well written, with good grammar, a salutation, and a sign-off, and its content was humanised with phrases such as “I know it’s been a while”. The outcome was an authentic-looking email that could fool recipients into believing it was written by a human.
The same tool was also tested by asking it to create a promotional offer inviting people to win two prizes, complete with a link to an entry website. The Egress team warns that such a tool can easily be used to generate an entire phishing campaign.
While chatbots like ChatGPT are programmed to discourage the use of their outputs for illegal activities, cybercriminals can still use them to generate phishing emails and campaigns. AI-generated content has made it difficult to differentiate between human-written and machine-written phishing attacks.
Anti-phishing detection technologies, such as Integrated Cloud Email Security (ICES), have been evolving to detect and prevent these attacks. AI models based on Natural Language Processing (NLP) and Natural Language Understanding (NLU) can be used to analyse email messages for signs of social engineering, such as urgency, requests for credentials, or too-good-to-be-true offers. Similarly, machine learning-based anomaly detection can help identify phishing attacks in which small changes are made from one message to the next to bypass traditional, signature-based email security solutions.
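To make the idea concrete, here is a minimal sketch of signal-based scoring of email text. It is an illustration only, not Egress’s technology or any production ICES model: the signal names, patterns, and scoring are invented for this example, and real NLP/NLU systems model semantics and context rather than literal strings.

```python
# Minimal sketch: scoring email text for common social-engineering signals.
# Illustration only -- real ICES products use far richer NLP/NLU models.
# The signal names, patterns, and scoring below are invented for this example.
import re

SIGNALS = {
    "urgency":     r"\b(urgent|immediately|act now|expires? (today|soon))\b",
    "credentials": r"\b(password|verify your account|login details)\b",
    "reward":      r"\b(winner|prize|free gift|claim your)\b",
    "pressure":    r"\b(suspended|locked|final notice|last chance)\b",
}

def social_engineering_score(text: str) -> tuple[float, list[str]]:
    """Return a crude 0-1 risk score and the list of signals that fired."""
    hits = [name for name, pattern in SIGNALS.items()
            if re.search(pattern, text, re.IGNORECASE)]
    return len(hits) / len(SIGNALS), hits

email = ("I know it's been a while! Act now to claim your prize: "
         "just verify your account details at the link below.")
score, hits = social_engineering_score(email)
print(f"risk={score:.2f} signals={hits}")
# risk=0.75 signals=['urgency', 'credentials', 'reward']
```

Even this toy scorer flags the humanised sample email, because the social-engineering cues (urgency, a prize, a request to verify details) survive no matter how fluent the surrounding prose is.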
However, cyberattacks have evolved, and so have cyber defences; phishing remains a significant risk either way. Cybersecurity leaders need not be any more concerned about AI-written phishing emails than they already are about phishing as an area of risk for their organisation. It is also important to note that not all AI-based cybersecurity solutions are the same. Some AI models can be manipulated by cybercriminals to bypass detection, so evaluating an AI solution and safeguarding it against such attacks is essential.
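As a hedged illustration of how a detection model can be manipulated, the hypothetical snippet below shows a single character substitution defeating a naive literal keyword signal like the ones sketched earlier. The pattern and strings are invented for demonstration; robust detectors normalise text and score meaning rather than exact strings, which is one reason evaluating a solution’s resilience matters.

```python
# Illustration only: a single character swap defeating a naive literal
# keyword signal. The pattern and strings here are hypothetical; hardened
# detectors normalise text and score semantics rather than exact strings.
import re

pattern  = r"\bverify your account\b"           # naive literal signal
original = "Please verify your account now."
evasive  = "Please verify your acc0unt now."    # 'o' replaced with '0'

print(bool(re.search(pattern, original, re.IGNORECASE)))  # True
print(bool(re.search(pattern, evasive,  re.IGNORECASE)))  # False
```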
In conclusion, while there are concerns about the use of AI in crafting phishing emails, the cybersecurity industry is evolving its technologies to detect and prevent these attacks. With the right detection capabilities, organisations can mitigate the risk of phishing attacks, whether they are created by humans or by machines.