Developers Unleash DarkBART and DarkBERT, Advanced AI Hacking Tools That Threaten Cybersecurity
A cybersecurity warning has been issued after the revelation that developers are poised to introduce even more powerful and sophisticated malicious chatbots. Dubbed DarkBART and DarkBERT, these AI-backed tools are set to equip threat actors with capabilities far exceeding current cybercriminal offerings.
The forthcoming bots will be armed with ChatGPT-like AI capabilities, potentially lowering the barrier to entry for cybercriminals seeking to mount sophisticated cyber attacks. These AI tools are expected to enable threat actors to launch persuasive business email compromise (BEC) phishing campaigns, exploit zero-day vulnerabilities, target critical infrastructure weaknesses, distribute malware, and engage in other illicit activities.
Researchers were alerted to the development of these new AI-based malicious chatbots by an ethical hacker investigating WormGPT, an earlier AI-based hacking tool. The threat actor behind that tool, known as CanadianKingpin12 on underground forums, claims to have even more advanced chatbots in the works.
In terms of functionality, DarkBART is billed as a dark version of Google's Bard AI, built on a large language model (LLM) known as DarkBERT. Interestingly, DarkBERT was originally developed by a South Korean data-intelligence firm called S2W for the purpose of fighting cybercrime. Access to it is currently restricted to academic researchers, which makes any unauthorized access to it significant.
According to researcher Daniel Kelley, CanadianKingpin12 claims to have gained access to DarkBERT and to have trained his version of the AI on a vast text corpus drawn from the Dark Web. The developer even claims that his new bot can be integrated with Google Lens, allowing text to be transmitted alongside images. This integration is noteworthy, since existing malicious chatbot offerings have been limited to text-only interactions.
The second adversarial tool, also named DarkBERT but unrelated to the Korean AI, takes things even further. This tool is reportedly trained on the entirety of the Dark Web, giving threat actors access to the collective knowledge of the hacker underground. Like DarkBART, it also boasts Google Lens integration.
As adversarial AI tools rapidly progress, experts predict that their developers will offer application programming interface (API) access to the chatbots. This will allow cybercriminals to seamlessly integrate them into their workflows and code, thus reducing the barriers to entry for engaging in cybercrime.
These advancements raise significant concerns, as the use cases for this technology become increasingly intricate. To combat the threats posed by AI-driven cybercrime, organizations are advised to take a proactive approach. Alongside standard phishing-awareness training, employees should receive specific training on BEC attacks and the role AI plays in them. Additionally, email verification measures should be strengthened with strict approval processes and keyword-flagging mechanisms to counter AI-driven threats.
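The keyword-flagging measure mentioned above can be illustrated with a minimal sketch. This is a hypothetical example, not a vetted ruleset: the keyword list, the `flag_bec_indicators` function, and the scoring threshold are all assumptions chosen for illustration, and a production filter would combine such signals with sender verification (e.g. SPF/DKIM/DMARC checks) rather than rely on keywords alone.

```python
# Illustrative sketch: flag inbound email that contains multiple phrases
# commonly associated with BEC-style pressure tactics. Keyword list and
# threshold are assumptions for demonstration purposes only.
import re

BEC_KEYWORDS = [
    "wire transfer",
    "urgent payment",
    "gift cards",
    "change of bank details",
    "confidential request",
]

def flag_bec_indicators(subject: str, body: str, threshold: int = 2) -> dict:
    """Count keyword hits across subject and body; flag when hits reach the threshold."""
    text = f"{subject}\n{body}".lower()
    hits = [kw for kw in BEC_KEYWORDS if re.search(re.escape(kw), text)]
    return {"hits": hits, "flagged": len(hits) >= threshold}
```

For example, a message with the subject "Urgent payment needed" and a body asking for a wire transfer would match two keywords and be flagged for review, while an ordinary message would pass through unflagged.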
As cybersecurity strategies evolve to counter emerging threats, a proactive, well-informed posture becomes essential to combat AI-driven cybercrime effectively. The rapid development of malicious AI tools reinforces the critical need for organizations to stay ahead of cybercriminals by prioritizing effective defense measures and investing in ongoing training and technology to safeguard their digital infrastructure.