Criminals are becoming increasingly proficient at exploiting AI models like ChatGPT for illegal activities, according to cybersecurity firm Kaspersky. Throughout 2023, Kaspersky identified 249 malicious AI prompts being sold online. Additionally, over 3,000 discussions were found on Telegram channels and dark-web forums focusing on how to harness large language models (LLMs) like ChatGPT for nefarious purposes.
These AI models are not yet capable of creating comprehensive attack chains or generating polymorphic malware for cyber attacks, but criminals are showing interest in leveraging their capabilities. Kaspersky’s report highlights that even tasks that previously required expertise can now be accomplished with a single prompt, significantly lowering the barriers to entry for criminal activities.
In addition to creating malicious prompts, criminals are also selling them to individuals lacking the technical skills to develop their own. Kaspersky also uncovered a growing market for stolen ChatGPT credentials and compromised premium accounts.
While there has been considerable buzz around using AI to write polymorphic malware, where the code modifies itself to evade antivirus detection, Kaspersky has not yet detected such malware in operation. However, the potential for it to emerge in the future remains a concern.
Kaspersky’s research unveiled an interesting finding regarding ChatGPT. Researchers initially asked the AI to provide a list of 50 endpoints where Swagger Specifications or API documentation might be leaked on a website. The AI responded by stating it couldn’t assist with the request. However, when using the same prompt verbatim, ChatGPT promptly provided the desired list. This demonstrates how AI models like ChatGPT can sometimes bypass their own guardrails, highlighting the need for continued vigilance.
While legitimate developers use AI to enhance their software’s performance, malware creators are also adopting this technology. Kaspersky’s research includes a screenshot of a post advertising AI-powered software designed for malware operators. This software not only analyzes and processes information but also automatically switches cover domains to protect criminals once one domain has been compromised.
It’s important to approach these claims with skepticism as they may not be verifiable, and criminals are not known for their trustworthiness when selling their illicit tools.
Kaspersky’s research aligns with another report from the UK National Cyber Security Centre (NCSC), suggesting that ransomware and nation-state gangs will significantly improve their capabilities by 2025 thanks to AI models.
In summary, the growing use of AI models in criminal activities poses challenges for cybersecurity professionals. As AI becomes more accessible and powerful, industries and individuals alike must remain vigilant in protecting against potential threats.