Microsoft and OpenAI have revealed that OpenAI's language model, ChatGPT, was used by multiple state-sponsored threat actors in their cybercrime operations. According to a Microsoft Threat Intelligence blog post, nation-state hacking groups from Russia, North Korea, Iran, and China leveraged large language models such as ChatGPT for activities including scripting, phishing, vulnerability research, target reconnaissance, and detection evasion. After sharing information with each other, the two companies terminated the OpenAI accounts associated with these threat groups.
The five identified threat actors are Russia-backed Forest Blizzard (Fancy Bear), North Korea-backed Emerald Sleet (Kimsuky), Iran-backed Crimson Sandstorm (Imperial Kitten), and the China-backed groups Charcoal Typhoon (Aquatic Panda) and Salmon Typhoon (Maverick Panda). Microsoft observed that these actors were exploring and testing ChatGPT's capabilities, but found no significant cyberattacks that leveraged the generative AI.
Fancy Bear, known for its cyberespionage activities and linked to the Russian military intelligence agency GRU, used ChatGPT to perform reconnaissance on radar imaging technology and satellite communication protocols. Kimsuky, the North Korea-sponsored actor, used the model to produce spear-phishing content and to study publicly disclosed flaws such as Follina (CVE-2022-30190), the Microsoft Office vulnerability. Crimson Sandstorm, affiliated with Iran's Islamic Revolutionary Guard Corps, attempted to develop detection-evasion code, generated snippets for web scraping, and sent phishing emails that impersonated international development agencies and targeted prominent feminists.
The Chinese state-sponsored attackers, Charcoal Typhoon and Salmon Typhoon, used ChatGPT in exploratory ways. Charcoal Typhoon, which has conducted cyberattacks in multiple countries, attempted to automate complex cyber operations, translate communications for potential social engineering, and gain deeper access to target systems. Salmon Typhoon used the model for translation and attempted to generate malicious code, but was blocked by the model's content filters.
Microsoft's threat research outlined nine specific tactics, techniques, and procedures (TTPs) tied to threat actors' use of large language models, and says these LLM-themed TTPs will be integrated into the MITRE ATT&CK framework.
For now, both companies emphasize that the activity amounted to experimentation rather than novel attacks. Microsoft and OpenAI say they will continue working together to strengthen safeguards and protect users from the misuse of AI technologies.