Russian, North Korean, Iranian, and Chinese state-backed threat actors have been using generative AI to support their cyber attacks, according to a recent report from Microsoft and OpenAI. The report highlights how these prominent groups are drawing on AI technologies to inform, enhance, and refine their attack methods.
The research, conducted by Microsoft in collaboration with OpenAI, is part of an effort to ensure the safe and responsible use of AI technologies such as ChatGPT and to mitigate their potential for misuse. The report identifies several adversaries believed to be state-backed groups and sheds light on how they are incorporating AI tools into their tactics, techniques, and procedures (TTPs).
One threat actor named in the report is Forest Blizzard, also known as Strontium, a group with strong links to a specific unit of Russia's military intelligence agency, the GRU. This highly effective group targets sectors including defense, transportation/logistics, government, energy, NGOs, and information technology. Forest Blizzard uses generative AI for reconnaissance and scripting, applying large language model (LLM)-informed techniques to gain insights into satellite communication protocols and radar imaging tools.
Another state-backed group mentioned in the report is Emerald Sleet, also known as Thallium, which operates on behalf of North Korea. The group has been conducting AI-enhanced spear-phishing attacks, specifically targeting prominent North Korea specialists to gather intelligence. Microsoft's analysts found significant overlap between Emerald Sleet's activities and those tracked by other researchers under the names Kimsuky and Velvet Chollima.
The report also highlights the activities of Crimson Sandstorm, an Iranian threat actor believed to be affiliated with the Islamic Revolutionary Guard Corps (IRGC). Active since at least 2017, the group primarily targets the defense, maritime shipping, transportation, healthcare, and technology sectors. Microsoft observed the group using LLMs to craft phishing emails and refine scripts, as well as to research ways of evading detection, such as disabling antivirus software and deleting files.
Two Chinese state-affiliated groups using AI technologies to target different regions were also mentioned in the report. Charcoal Typhoon, also known as Chromium, focuses on sectors such as government, higher education, communications, infrastructure, oil & gas, and information technology, with a particular emphasis on organizations in Taiwan, Thailand, Mongolia, Malaysia, France, and Nepal. Salmon Typhoon, also known as Sodium, has a history of attacks against the US defense sector and the cryptographic technology sector, but has more recently been exploring the efficacy of LLMs for research purposes.
The report released by Microsoft and OpenAI aligns with recent warnings from the Five Eyes intelligence alliance about state-backed groups using living-off-the-land techniques to maintain access to critical infrastructure systems. Additionally, the US National Security Agency (NSA), the FBI, and the Cybersecurity and Infrastructure Security Agency (CISA) have recently detailed the methods used by the Chinese threat actor Volt Typhoon to compromise the networks of critical national infrastructure organizations.
The cyber threat landscape continues to evolve as state-backed threat actors leverage generative AI to enhance their attacks. The report's findings underscore the need for stronger cybersecurity measures to counter these emerging threats. By proactively monitoring and adapting to these evolving tactics, organizations and governments can better protect themselves and their critical infrastructure from sophisticated cyber attacks.