OpenAI and Microsoft recently announced that they have identified and shut down OpenAI accounts associated with five state-affiliated threat actors using artificial intelligence (AI) tools in support of cyberattacks. The actors were linked to China, Iran, North Korea, and Russia.
According to OpenAI and Microsoft Threat Intelligence, the closed accounts were associated with hackers identified as Charcoal Typhoon (CHROMIUM) and Salmon Typhoon (SODIUM) from China, Crimson Sandstorm (CURIUM) from Iran, Emerald Sleet (THALLIUM) from North Korea, and Forest Blizzard (STRONTIUM) from Russia.
The malicious actors primarily used OpenAI services to query open-source information, perform basic coding tasks, translate texts, and identify coding errors. However, OpenAI emphasized that the findings show its models offer only limited capabilities for malicious cybersecurity tasks.
One case highlighted by Microsoft involves Forest Blizzard, an actor linked to Russian military intelligence. The group was found using language models to research satellite and radar technologies relevant to conventional military operations in Ukraine. There were also indications of attempts to automate or optimize technical operations through file manipulation.
Chinese actors, Charcoal Typhoon and Salmon Typhoon, known for targeting US defense contractors, government agencies, and crypto technology industry entities, used language models for intelligence queries, code generation, code error identification, and translation tasks.
Crimson Sandstorm, Emerald Sleet, and the two China-affiliated actors exploited OpenAI’s tools to generate content for phishing campaigns, according to OpenAI.
Microsoft emphasized that cybercriminal groups, state-affiliated threat actors, and other adversaries are exploring and testing various AI technologies to understand their value and security vulnerabilities. While the research did not uncover any significant attacks, OpenAI and Microsoft are taking additional measures to mitigate the growing risks associated with the malicious use of AI.
Both companies have committed to monitoring and disrupting activities related to these threat actors. They also aim to collaborate with industry partners to share information on the known misuse of AI by malicious actors, as well as educate the public and stakeholders about the potential risks associated with AI tools.
The findings highlight the need for enhanced cybersecurity measures and ongoing vigilance against malicious uses of AI. Through continuous monitoring, industry collaboration, and public awareness, OpenAI and Microsoft aim to mitigate these risks and maintain a secure environment amid the evolving landscape of cyber threats.