OpenAI, working with Microsoft, has shut down accounts belonging to five nation-state cyber-crews. The accounts were allegedly used by government-backed operators from China, Iran, Russia, and North Korea for malicious activities such as generating phishing emails, crafting malware scripts, and researching ways to evade malware detection.
OpenAI’s conversational large language models, such as GPT-4, have many legitimate uses: extracting and summarizing information, drafting messages, and writing code. To curb misuse, OpenAI filters out requests for harmful information and malicious code. These nation-state crews, however, appear to have crossed that line by putting the platform to malicious use.
The terminated accounts belong to threat actors known as Charcoal Typhoon and Salmon Typhoon (China), Crimson Sandstorm (Iran), Emerald Sleet (North Korea), and Forest Blizzard (Russia). Microsoft’s Threat Intelligence team has provided an analysis of the malicious activities carried out by these actors.
Charcoal Typhoon and Salmon Typhoon, known for previous attacks on companies in Asia and the US, used GPT-4 to research specific companies and intelligence agencies. They also used the models to translate technical papers related to cybersecurity tools.
Crimson Sandstorm, linked to the Iranian Armed Forces, attempted to use OpenAI’s models to run scripted tasks, evade malware detection, and develop highly targeted phishing attacks.
Emerald Sleet, acting on behalf of the North Korean government, used OpenAI’s models to research defense issues in the Asia-Pacific region and publicly disclosed vulnerabilities. The crew also used them to craft phishing campaigns.
Finally, Forest Blizzard (also known as Fancy Bear), a Russian military intelligence crew, researched open-source satellite and radar imaging technology and looked for ways to automate scripting tasks.
OpenAI has previously said its models offer only limited capabilities for cybersecurity tasks beyond what is already achievable with non-AI tools. Even so, terminating these state-affiliated actors’ accounts marks a significant step in addressing potential misuse of OpenAI’s technology.
The shutdowns underscore OpenAI’s commitment to blocking harmful activity while continuing to offer useful, secure AI-powered services.