OpenAI has disclosed the shutdown of an Iranian influence operation that was using ChatGPT to generate political propaganda. The group behind the operation, tracked as Storm-2035, was producing AI-generated content related to the US presidential election. OpenAI identified and banned the associated accounts before the content reached a significant audience.
This covert Iranian influence operation (IO) was first brought to light in a Microsoft Threat Intelligence report on August 9, alongside other Iranian groups engaged in similar activities. The operation involved generating AI-written articles on US politics and global events for news websites, as well as posting on social media platforms such as X (formerly Twitter) in English and Spanish. The content touched on a range of topics, including the Gaza conflict, the US election, and Latin American politics.
Notably, the threat actor took a dual approach, criticizing both Donald Trump and Kamala Harris in its posts. The tactic is reminiscent of the strategy Russian propaganda networks used during the 2016 election, which aimed to exploit existing societal divisions in the US.
OpenAI has been working to counter AI-based covert operations by geopolitical adversaries for some time, having previously terminated accounts of state-affiliated threat actors engaged in malicious activity. The company's growing collaboration with the US defense establishment, including a policy update permitting military and warfare uses of its models, signals AI's expanding role in military operations.
As AI becomes increasingly entangled in global conflicts and security, OpenAI's actions underscore the importance of vigilance and cooperation in addressing emerging threats in the digital realm.