The U.S. Department of Justice recently announced that it had disrupted a Russian propaganda campaign that used fake social media accounts powered by artificial intelligence (AI). The campaign aimed to spread disinformation in the United States and other countries.
The bot farm used AI to create profiles impersonating Americans on X, formerly known as Twitter, to promote Russia's war in Ukraine and other pro-Kremlin narratives. The operation was reportedly approved and funded by the Kremlin and overseen by a Russian intelligence officer. An editor at RT, the Russian state-owned media outlet, allegedly organized the bot farm and the AI software behind it.
As AI technology advances, concerns have grown about its potential to generate propaganda and disinformation at scale. Notably, Meta, the parent company of Facebook, and OpenAI have identified foreign influence campaigns, some linked to Russia, that used AI to manipulate public opinion.
FBI Director Christopher Wray said Russia intended to use the bot farm to spread AI-generated foreign disinformation, leveraging AI to undermine support for Ukraine and shape geopolitical narratives in the Russian government's favor.
RT, known for promoting the Russian government's agenda internationally, is said to have been seeking alternative distribution channels after its reach diminished following Russia's 2022 invasion of Ukraine.
The DOJ disclosed that nearly a thousand fake profiles on X were part of the Russian campaign, including one purporting to belong to a Minneapolis resident that shared videos of Russian President Vladimir Putin justifying Russia's actions in Ukraine. X suspended the accounts for violating its terms of service. The extent of engagement with the fake accounts remains unclear, and X did not comment on the matter.
The DOJ also seized two domain names associated with the bot farm, which were used to create email accounts for registering the fake X profiles.