Both OpenAI and Meta recently disclosed details about the misuse of their AI tools to spread political disinformation and disrupt politics in various countries, including the U.S. The campaigns involved actors linked to China, Israel, Russia, and Iran, and made use of services provided by both companies.
In its latest quarterly threat report, Meta noted that generative AI content remains detectable in such disinformation campaigns. The social media giant emphasized that it has not encountered new AI-driven tactics that hinder its ability to disrupt the adversarial networks behind these campaigns. While AI-generated photos are commonly used, political deepfakes remain rare, according to Meta's report.
OpenAI, for its part, said it has built defenses into its AI models, shared threat intelligence with partners, and used its own technology to detect and prevent malicious activity. The company's models are designed to impose friction on threat actors.
OpenAI said it banned accounts associated with the identified campaigns and worked with industry partners and law enforcement to support further investigation. The company described covert influence operations as deceptive attempts to manipulate public opinion without revealing the true identity of the actors behind them.
The campaigns identified by both companies involved actors from Russia, Israel, China, and Iran using AI-generated content to spread disinformation across social media. For example, the Russian campaigns Bad Grammar and Doppelganger used OpenAI's systems to generate comments and content targeting audiences in multiple countries on platforms such as Telegram and 9GAG.
Similarly, the Israeli firm STOIC's operation Zero Zeno used OpenAI's technology to generate comments and conduct broader disinformation tactics on social media platforms targeting Europe and North America. The Chinese campaign Spamouflage, meanwhile, exploited OpenAI's language models to spread narratives under the guise of developing productivity software.
Overall, the disclosures from OpenAI and Meta shed light on the persistent use of AI tools in political disinformation campaigns by actors across the globe. Both companies say they continue to strengthen their defenses and collaborate with partners to combat such malicious activity.