OpenAI has disrupted an attempt by Iranian actors to use ChatGPT to manipulate the upcoming US election, identifying and deactivating several ChatGPT accounts linked to the effort. The campaign, attributed to the covert Iranian operation Storm-2035, used ChatGPT to generate long-form articles and social media comments for platforms such as Instagram.
The content generated by these accounts covered a wide range of topics, including the US presidential election, the Israel-Hamas conflict, Venezuelan politics, and issues affecting Latinx communities in the US. OpenAI’s investigation found that the campaign gained little traction, with most of its social media posts receiving minimal engagement.
The campaign also produced content for sham news outlets posing as both conservative and progressive, an effort to reach audiences across the political spectrum. Its narratives included claims that Donald Trump was planning to declare himself king of the US and that Kamala Harris’s choice of Tim Walz as her running mate was a calculated move for unity. These efforts have raised concerns about attempts to sway the outcome of the upcoming US presidential election.
Previous cybersecurity incidents in which Iranian hackers targeted the campaigns of former President Trump and Vice President Harris further underscore the need for vigilance. OpenAI assessed the threat posed by the influence campaign using the Brookings Institution’s Breakout Scale and rated it a Category 2 operation: although it generated content across multiple platforms, there was little evidence of genuine audience engagement.
Overall, OpenAI’s intervention is a significant step toward safeguarding the integrity of the US election process and preventing foreign interference. It is also a reminder of the ongoing challenge posed by malicious actors seeking to manipulate public opinion through online platforms.