AI-Driven Disinformation Threatens 2024 US Election, Warns Intelligence Report
An intelligence report has warned that the use of Artificial Intelligence (AI) tools by malicious actors poses a significant threat to the upcoming 2024 US presidential election. The report, issued by Dragonfly’s Security Intelligence & Analysis Service (SIAS) on August 22, 2023, highlights the likelihood that Iran- and Russia-linked groups will attempt to influence voter opinion, undermine public confidence in the legitimacy of the vote, and manipulate the outcome of the election.
Declassified US intelligence reports, including the Annual Threat Assessment from February 2023 and reports on the 2020 presidential election, form the basis of this assessment. The accessibility of AI and related technologies, such as video manipulation tools, has greatly lowered the barrier for both state and non-state actors to create and disseminate disinformation. US officials have already expressed concerns about the potential threat posed by these technologies. In fact, the FBI director warned in July that AI could enable threat actors to develop increasingly powerful and sophisticated capabilities, customizable and scalable to suit their nefarious agendas. The report suggests that AI could facilitate the engineering of political scandals and the propagation of conspiracy theories.
Both state and non-state actors are expected to be highly motivated to conduct disinformation campaigns ahead of the 2024 election. In past elections, the US has accused the governments of China, Iran, and Russia, as well as other actors such as Cuba, Venezuela, and Hezbollah, of conducting information and influence operations. Preparations for similar operations targeting the upcoming election are believed to be already underway. Russian groups in particular have previously scanned US electoral systems and deployed bot armies well in advance of past election cycles, according to US intelligence reports.
Advancements in AI, especially in the field of Generative AI (GenAI), are expected to make it easier for malicious actors to create more convincing and plausible disinformation content. Last year, the digital investigations firm Graphika uncovered a pro-China operation that used AI-generated videos of fake news anchors criticizing the US over gun violence and promoting US-China cooperation. The report suggests that advancements in GenAI will enable disinformation actors to:
– Produce AI-generated images and videos that appear genuine.
– Amplify contentious topics and sway public opinion.
– Cause economic, social, and political impacts.
– Undermine public trust in the election’s legitimacy and democratic processes.
The report predicts that efforts to moderate or take down misleading AI-generated content will likely prove ineffective. Social media companies in particular are expected to struggle with this challenge because of an anticipated surge in the volume of such content ahead of the election. A reduction in content moderation standards and layoffs at social media firms may exacerbate the problem. Candidates and campaign officials are also expected to publicly debunk, and distance themselves from, any AI-generated content that may emerge.
Disinformation campaigns have been a major concern in previous US elections, primarily because of foreign interference efforts by actors such as Russia. After the 2016 election, US intelligence agencies confirmed that Russia-backed groups had amplified conspiracy theories and circulated information obtained by hacking and leaking the emails of officials associated with the Democratic candidate Hillary Clinton.
The report also finds that disinformation campaigns make major political scandals around the 2024 election more likely. Hostile actors, including political figures, may use AI-generated content to support claims of election fraud, lending those claims greater plausibility with the public and their respective voter bases. The report identifies this as a particular risk that could lead to security impacts such as political protests and isolated acts of extremism, especially if former President Donald Trump were to run and lose the election. Allegations of voter fraud played a pivotal role in motivating Trump supporters during the storming of the Capitol building in January 2021.
As the 2024 US presidential election approaches, the threat of AI-driven disinformation looms large. The use of AI tools by state and non-state actors, particularly those linked to Iran and Russia, poses a serious risk to the integrity of the election process, public trust, and societal cohesion. While efforts will likely be made to combat this disinformation, the report underscores the challenges that lie ahead, given the ever-advancing capabilities of AI technology. Ensuring the security and fairness of the upcoming election will require vigilant monitoring, robust countermeasures, and a coordinated response from all stakeholders involved.