2024 US Election Faces AI-Driven Disinformation Storm, Threatening Trust
The upcoming 2024 US election is poised to face a deluge of AI-fueled disinformation that threatens to severely undermine trust in the electoral process. As AI-powered tools grow more sophisticated, political campaigns can create fabricated images, videos, and text that appear startlingly real, blurring the line between fact and fiction.
Both sides of the political aisle are leveraging AI in their campaign strategies. On one hand, AI programs can mimic a political figure's voice with astonishing accuracy and generate convincing videos and text, making it far harder for voters to discern what is true and what is not. If such disinformation becomes rampant, trust in the electoral process itself is at stake.
On the other hand, campaigns also use AI to improve efficiency, for example by analyzing voter databases and drafting fundraising emails. These advances let campaigns streamline their operations and gain valuable insight into voter preferences and trends.
Recent campaigns offer examples of AI-generated disinformation. A video released by Florida Governor Ron DeSantis's presidential campaign appeared to show former President Trump embracing Anthony Fauci; fact-checkers later determined that it used AI-generated images. The Republican Party also shared an AI-generated video depicting a dystopian future should President Biden win, featuring scenes of panic on Wall Street and a military takeover of San Francisco.
According to a poll conducted by Axios and Morning Consult, more than half of Americans expect AI-enabled falsehoods to affect the outcome of the 2024 election, and a significant share of respondents say these concerns leave them with less trust in the results.
The rise of AI in disinformation campaigns threatens the electoral process, particularly in a hyperpolarized political environment. Cheap, easy-to-use AI tools make distinguishing fake material from real increasingly difficult, and that ambiguity can incite public anger, as the assault on the US Capitol on January 6, 2021, demonstrated.
Despite the risks, rapid advances in AI also give campaigns essential tools for understanding voters and tailoring their strategies. AI-driven analysis offers granular insight into voter behavior, transforming how campaigns develop outreach plans and craft messaging.
To address the potential for abuse, several US states, including Minnesota, have passed laws criminalizing deepfakes intended to harm political candidates or influence elections. President Biden recently signed an executive order to promote the secure and trustworthy use of AI, acknowledging the danger posed by AI-generated audio and video deepfakes.
As the 2024 US election approaches, the challenge lies in balancing AI's operational benefits against the spread of disinformation. The battle for truth in the age of AI demands vigilance, regulation, and responsible use of these technologies; only then can trust in the electoral process be sustained and the integrity of democracy preserved.