2024 Election Faces New Threat: Artificial Intelligence Used to Spread Misinformation and Manipulate Voters
Misinformation and disinformation have long plagued the world of politics, but with the rise of new technology, the lines between fact and fiction are becoming increasingly blurred. As the 2024 election approaches, the use of artificial intelligence (AI) is becoming a significant concern, with experts dubbing it a political super-weapon. AI is already making its presence felt in the political arena, well in advance of the election.
One of the most alarming aspects of AI is its accessibility. In the past, using AI applications required substantial financial resources; now they are cheap and available to almost anyone. Mark Grzegorzewski, a security studies professor at Embry-Riddle Aeronautical University, warns that this widespread accessibility is a serious concern.
The Republican National Committee has already demonstrated the capabilities of AI. Earlier this year, they released the first-ever political ad based entirely on AI-generated imagery. The ad portrayed a dystopian America if President Joe Biden were to be re-elected, using AI-generated scenes of explosions in Taiwan, an influx of immigrants at the border, and a crime and fentanyl crisis in San Francisco.
AI's role has expanded beyond static imagery. Campaigns are now using AI to clone not only images but also voices in political ads. For example, Never Back Down, a super PAC backing Ron DeSantis, used an AI version of Donald Trump's voice in a television ad released in July. AI creators have even built text-to-voice generators that power a 24/7 debate between an AI Joe Biden and an AI Donald Trump on the video streaming platform Twitch, closely mimicking the voices and mannerisms of the current and former presidents.
The growing presence of AI in the political arena has raised concerns among voters. Grzegorzewski emphasizes that people need to be extra skeptical during this election cycle. It is crucial for individuals to dig deeper, spend more time, and fact-check the issues they come across on social media. Relying on reliable sources and exercising critical thinking is more important than ever.
Fortunately, there are ways to spot AI-generated content if you know what to look for. In AI-generated videos, for instance, skin complexion may appear blotchy and blinking may look awkward. But individual vigilance alone is not enough to counter the potential dangers of AI in politics.
Google and platforms like YouTube have taken the initiative to address these concerns. In September, Google announced that it would soon require political ads using AI to include a prominent disclosure. While this is a positive step forward, Grzegorzewski cautions that it may not be enough. He advocates for penalties, such as financial sanctions or removal from platforms, for companies that fail to include the mandatory disclosure.
The Federal Election Commission is also deliberating its power to regulate artificial intelligence in the political arena. A decision on whether the FEC will take action is expected next month.
As the 2024 election looms, AI-driven misinformation and manipulation remain a serious threat. The technology presents both opportunities and risks: it can enhance campaign strategies and engage voters in innovative ways, but it also poses unprecedented challenges of disinformation. Navigating this new landscape will require voters to stay vigilant and fact-check what they see, and tech companies and policymakers to demand and enforce transparency from political advertisers. Preserving trust in the political system, and in each other, may depend on it.