Concerns are growing over the threat deepfake audio messages pose to elections around the world. The recent circulation of a doctored audio message purporting to be from U.S. President Joe Biden has alarmed disinformation experts: in the message, a voice altered to sound like Biden urged New Hampshire voters not to cast ballots in the state’s Democratic primary, falsely suggesting their votes should be saved for the November election. Deepfake audio is especially worrying because it is easy to edit, cheap to produce, and hard to trace. Combined with a voter registration database, such fake messages could give bad actors a powerful weapon that current election systems are ill-equipped to handle.
Robert Weissman, president of the consumer advocacy group Public Citizen, has called on lawmakers to implement protections against fake audio and video recordings to prevent electoral chaos. Deepfake audio is emerging just as a growing number of U.S. political campaigns adopt AI software to reach constituents at scale. The Biden incident also coincided with the announcement of new funding for voice-cloning startup ElevenLabs, which has been valued at $1.1 billion.
This is not the first doctored political recording, but it underscores growing concern over the use of deepfakes to manipulate public opinion. Last year, ahead of Slovakia’s parliamentary elections, audio deepfakes spread on social media, including one that appeared to capture party leader Michal Simecka discussing a plan to buy votes. While political use of video and audio deepfakes has so far been limited, the potential for future misuse is cause for alarm.
The increasing accessibility of deepfake technology raises questions about the security and integrity of electoral processes. As campaigns and investors continue to embrace AI software and voice cloning, safeguards and regulations are needed to verify the authenticity of audio and video recordings. Deepfake audio messages, widely disseminated during an election campaign, could have far-reaching consequences for democratic processes and public trust in elections.
Guarding against this threat will require governments, tech companies, and researchers to collaborate on effective detection tools and strategies. That means sustained investment in AI research and cybersecurity, along with public education about the existence and risks of deepfake technology. Strong legal frameworks and penalties are also needed to deter and punish those who use deepfake audio maliciously in political contexts.
Deepfake technology adds a new layer of complexity to the already difficult task of ensuring the integrity of elections. As the technology advances, proactive measures must be taken to protect the democratic process from manipulation and disinformation. The time to act is now, before deepfake audio undermines public trust in elections and threatens the foundations of democratic societies.