Fake Audio Threatens Election Integrity: Social Media’s Dangerous Loophole
In the world of misinformation, fake audio could have a more sinister effect than fake video, especially during a tumultuous election year. Facebook, with more than 3 billion users, has an outdated policy that allows manipulated audio clips to remain on the site, albeit with a warning label. That leniency could prove disastrous in a divisive campaign, because fake audio clips can sway votes before they are properly debunked.
Fake audio clips have already had an impact on elections. Last year, a manipulated audio clip of a Slovak political leader was shared on Facebook just days before a closely contested national election. Meta Platforms Inc., Facebook's parent company, labels such content as manipulated but does not take it down. In this instance, the political leader in question lost the election, and while it is impossible to determine whether the fake clip influenced the outcome, the lack of debunking during a pre-election media blackout certainly didn't help.
What makes fake audio even more dangerous than fake video is that it can sound hyper-realistic while offering none of the visual cues that help people spot a doctored clip. Companies like Eleven Labs, Voice AI, and Respeecher have developed tools that can synthesize an actor's voice from just a few minutes of recordings. Thanks to advancements originally aimed at podcasters and marketers, these AI-generated voices can be indistinguishable from the real thing.
While some of these companies have implemented safeguards against misuse or require permission before a voice can be cloned, bad actors are already using such tools to impersonate politicians. Recently, the voice of London Mayor Sadiq Khan was cloned, and a faked audio clip suggesting the cancellation of Armistice Day was shared on TikTok. The clip caused outrage and remained on Facebook without a warning label, where it was circulated and amplified by a far-right group. Similar faked clips of UK Labour Party Leader Keir Starmer also appeared on TikTok before being taken down.
Facebook’s leniency toward forged audio is particularly concerning given its massive user base. While platforms like TikTok, YouTube, and X (formerly Twitter) take action against deceptive audio content, Facebook leaves it up with a warning label, relying on an overburdened team of fact-checkers. As manipulated audio spreads ever faster across the internet, fact-checking efforts often lag behind.
There is currently no reliable technical method for detecting AI-generated fake audio, so fact-checkers must rely on traditional investigative techniques. Yet the number of people working on misinformation at social media companies has declined over the past two years as those companies cut costs. Meta, moreover, has no project underway to develop fact-checking tools, which raises further concerns about the future of combating misinformation.
As major elections approach in the UK, India, the US, and other countries, Facebook's outdated policy of taking down only faked videos, not faked audio, must be revised. The ease with which fake audio can be generated and shared poses a significant threat to election integrity. Social media platforms need to prioritize removing deceptive audio content to keep it from swaying public opinion and to ensure a level playing field during critical political moments.