An audio deepfake impersonating President Joe Biden has triggered concerns about the role of generative AI in spreading disinformation and its potential impact on elections. The deepfake robocall, which circulated ahead of the New Hampshire primary, urged voters not to participate in Tuesday's primary and instead save their vote for the November election. The call was designed to sound like Biden but was linked to Kathy Sullivan, a former New Hampshire Democratic Party chair who leads a super PAC advocating for writing in Biden's name on the ballot. The origin of the robocall remains unknown, with various parties, including Sullivan, the Biden campaign, and former President Trump, denying involvement.
Lawmakers have expressed concern about the rise of generative AI and its potential to displace jobs and spread disinformation. The deepfake robocall impersonating Biden has further fueled these fears and highlighted the need for accountability and transparency in AI regulation. Experts and advocates have called for immediate action to protect against deepfakes in politics, warning that their use could sow confusion, enable fraud, and lead to electoral chaos.
AI-manipulated content has already caused real-world consequences. In May, for example, fake images suggesting an attack on the Pentagon went viral, leading to a brief dip in the stock market. And after the conflict between Israel and Hamas erupted in October, manipulated images and videos of dead children and destroyed homes circulated on social media, causing outrage. These incidents underscore the urgent need for measures to distinguish authentic content from manipulated content.
Efforts have been made to address these concerns, with some Big Tech companies committing to watermark AI-generated content to differentiate it from organic content. The Biden administration has also issued guidelines for companies developing AI technologies. Senate Majority Leader Chuck Schumer has organized AI Insight Forums to discuss regulation with tech leaders such as Mark Zuckerberg and Elon Musk. So far, however, only a handful of bills have emerged from these discussions, including proposals to ban the use of AI to create deceptive content and to require political ads that use AI-generated material to disclose it.
In conclusion, the deepfake robocall imitating President Biden ahead of the New Hampshire primary has sparked concerns about the role of generative AI in spreading disinformation and its potential impact on elections. Policymakers and tech companies are urged to act swiftly to implement protections against deepfakes, prevent electoral chaos, and maintain trust in democratic processes.