Tech Giants Urged to Regulate AI-Generated Political Ads Amid Concerns of Election Misinformation
Leading tech companies are facing pressure to regulate AI-generated political advertisements on their platforms ahead of the 2024 elections in the United States. Google has already announced that it will require labels on AI-generated political ads that could deceptively depict a candidate’s voice or actions. Democratic lawmakers are now calling on other major social media platforms, including Meta (the parent company of Facebook and Instagram) and X (formerly Twitter), to explain why they are not adopting similar rules.
In a letter addressed to Meta CEO Mark Zuckerberg and X CEO Linda Yaccarino, U.S. Senator Amy Klobuchar and U.S. Representative Yvette Clarke expressed serious concerns about the potential harms of AI-generated political ads and urged the companies to outline the rules they are developing to safeguard free and fair elections. The lawmakers emphasized the need for transparency on these platforms, as voters often turn to social media to learn about candidates and important issues.
Both lawmakers are actively advocating for regulation of AI-generated political ads. Clarke introduced a House bill that would amend federal election law to require clear labels on election advertisements containing AI-generated images or video, stressing the importance of ensuring that the American people know when content is fabricated. Klobuchar echoed that sentiment and argued that the major platforms should take the lead in setting rules, especially since congressional action is stalled while the House lacks an elected speaker.
While Google has moved to address AI-generated political ads by requiring disclaimers on its platforms, Meta and X have not yet responded to the lawmakers’ requests for comment. Meta does have a policy restricting the use of faked, manipulated, or transformed audio and imagery for misinformation purposes, but it has no specific rule on AI-generated political ads. Separately, a bipartisan Senate bill co-sponsored by Klobuchar and Republican Senator Josh Hawley would ban materially deceptive deepfakes relating to federal candidates, with exceptions for parody and satire.
AI-generated political ads have already appeared in the 2024 election cycle. The Republican National Committee aired an ad that used fake but realistic imagery to depict a dystopian future under President Joe Biden. Such ads, along with other misleading examples, would likely be prohibited under the proposed Senate bill. The push for new rules is contested, however: some argue that even false speech enjoys First Amendment protection and that voters, not regulators, should judge the truth or falsity of political messages.
In August, the Federal Election Commission took a procedural step toward potentially regulating AI-generated deepfakes in political ads by allowing public comments on a petition brought forth by the advocacy group Public Citizen. The comment period for this petition ends on October 16.
As the 2024 elections approach, the regulation of AI-generated political ads remains a pressing concern. Lawmakers are demanding transparency and accountability from major social media platforms to curb the spread of election-related misinformation and disinformation. With Google having acted first, pressure is mounting on Meta and X to adopt rules that protect the integrity of the democratic process and ensure voters have accurate information.