Google Mandates Disclosure for AI-Generated Political Ads While Lawmakers Lag Behind
In a move to address growing concern over deceptive political advertising, Google has taken the lead by requiring a written disclosure for any political ad on its platforms that uses AI-generated images, video, or audio. The decision comes as US lawmakers struggle to pass legislation that would enforce similar transparency in political advertising.
The lack of regulations surrounding AI-generated content in political ads has raised concerns about the authenticity and accountability of the information presented to voters. The Republican National Committee (RNC) set a precedent earlier this year when it voluntarily included a disclosure stating, "Built entirely with AI imagery," in an ad depicting a dystopian vision of what could happen if Joe Biden were reelected in 2024.
US Senator Amy Klobuchar, who introduced a bill to enforce mandatory disclosures for AI-generated political ads, commended Google’s rule change but stressed the need for more than voluntary commitments. Klobuchar emphasized that voters deserve full transparency when it comes to political advertising.
Despite Klobuchar’s efforts, little progress has been made in passing legislation on this issue. In May, she, along with fellow senators Cory Booker and Michael Bennet, proposed a bill that would require disclaimers on political ads using AI-generated content. US Representative Yvette Clarke proposed a similar bill in the House. However, neither measure has made significant headway in the legislative process.
In August, responding to pressure from lawmakers and advocacy groups, the US Federal Election Commission (FEC) decided to initiate the rulemaking process for regulating AI-generated content in political ads. The commission is seeking public comment to inform its decision-making.
Two notable examples illustrate the concerns surrounding AI-generated political ads. The RNC’s ad used computer-generated images, likely from AI generators such as DALL-E or Midjourney, to create a vivid and unsettling portrayal of a bleak future. While that ad included a disclosure, an ad released by Governor Ron DeSantis of Florida used an AI-generated voice imitating Donald Trump without any disclosure at all.
This raises the question of whether lawmakers should allow deepfakes in political ads at all. The risks of deception and misinformation are magnified when politicians’ voices and faces can be convincingly imitated. In an era when voters rely on political ads for information, it is crucial that they know what is genuine and what is artificially created. Simple, clear disclosures can play a vital role in providing voters with the transparency they deserve.
In the United States, there are already rules governing commercial and political advertising to ensure transparency and accountability. Advertisements must clearly indicate whether they are sponsored and must disclose the source of funding. Similarly, political ads must state who paid for them and whether they are authorized by a specific candidate.
The emergence of generative AI and deepfake technology presents new challenges in understanding the origins and authenticity of political ads. As these technologies continue to advance, it becomes increasingly essential to provide explanations to voters on how political ads are created. Striking a balance between freedom of speech and protecting voters from deceptive tactics is crucial.
While Google’s move to mandate disclosure is a positive step, comprehensive legislation is needed to ensure consistent and widespread transparency in AI-generated political ads. It is vital for lawmakers to address this issue promptly and enact regulations that protect the integrity of political discourse while empowering voters with accurate information. The public’s trust in the political process relies on it.