Microsoft and Meta, Facebook's parent company, are cracking down on deceptive AI political ads ahead of the 2024 elections. Both tech giants have announced new policies aimed at curbing the creation and spread of misleading AI-generated political advertising.
Microsoft’s President Brad Smith and VP of Technology for Fundamental Rights Teresa Hutson outlined the company’s approach to AI in political advertising. They warned that authoritarian nation-states are combining traditional influence operations with AI and other emerging technologies to undermine the integrity of electoral systems. As part of its election protection commitments, Microsoft will provide transparent and authoritative information about elections, enable candidates to verify the origins of their campaign material, and offer recourse when AI is used to distort their likeness or content. Microsoft also plans to help safeguard political campaigns against cyber threats, launching new tools such as Content Credentials as a Service and an Election Communications Hub.
Meta, for its part, is targeting misinformation and deceptive political ads on its platforms. Advertisers running ads about social issues, elections, or politics will now be required to disclose when an image or audio has been digitally created or altered, including through the use of AI. Meta’s policy also requires advertisers to complete an authorization process and include a disclaimer stating who paid for the ad. The disclosure rule covers ads featuring AI-generated images or deepfakes of real people, as well as ads that manipulate realistic images or footage to depict events that never happened.
Combatting deepfakes and misinformation is becoming increasingly challenging as generative AI advances rapidly. Policymakers, corporations, and law enforcement are striving to keep up with the development of new AI tools. Even though some progress has been made in addressing deepfakes, the spread of AI-generated content raises concerns about the trustworthiness of online information.
Regulators are also beginning to respond. The U.S. Federal Election Commission recently opened a petition to regulate AI-generated deepfakes in campaign ads for public comment, and a proposed bipartisan bill, the No Fakes Act, would hold people liable for using AI to replicate a person’s voice or likeness without their permission.
As the 2024 elections draw nearer, Microsoft’s and Meta’s initiatives to tackle deceptive AI political ads are steps in the right direction. By providing transparent, authoritative information and enforcing stricter ad policies, the two companies aim to protect the integrity of electoral systems and support free and fair elections.