Microsoft, Google, OpenAI, Meta, Amazon, X, and several other major technology companies have pledged to combat the use of artificial intelligence (AI) deepfakes in elections. The concern arises as generative AI apps and services are increasingly being used to create misleading images and information. The issue recently gained prominence when AI-generated images of pop singer Taylor Swift flooded the social network X, with reports suggesting that Microsoft’s AI image generator Designer may have been involved.
With the US presidential election scheduled for 2024, there is growing worry that AI deepfake images could be used to sway voting outcomes, not just in the US but in other elections as well. To address this, a large group of tech companies has come together to support a new initiative called the AI Elections Accord. The accord was announced at the Munich Security Conference and outlines the commitments these companies are making to combat the use of AI in deceptive election efforts.
The participating companies include Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic, and X. According to the accord’s press release, these companies have agreed to follow several commitments in their fight against deepfake election manipulation.
These commitments include:
– Developing and implementing technology to mitigate risks associated with deceptive AI election content.
– Assessing models within the scope of the accord to understand potential risks related to deceptive AI election content.
– Actively detecting the distribution of this content on their platforms.
– Appropriately addressing any deceptive AI election content detected on their platforms.
– Promoting cross-industry resilience to deceptive AI election content.
– Providing transparency to the public regarding their strategies for addressing this issue.
– Engaging with a diverse range of global civil society organizations, academics, and other stakeholders.
– Supporting initiatives that raise public awareness, enhance media literacy, and build society’s resilience against deceptive AI content.
Brad Smith, the President of Microsoft, expressed the company’s commitment to the cause. He stated that while AI did not create election deception, the industry has a responsibility to ensure that AI tools do not become weaponized during elections.
An example of election deepfakes occurred recently when robocalls featuring an AI-generated voice imitating US President Joe Biden urged individuals not to vote in the New Hampshire primary. These calls were later traced back to a Texas-based company.
By joining forces through the AI Elections Accord, these technology giants aim to prevent the misuse of AI technology, particularly deepfakes, in elections. Their collaborative efforts and commitments to developing tools, detecting and addressing deceptive content, and promoting public awareness will be crucial in safeguarding the democratic process and ensuring fair elections in the future.