Microsoft and OpenAI have teamed up to tackle the growing issue of deepfakes and fraudulent AI content ahead of pivotal global elections this year. The collaborators have launched a $2 million fund with the primary objective of safeguarding the integrity of democratic processes worldwide.
With an estimated 2 billion individuals set to participate in elections across 50 countries, concerns over the spread of AI-generated misinformation have reached an alarming level. Particularly vulnerable groups are often the targets of such deceptive tactics.
The emergence of generative AI tools such as ChatGPT has significantly expanded the capabilities for creating deepfakes. These readily available technologies can fabricate convincing videos, photos, and audio clips of political figures, amplifying the risk of misinformation.
Recognizing the imminent threat posed by AI-driven disinformation, tech giants like Microsoft and OpenAI have voluntarily pledged to combat this menace. Collaborative efforts are underway to counter deepfakes intended to deceive voters, with prominent AI organizations building safeguards into their systems. Notable examples include Google restricting its Gemini AI chatbot from answering election-related queries, and Meta, the parent company of Facebook, prohibiting such content.
To help academics identify fake content produced by its DALL-E image generator, OpenAI has rolled out a deepfake detection tool. The company has also joined Adobe, Google, Microsoft, and Intel on the steering group of the Coalition for Content Provenance and Authenticity (C2PA), dedicating itself to combating disinformation.
A newly established societal resilience fund will play a pivotal role in advancing the ethical use of AI. Microsoft and OpenAI will allocate these funds to organizations such as Older Adults Technology Services (OATS), C2PA, International IDEA, and PAI to support AI literacy and education programs, particularly for marginalized communities.
Teresa Hutson, Microsoft Corporate Vice President for Technology and Corporate Responsibility, emphasized the significance of the Societal Resilience Fund in fostering community projects related to AI and underscored the two companies' joint commitment to combating AI-generated disinformation.
As accessible AI technologies continue to proliferate, concerns regarding the surge in politically motivated disinformation on social media intensify. AI could potentially complicate the electoral landscape this year, given the deep-rooted ideological divides and escalating distrust in online content.
While the sophistication of the latest AI technologies is striking, experts note that most deepfakes, especially those originating from influence campaigns in Russia and China, lack credibility. And although generative AI was heavily utilized in recent elections in countries like Pakistan and Indonesia, there is no conclusive evidence that it unfairly favored specific candidates.
Although AI-generated misinformation operations have occasionally had an impact in the US, law enforcement agencies struggle to respond promptly to deepfake threats. Some states, however, have taken proactive measures by enacting election-deepfake laws that bar the dissemination of manipulated media intended to undermine candidates during election periods.
The concerted efforts by Microsoft, OpenAI, and other stakeholders underscore a collective resolve to combat the pervasive threat of AI-generated disinformation, safeguarding democratic processes and fostering transparency in electoral systems globally.