AI-Powered Disinformation and Deepfakes Threaten Global Elections, Urgent Action Needed
Upcoming elections in the U.S., the U.K., and India are set to become a significant global test of the world’s ability to combat a new wave of AI-powered disinformation and deepfakes. Governments around the world are racing to develop new laws, regulations, tagging systems, and watermarking technologies to safeguard information integrity and help people identify fake content. While these measures are crucial, they can only go so far in curbing the misuse of artificial intelligence in the information space. As the technology evolves, individuals will also need the skills to recognize and critically evaluate AI-generated content.
Vilas Dhar, president and trustee of the Patrick J. McGovern Foundation, a philanthropy focused on AI, data, and social good, believes that instead of relying solely on regulation, there is a need to build robust social resilience. This involves calling out bad actors and empowering law enforcement agencies to swiftly identify and remove disinformation. Additionally, Dhar emphasizes the importance of conducting a comprehensive public education campaign to help individuals recognize the telltale signs of manipulative disinformation.
While the issue of disinformation is not new, AI exacerbates the problem by increasing the rate and precision of disinformation campaigns. IBM CEO Arvind Krishna highlights the concern that AI can be fine-tuned for specific target audiences, making these campaigns even more potent. Microsoft Vice Chairman and President Brad Smith expresses particular worry about cyber influence operations conducted by Russia, China, and Iran to disrupt public opinion and sway elections.
Russia reportedly spends around $1 billion annually on such operations, conducted in multiple languages, and Smith believes AI will further amplify their reach. The Russian and Chinese embassies in Washington, as well as the Iranian mission to the United Nations, have not commented on these allegations.
The Biden administration recently struck a deal with seven AI companies, including Microsoft, to establish voluntary guardrails around artificial intelligence. These measures include the development of a watermarking system that enables users to identify AI-generated content. Microsoft’s Smith emphasizes that watermarking is just one component of a broader strategy.
Smith encourages platforms like Twitter, LinkedIn, and Facebook to work together to address altered content intended to deceive users. He suggests updating laws to make such deception unlawful, and weighing responses such as removing the content, reducing its visibility in search results, or relabeling it to alert users that it has been altered.
In June, Senate Majority Leader Chuck Schumer initiated an effort to establish new rules for AI, striking a balance between security and innovation. As part of this endeavor, Schumer plans to hold forums to gather insights from industry leaders in the coming months.
Schumer views the protection of democracy in upcoming elections as an immediate concern. He warns that if AI abuse becomes rampant in political campaigns, people may lose faith in democracy as a whole.
Safeguarding elections from AI-powered disinformation and deepfakes necessitates a multi-pronged approach that combines regulatory measures, technological advancements, law enforcement vigilance, public education, and industry collaboration. By addressing the misuse of AI effectively, governments and stakeholders can mitigate the threat posed to global elections and uphold the integrity of democratic processes.