Microsoft has raised concerns about the potential misuse of Artificial Intelligence (AI) by China to interfere in elections across the globe, including in countries like India, South Korea, and the United States. Clint Watts, General Manager of the Microsoft Threat Analysis Center (MTAC), warned that China might create and spread AI-generated content to serve its own interests during major elections this year.
The timing of the upcoming elections heightens these concerns about foreign interference through AI technology. With India’s general elections set to take place in seven phases between April and June, South Korea’s elections scheduled for April, and the US presidential election in November, the risk of external manipulation through fake social media accounts and divisive content is a real one.
According to a recent MTAC report, China’s strategy centers on leveraging fake accounts on social media platforms to influence public opinion and divide voters. The report also covers North Korea, which is pursuing cryptocurrency heists and supply chain attacks to enhance its cyber capabilities and advance its military objectives.
Although such activities are unlikely to significantly affect election outcomes, China’s experimentation with AI-generated content is expected to continue and may become more effective over time. The report also suggested that Chinese and North Korean cyber actors are likely to target elections in India, South Korea, and the US by engaging directly with the population and shaping perceptions of political events.
Furthermore, China has been increasing its use of AI-generated content globally to advance its strategic goals. Chinese cyber actors have been known to conduct reconnaissance on US political institutions, while influence actors may interact with Americans to shape opinions on US politics. North Korea, on the other hand, is predicted to engage in sophisticated cryptocurrency heists and supply chain attacks, specifically targeting the defense sector.
The report emphasized the use of AI-generated content by CCP-affiliated actors during the Taiwanese presidential election in January 2024, marking the first known instance of a nation-state actor employing such tactics to influence a foreign election. It highlighted the activities of a group tracked as Storm-1376, also known as Spamouflage or Dragonbridge, which posted AI-generated fake audio on YouTube in an attempt to manipulate public opinion during the election.
In conclusion, the threat of AI misuse by foreign actors to influence elections is a significant concern for countries like India, South Korea, and the US. As the technology continues to evolve, vigilance and proactive measures are essential to safeguarding the integrity of democratic processes around the world.