AI-Powered Campaigns Surge, Raising Concerns over Cybersecurity
There has been a significant surge in the use of artificial intelligence (AI) for conducting manipulative information campaigns online, raising concerns over cybersecurity. Mandiant, a US cybersecurity firm owned by Google, has observed an increasing amount of AI-generated content being used in politically motivated online influence campaigns since 2019. This includes fabricated profile pictures and other forms of content created with generative AI models. The report reveals that these campaigns have been orchestrated by groups associated with governments including Russia, China, Iran, Ethiopia, Indonesia, Cuba, Argentina, Mexico, Ecuador, and El Salvador.
Generative AI models, such as ChatGPT, have made it easier to create convincing fake videos, images, text, and computer code, leading to an alarming rise in the use of such technology for spreading disinformation. Cybersecurity experts have expressed concern that cybercriminals could use these models to carry out malicious activities. The affordability and accessibility of generative AI could enable groups with limited resources to produce higher-quality content for large-scale influence campaigns, potentially swaying public opinion.
One specific example mentioned in the report is the pro-China information campaign named Dragonbridge. Since it began in 2019 by targeting pro-democracy protesters in Hong Kong, it has grown exponentially, spreading across 30 social platforms and into 10 different languages. Despite its widespread reach, however, the campaign has not achieved significant results. Sandra Joyce, Vice President of Mandiant Intelligence, said that "from an effectiveness standpoint, not a lot of wins there." She added that these campaigns have not yet substantially changed the threat landscape.
While AI has become increasingly prevalent in online influence campaigns, Mandiant has not observed it playing a significant role in digital intrusions orchestrated by Russia, Iran, China, or North Korea, and its use in such intrusions is expected to remain limited in the near future. However, Joyce emphasized that the problem will likely intensify over time.
Looking ahead, it is evident that AI-powered campaigns pose serious challenges to cybersecurity. Governments and organizations must remain vigilant and develop strategies to counter the impact of AI-generated disinformation. Efforts should not focus solely on AI detection and regulation; innovative approaches that promote media literacy and critical thinking among the public are equally important. Only through a comprehensive approach can we effectively mitigate the risks associated with the malicious use of AI in online influence campaigns and maintain the integrity of our digital ecosystem.