Artificial Intelligence Fuels New Cybersecurity Threats in 2024, Experts Warn
As we enter the new year, experts are sounding the alarm on escalating cybersecurity threats fueled by artificial intelligence (AI). CrowdStrike Chief Security Officer Shawn Henry recently emphasized that AI has put a powerful tool in the hands of the average person, giving even unskilled attackers the means to defeat cybersecurity measures and gain unauthorized access to corporate networks. This development has major implications for everyone, as adversaries increasingly leverage AI to exploit vulnerabilities and spread misinformation.
AI has proven instrumental in helping cybercriminals breach security systems and penetrate corporate networks. Equally concerning is the use of AI to create sophisticated deepfakes through advanced video, audio, and text manipulation. Misinformation campaigns amplified by AI-powered deepfakes can have far-reaching consequences, particularly during critical events such as elections.
Henry stressed the importance of critically evaluating the source of information and never taking online content at face value. He emphasized the need to verify the origin of information through multiple sources and understand the motivations behind those disseminating it. Unfortunately, many individuals fail to conduct thorough fact-checking, providing an opportunity for cybercriminals to exploit their trust.
The upcoming 2024 election year for several countries, including the United States, Mexico, South Africa, Taiwan, and India, raises concerns about the potential impact of AI-driven cyber threats on democratic processes. Henry highlighted the long-standing history of foreign adversaries targeting elections, with notable instances in 2016 and even as far back as 2008. Misinformation and disinformation campaigns are likely to intensify during this critical period, as cybercriminals utilize AI to manipulate public sentiment and create chaos.
One particular area of concern in the 2024 U.S. election is the security of voting machines. Henry, however, expressed optimism that the decentralized nature of the U.S. voting system would deter large-scale hacking attempts. He nevertheless acknowledged the broader threat posed by AI, which gives less technically skilled cybercriminals access to powerful tools for developing malicious software, crafting phishing emails, and carrying out other harmful activities.
The potential for AI misuse extends beyond network intrusion. A report published by the RAND Corporation revealed the possibility of terrorists exploiting generative AI in planning biological attacks. The study indicated that jailbreaking techniques and prompt engineering could circumvent AI safeguards, allowing malicious actors to extract information the models are designed to withhold.
Moreover, email phishing attacks have surged 1,265% since the beginning of 2023, as reported by cybersecurity firm SlashNext. This surge underscores the pressing need for global policymakers to develop regulations and strategies to counter the misuse of generative AI. Authorities worldwide have been actively exploring measures to clamp down on the spread of AI-generated deepfakes, with organizations like the United Nations and the U.S. Federal Election Commission taking action.
Technology giants Microsoft and Meta have also implemented policies aimed at curbing AI-powered political misinformation. Microsoft warned that authoritarian nation-states may combine traditional techniques with AI to undermine the integrity of electoral systems. These proactive efforts by industry leaders and policymakers are crucial in combating the potential consequences of AI-driven cyber threats.
The rapid rise in AI-fueled cybersecurity threats demands heightened awareness and vigilance from individuals and organizations alike. By adopting a critical mindset, questioning information sources, and verifying claims through reliable channels, we can better protect ourselves from the risks of AI-driven manipulation. As Pope Francis astutely noted, artificial intelligence should serve humanity’s best interests and aspirations, not compete with them.