The widespread adoption of artificial intelligence (AI) is giving rise to new security threats. Attackers are already employing AI to enhance phishing and other fraudulent tactics. For instance, the leak of Meta’s 65-billion-parameter language model will inevitably enable more convincing phishing attacks. Moreover, sensitive data is increasingly being fed into AI and machine learning (ML)-based services, making it difficult for security teams to monitor and safeguard their use. A Fishbowl survey found that 68% of workers who use ChatGPT for business purposes do not disclose this to their employers.
As AI’s influence grows, consumers, businesses and governments are increasingly concerned about the misuse of such systems. Social engineering attacks will be the first to benefit from synthetic text, voice and images, and manual efforts such as phishing campaigns can now be automated with AI’s help. Attackers are likely to adopt AI faster than defenders, granting them an advantage: they will be able to launch more sophisticated AI/ML-powered attacks at scale and at low cost.
Biased models could give rise to malicious ones, fuelling an arms race and the emergence of adversarial AI tools designed to fool AI systems, manipulate data or steal sensitive information. Moreover, as more software code is generated by AI, attackers may exploit vulnerabilities in that code to compromise large-scale applications.
The US federal government’s announcement that governance is forthcoming is a promising first step. However, we remain woefully unprepared for AI’s future, which has prompted the nonprofit Future of Life Institute to publish an open letter calling for a pause in AI innovation. While the idea makes for good clickbait, halting innovation is implausible, since attackers will not follow suit. Instead, we need more innovation, action and investment to ensure the ethical and responsible use of AI.
The silver lining is that this creates opportunities for innovative approaches to security enabled by AI and ML. AI-assisted threat hunting and behavioural analytics can significantly strengthen an organisation’s security posture, but these approaches take time and investment to develop. Future articles must consider the paradigm shifts that accompany any new technology and integrate strategies to test their potency. The dystopian possibilities associated with AI must be addressed if it is to benefit society.
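To make the behavioural-analytics idea concrete, here is a minimal, illustrative sketch (not a production detector): it flags a user’s activity as anomalous when it deviates sharply from that user’s historical baseline. The function name `flag_anomalies` and the three-standard-deviation threshold are assumptions chosen for the example.

```python
from statistics import mean, stdev

def flag_anomalies(history, current, threshold=3.0):
    """Return True if `current` deviates more than `threshold`
    standard deviations from the user's historical baseline.

    A toy baseline model: real behavioural analytics would use
    richer features (time of day, peer groups, sequences, etc.).
    """
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        # Perfectly constant history: any change is anomalous.
        return current != mu
    return abs(current - mu) / sigma > threshold

# Example: a user who normally downloads ~10 files a day
# suddenly downloads 500 in one day.
baseline = [8, 12, 10, 9, 11, 10, 10]
print(flag_anomalies(baseline, 500))  # spike well beyond baseline
print(flag_anomalies(baseline, 11))   # within normal variation
```

Even a simple per-user baseline like this illustrates why such approaches need development time: thresholds, features and baselines must be tuned per environment to keep false positives manageable.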