A new attack technique called AI package hallucination has been discovered. When asked for solutions to coding problems, ChatGPT, a generative AI platform, sometimes recommends code libraries that do not actually exist. Attackers can register those hallucinated package names on known repositories and publish malicious code under them, turning the hallucination into a software supply chain attack that tricks developers into installing the attacker's package.
The Vulcan Cyber Voyager18 research team discovered the technique and has issued a warning, given the broad adoption of open-source code libraries and the nature of software supply chains. The researchers stressed the need for early detection and vulnerability testing in this evolving field.
The technique has significant implications for developers who rely on ChatGPT for answers and presents an opportunity for attackers. An attacker can publish a real package under one of the 'fake' names recommended by ChatGPT, so that victims unknowingly download and run malicious code.
Compromising the software supply chain through shared and imported third-party libraries is not a new attack technique, so developers, and other potential victims, should be cautious and follow basic security hygiene. This includes evaluating all code for security before downloading or executing it, following secure coding practices, and not blindly trusting packages recommended by ChatGPT or the internet in general.
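One way to put that hygiene into practice is to vet AI-suggested package names before installing them. The sketch below, a minimal illustration rather than a complete defense, checks suggestions against a hypothetical, locally curated allowlist of already-reviewed packages; in a real workflow you would also confirm that each name actually exists on the registry (for example via PyPI's JSON API) and inspect its release history and maintainers before trusting it.

```python
# Minimal sketch: triage package names suggested by an AI assistant.
# KNOWN_GOOD is a hypothetical allowlist your team maintains; anything
# not on it should be manually reviewed, not installed on faith.

KNOWN_GOOD = {"requests", "numpy", "flask"}  # assumed, locally reviewed set

def vet_suggestions(suggested):
    """Split AI-suggested package names into trusted and needs-review lists."""
    trusted = [p for p in suggested if p.lower() in KNOWN_GOOD]
    needs_review = [p for p in suggested if p.lower() not in KNOWN_GOOD]
    return trusted, needs_review

if __name__ == "__main__":
    # "arangodb-helper" is a made-up name standing in for a hallucinated package
    suggestions = ["requests", "arangodb-helper"]
    trusted, review = vet_suggestions(suggestions)
    print("trusted:", trusted)
    print("needs review before install:", review)
```

A simple allowlist will not catch every threat, but it forces a deliberate review step before any unfamiliar, possibly hallucinated, package name reaches `pip install`.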
In conclusion, as AI technology advances, both cybersecurity offense and defense are evolving. The arms race between those who prioritize security and those who don't has been going on for years. Security researchers and software publishers will therefore have to leverage generative AI themselves to detect new threats and alert cybersecurity professionals in time to prevent this form of exploitation.