Researchers have identified a weakness in the popular OpenAI language model ChatGPT that attackers can abuse to spread malware through the software supply chain. Cybersecurity firm Vulcan Cyber's Voyager18 team warns that attackers can exploit ChatGPT's tendency to hallucinate, confidently recommending software packages that do not actually exist, to distribute malicious code.

In a blog post, the researchers showed how these "AI package hallucinations" enable the attack. When ChatGPT recommends an unpublished or non-existent package, an attacker can register a package under that exact name and fill it with malicious code. Developers who trust the recommendation may then download the attacker's package and build it into software that is subsequently used by others.

Such fake packages are difficult to spot, but developers can reduce the risk by verifying that the libraries they download are what they claim to be, and by checking indicators such as a package's creation date and download count.
Developers Urged to Address ChatGPT Hallucination Vulnerabilities Enabling Supply-Chain Malware Attacks