Security engineers and researchers at Meta have recently discovered that malware operators are exploiting interest in AI-based generative tools as a new lure for spreading malicious software. By trading on popular topics related to OpenAI’s ChatGPT software, threat actors created malicious browser extensions that pretended to deliver AI-related features.
The ultimate goal of these campaigns is to gain access to user accounts and ad accounts across the internet; to deliver their payloads, the operators host malicious software on file-sharing services such as Dropbox, Google Drive, Mega, MediaFire, Discord, Atlassian’s Trello, Microsoft OneDrive, and iCloud. Since the beginning of this year, Meta security engineers Duc H. Nguyen and Ryan Victory have identified several malware strains using ChatGPT-themed lures and reported them to industry peers so that proper precautions could be taken.
Meta security engineers have blocked the sharing of more than 1,000 ChatGPT-themed malicious links on the company’s platforms. To evade automatic ad review systems, some of these extensions contained functioning ChatGPT features, and the campaigns also relied on popular marketing tools such as link shorteners to disguise where their links actually led.
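Link shorteners defeat naive URL checks because the visible link never matches the page a victim finally lands on. As a rough illustration of the defensive counterpart, here is a minimal Python sketch, assuming the requests library and a purely hypothetical blocklist, that expands a shortened link and checks its real destination:

```python
import requests
from urllib.parse import urlparse

# Hypothetical blocklist of known-bad landing domains; in practice this
# would be fed by threat-intelligence data.
BLOCKLIST = {"malicious-landing.example"}

def resolve_final_url(url: str, timeout: float = 10.0) -> str:
    """Follow HTTP redirects and return the final landing URL.

    A shortened link hides its destination until it is followed, so a
    scanner expands it first and inspects the real target.
    """
    # Some servers reject HEAD; a production scanner would fall back to GET.
    response = requests.head(url, allow_redirects=True, timeout=timeout)
    return response.url

def is_blocklisted(url: str) -> bool:
    """Return True if the link ultimately lands on a blocklisted host."""
    host = urlparse(resolve_final_url(url)).hostname or ""
    return host in BLOCKLIST
```

A real review pipeline would also have to handle redirect loops, meta-refresh and JavaScript redirects, and links that behave differently depending on who is asking, which is exactly where cloaking comes in.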
In response to public awareness and blocked access to larger platforms, some of these campaigns have adapted their tactics: cloaking their landing pages (serving benign content to reviewers while showing real visitors the lure), moving to smaller platforms such as Buy Me a Coffee, and pivoting to other popular topics such as Google’s Bard and TikTok marketing.
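One crude heuristic for spotting cloaking, sketched below in Python with assumed User-Agent strings, is to fetch the same URL under two client identities and compare the responses. Dynamic pages will produce false positives, so a mismatch is a signal for human review, not proof of abuse:

```python
import hashlib
import requests

# Two vantage points: a generic browser and something that looks like an
# automated reviewer. Both User-Agent strings are illustrative.
BROWSER_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
              "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36")
REVIEWER_UA = "Mozilla/5.0 (compatible; AdReviewBot/1.0)"

def fingerprint(url: str, user_agent: str) -> str:
    """Fetch the page under a given User-Agent and hash the body."""
    body = requests.get(url, headers={"User-Agent": user_agent}, timeout=10).content
    return hashlib.sha256(body).hexdigest()

def may_be_cloaking(url: str) -> bool:
    """Flag URLs that serve different content to reviewers and browsers."""
    return fingerprint(url, BROWSER_UA) != fingerprint(url, REVIEWER_UA)
```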
Given the surge in interest around generative AI, Meta is warning users to be wary of unsolicited links or downloads, particularly those that claim to offer ChatGPT-related applications. Users should likewise scrutinize any applications hosted in browser web stores or sidebars that advertise AI-related features.
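For readers who want to vet an extension before installing it, one concrete check is the set of permissions it declares in its manifest.json file. The Python sketch below reads the manifest of an unpacked extension and flags broad permissions; the directory path is a placeholder, and which permissions count as "risky" is a judgment call rather than an official browser list:

```python
import json
from pathlib import Path

# Permissions that deserve a second look in an AI-themed extension;
# this set is illustrative, not an official classification.
RISKY_PERMISSIONS = {"cookies", "webRequest", "history", "tabs", "<all_urls>"}

def audit_extension(extension_dir: str) -> list[str]:
    """Flag broad permissions declared in an unpacked extension's manifest.json."""
    manifest = json.loads((Path(extension_dir) / "manifest.json").read_text())
    declared = set(manifest.get("permissions", []))
    declared |= set(manifest.get("host_permissions", []))  # Manifest V3 splits these out
    return sorted(declared & RISKY_PERMISSIONS)

if __name__ == "__main__":
    flags = audit_extension("./some_unpacked_extension")  # hypothetical path
    if flags:
        print("Requests sensitive permissions:", ", ".join(flags))
```

An extension that merely adds a chat sidebar has little legitimate reason to request access to cookies or to every website a user visits.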