Cybercriminals are constantly evolving their tactics to deceive unsuspecting users, and a recent trend is the distribution of malware disguised as popular generative AI (GenAI) tools such as ChatGPT and Midjourney. According to a report by security firm ESET, threat actors are using fake ads, phishing websites, and browser extensions to lure victims into downloading malicious software.
These malicious programs often masquerade as legitimate AI assistants such as ChatGPT, the video generator Sora, the image generators Midjourney and DALL-E, and the photo editor Evoto. Some even claim to offer unreleased versions of these tools, enticing users with the promise of advanced features. In reality, the fake apps can carry various types of malware, including infostealers, ransomware, and remote access Trojans (RATs).
Phishing sites and social media ads are common distribution vectors for this type of malware: more than 650,000 attempts to access malicious domains related to ChatGPT were recorded in the second half of 2023 alone. Cybercriminals have also deployed malicious browser extensions disguised as popular services such as Google Translate, tricking users into installing harmful software.
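To illustrate why lookalike domains are such an effective lure, the sketch below flags hostnames that embed or closely resemble a well-known brand name like "chatgpt". This is a minimal, hypothetical example for intuition only, not the methodology used in the report; real detection pipelines combine many more signals (registration age, certificates, hosting reputation, and so on).

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]


def looks_like_brand(domain: str, brand: str = "chatgpt",
                     max_distance: int = 2) -> bool:
    """Heuristic: flag a domain whose first label embeds the brand
    (e.g. 'chatgpt-desktop.net') or sits within a small edit distance
    of it (e.g. the typosquat 'chatqpt.com')."""
    label = domain.lower().split(".")[0]
    if label == brand:
        return False  # exact match, not a lookalike
    if brand in label:
        return True
    return levenshtein(label, brand) <= max_distance
```

For example, `looks_like_brand("chatgpt-desktop.net")` and `looks_like_brand("chatqpt.com")` both return `True`, while an unrelated domain like `google.com` does not trip the heuristic.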
In some cases, threat actors have even hijacked legitimate accounts to create the illusion of authenticity, promoting fake ads for GenAI tools through these channels. This deceptive tactic makes it easier for cybercriminals to reach a larger audience and increase the likelihood of successful infections.
As the cybersecurity landscape continues to evolve, it is crucial for users to remain vigilant and exercise caution when interacting with online content. By staying informed about the latest threats and following basic safety practices, such as downloading software only from official vendor sites and treating ads for unreleased AI tools with suspicion, individuals can better protect themselves against malicious actors seeking to exploit the popularity of GenAI tools.