A new report from Facebook’s parent company Meta warns users about malware disguising itself as AI tools. Much of the recent activity observed by Meta trades on the popularity of OpenAI’s chatbot ChatGPT: the company’s security team has identified around ten new malware families masquerading as ChatGPT and similar AI chatbot tools.
These threat actors distribute browser extensions for Chrome and Firefox that pose as AI chatbot tools. While some of the extensions do provide the promised chatbot features, they also carry malicious code that can access users’ devices. Meta has blocked 1,000 unique URLs hosting these malicious extensions from being shared across Facebook, Instagram, and WhatsApp.
Once the malware is installed, attackers can launch a range of attacks, from hijacking business accounts to using automated services to grant themselves advertising permissions. Meta has reported the malicious links to the respective domain registrars and hosting providers.
Meta’s security report dives into the technical details of the malware, citing Ducktail and NodeStealer as two of the more pressing threats.
Meta, formerly known as Facebook, is the parent company of Facebook, Instagram, and WhatsApp. Its security teams investigate and disrupt malware campaigns, account compromise, and other threats targeting users across its platforms, and publish regular threat reports on their findings.
Rafael Henrique is a security researcher at Meta and an author of the new security report detailing these recent malware threats. As a researcher and practitioner in the information security space, he has extensive experience in malware analysis, system hardening, and penetration testing.