Hackers are targeting people with malicious code hidden in ChatGPT and other generative artificial intelligence (AI) tools, Meta security experts warn. Guy Rosen, the social media giant's chief information security officer, described the deceptive tactics in a news briefing: his analysts spotted malicious software disguised as ChatGPT and other AI tools within the last month. The company also found malicious browser extensions advertised as having "generative AI capabilities" that would instead infect users' devices.
Rosen encourages users to be extra vigilant with their digital security, as these scams exploit the rapidly growing interest in AI and cryptocurrency. Cybercriminals use clickbait to lure people into clicking on malicious links and programs, installing malicious code on the victim's device to harvest data and personal information. Facebook's parent company reported blocking more than a thousand web addresses associated with the scam.
Nathaniel Gleicher, Meta's Head of Security Policy, believes "it is only a matter of time" before generative AI is weaponized, which is why Meta is also attempting to use the technology defensively against hackers' attacks and other online campaigns. To stay safe, users are advised to remain aware of the ongoing risk of such scams and take the necessary precautions.