Meta's security analysts have issued a warning about malicious ChatGPT imposters that have circulated widely since March of this year. Distributed as web browser extensions and toolbars, some of these deceptive tools even appear on official web stores. Scammers have also reportedly used Facebook ads as a distribution channel.
Some of these fake ChatGPT chatbots even bundle working AI functionality of their own. To curb their spread, Meta has blocked more than a thousand unique links to these malicious tools across its platforms. It has also published detailed technical information about the methods scammers use to compromise accounts, such as hijacking logged-in sessions to gain and maintain access – a technique similar to the one used in the attack that took down the popular gadget review channel Linus Tech Tips.
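Session hijacking works because many sites identify a logged-in user purely by a session cookie: an attacker who steals that cookie (for example, via a malicious browser extension) can replay it in their own requests and be treated as the victim, with no password needed. The following is a minimal, purely illustrative sketch of that replay step, using Python's standard library; the cookie name `session_id`, the token value, and the URL are all hypothetical and not taken from Meta's report.

```python
import urllib.request

def forge_session_request(stolen_cookie: str, url: str) -> urllib.request.Request:
    """Build (but do not send) a request that replays a stolen session cookie.

    Illustrative only: a real attacker would send this request and the
    server, seeing a valid session cookie, would serve the victim's
    account as if they were logged in.
    """
    req = urllib.request.Request(url)
    # The stolen token is attached exactly as the victim's browser would send it.
    req.add_header("Cookie", f"session_id={stolen_cookie}")
    return req

# Hypothetical stolen token and target URL, for demonstration only.
req = forge_session_request("deadbeef123", "https://example.com/account")
print(req.get_header("Cookie"))  # session_id=deadbeef123
```

This is also why defenders revoke all active sessions after a compromise, as Linus Tech Tips did: invalidating the session tokens server-side makes the stolen cookies worthless even if the attacker still holds them.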
Meta, for its part, says it is committed to protecting its users from online threats. The company works with dozens of popular online services to improve web protection, offers security checkup and encryption features for users who need extra protection, and maintains open-source tools designed to make using the web safer and smoother.
The Washington Post was among the first to break the news of these malicious ChatGPT imposters. The author of the article, Peter Markson, is a freelance reporter who has written extensively on cybersecurity and cybercrime. He is also a popular guest speaker and the editor of several security-related books. As an expert in the field, he has made it his mission to raise public awareness of cyber threats and stresses the importance of mitigating them.