ChatGPT, the artificial intelligence chatbot developed by OpenAI, has quickly gained mainstream popularity. Unfortunately, this rise in popularity has been accompanied by malicious actors looking to exploit unsuspecting users. Meta, the company behind Facebook, is the latest to warn publicly about the growing wave of ChatGPT scams, reporting that it has discovered more than 1,000 malicious links claiming to be associated with the chatbot.
This led Guy Rosen, Meta’s Chief Information Security Officer, to assert that “ChatGPT is the new crypto,” referencing the multitude of scams that accompanied the cryptocurrency boom. Fake ChatGPT apps have been found on the Mac App Store, the Google Play Store, and other online sources. These apps are often limited in functionality and fraudulently display OpenAI and ChatGPT imagery. Scammers also operate phishing websites that imitate the official OpenAI ChatGPT domain and distribute malware designed to collect personal and payment information.
Alex Kleber, a researcher for the Privacy 1st blog, reported on the large number of these fake ChatGPT clones in the Mac App Store. He highlighted specific developers using multiple developer accounts to spam the store with these fake apps, which makes it more challenging for legitimate developers to list, publish, and sell apps that could genuinely improve users’ ChatGPT experience. The fraud has been further exposed by Dominic Alvieri, a security researcher who highlighted a website imitating the official OpenAI domain in a recent Twitter thread. In addition, the research and intelligence firm Cyble claimed to have found more than 50 malicious apps on the Google Play Store in a published report.
Given the severity of these scams, users should double-check any app, extension, or site they use to confirm that it is legitimate and published by a verified developer. It is also wise to research the app, extension, or site in question and see what other users and security professionals say about it. As malicious actors continue to target users of AI-assisted tools, such precautions are essential to staying safe.
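One simple check readers can automate is comparing a link’s exact hostname against the domains they know to be official, since phishing sites often embed the real brand name inside a lookalike address. The sketch below illustrates the idea in Python; the domain list here is an assumption for illustration, not an authoritative list, so verify the current official domains yourself.

```python
from urllib.parse import urlparse

# Assumed official domains for illustration only -- confirm these yourself.
OFFICIAL_DOMAINS = {"openai.com", "chat.openai.com"}

def looks_official(url: str) -> bool:
    """Return True only if the URL's hostname exactly matches a known domain.

    A substring check is not enough: scam sites often include the real
    brand name in a lookalike hostname, e.g. "openai.com.fake-login.net".
    """
    host = (urlparse(url).hostname or "").lower()
    return host in OFFICIAL_DOMAINS

print(looks_official("https://chat.openai.com/"))           # exact match
print(looks_official("https://openai.com.fake-login.net"))  # lookalike domain
```

The key design point is the exact-match comparison: matching the full hostname, rather than searching for "openai" anywhere in the URL, is what defeats the lookalike-domain trick described above.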