Title: The Dark Side of AI: Over 100,000 ChatGPT Accounts Stolen and Traded
Criminals have devised a new tactic to target unsuspecting users of the artificial intelligence chatbot ChatGPT: stealing user accounts and trading them on illicit dark web marketplaces. This threat has already affected more than 100,000 individuals worldwide.
According to Singapore-based cybersecurity firm Group-IB, 101,134 devices have been infected with information-stealing malware that contained saved ChatGPT credentials. In a press release issued on June 20, the cybersecurity firm highlighted that these compromised credentials have been traded on illicit dark web marketplaces over the past year. The Asia-Pacific region, in particular, has seen the highest concentration of ChatGPT credentials being offered for sale.
The malware runs unnoticed on infected devices, harvesting saved credentials and other stored data and transferring them to the attackers. Hackers can then use this stolen information to impersonate victims and carry out various fraudulent activities.
To protect themselves, users should never share sensitive personal or financial details with the chatbot, however conversational it may seem. It is also worth noting that this issue is not necessarily the fault of the AI provider: the infection typically resides on the user's device or within other applications.
Between June 2022 and May 2023, more than 100,000 ChatGPT accounts were compromised, with India accounting for the highest number of affected accounts at 12,632, followed by Pakistan with 9,217, Brazil with 6,531, Vietnam with 4,771, and Egypt with 4,588. The United States ranked sixth, with 2,995 compromised ChatGPT credentials.
Dmitry Shestakov, head of threat intelligence at Group-IB, noted that many enterprises integrate ChatGPT into their operations, with employees using the chatbot for classified correspondence or to optimize proprietary code. Given that ChatGPT retains all conversations by default, compromised account credentials can inadvertently provide threat actors with a wealth of sensitive information.
Group-IB’s analysis of criminal underground marketplaces revealed that most ChatGPT accounts were compromised by the Raccoon info stealer malware, responsible for over 78,000 of the stolen credentials. This type of malware harvests credentials saved in browsers, along with bank card details, crypto wallet data, cookies, browsing history, and more, and sends them to the malware operator.
To minimize the risk of compromised ChatGPT accounts, Group-IB recommends users regularly update their passwords and enable two-factor authentication (2FA). With 2FA enabled, users receive an additional verification code, often on their mobile devices, to access the chatbot’s services.
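To illustrate why 2FA helps: with a time-based one-time password (TOTP), the scheme used by most authenticator apps, a stolen password alone is no longer enough to log in, because each code is derived from a shared secret the attacker does not have. A minimal sketch of standard RFC 6238 TOTP generation in Python (illustrative only; the secret below is a published test value, not anything ChatGPT-specific):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Generate an RFC 6238 time-based one-time password.

    secret_b32: base32-encoded shared secret (as shown in authenticator apps)
    at: Unix timestamp to generate the code for (defaults to now)
    """
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of completed time steps since the Unix epoch
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)  # counter as 8-byte big-endian
    digest = hmac.new(key, msg, "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset
    # taken from the low nibble of the last digest byte
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret "12345678901234567890" in base32
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # → 287082
```

The server performs the same computation with the same shared secret and accepts the login only if the codes match, which is why infostealer malware that grabs only a saved password cannot complete the sign-in on its own.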
2FA can be enabled from the ChatGPT account settings, under the Data controls option. However, while 2FA is an effective security measure, it is not completely foolproof. Users should therefore also consider clearing saved conversations, especially those that touch on sensitive topics such as personal details, financial information, or work-related matters.
To clear conversations, users can go to the Clear Conversations section in their account and click Confirm clear conversations.
Group-IB highlighted that there has been a substantial increase in compromised ChatGPT accounts, mirroring the chatbot’s growing popularity. In June 2022, only 74 accounts were compromised, while by November, the number reached 1,134. In January and March 2023, the figures rose to 11,909 and 22,597 compromised accounts, respectively.
While ChatGPT poses new risks around the exposure of sensitive information, the chatbot can also help hackers enhance their criminal activities. Cyber threat intelligence firm Check Point Research (CPR) outlined in a blog post that ChatGPT and similar AI models increase the potential for hacking threats. With ChatGPT's assistance in generating code, even less-skilled individuals can launch sophisticated cyber attacks, automating complicated attack processes and easily producing multiple variations of their scripts.
Additionally, CPR warned of Russian cybercriminals attempting to bypass ChatGPT’s restrictions for potential criminal activities, stating that hackers are most likely trying to integrate and test ChatGPT in their day-to-day criminal operations because of its cost-efficiency and AI capabilities.
As the popularity of ChatGPT continues to grow, it is crucial for users to remain vigilant and take necessary precautions to protect their accounts and personal information. Regular password updates, enabling 2FA, and clearing saved conversations when discussing sensitive topics should be habitual practices to avoid falling victim to hackers.