According to new research from Group-IB, cybercriminals are increasingly targeting ChatGPT accounts for credential theft and access to sensitive information. OpenAI’s chatbot stores past user queries and AI responses by default, making each account a potential entry point for threat actors to access users’ information.
The stolen information can be used for malicious purposes such as identity theft, financial fraud, and targeted scams, Dmitry Shestakov, head of threat intelligence at Group-IB, warned in an interview with TechTarget Editorial. Group-IB's Threat Intelligence platform provided visibility into dark web communities, allowing researchers to discover most of the compromised ChatGPT credentials in the logs of information stealers sold by threat actors on illicit marketplaces.
Over the past year, Group-IB identified 101,134 information-stealer-infected devices with saved ChatGPT data. The number of stealer logs containing ChatGPT credentials rose steadily between June 2022 and May 2023, peaking at 26,802 compromised accounts in May 2023, the highest monthly figure on record. Most of the credentials were stolen by Raccoon, a notorious information stealer.
Threat actors use information stealer malware to harvest credentials saved in browsers on infected devices, along with bank card details and cryptocurrency wallet information. The stolen data is then packaged and exfiltrated as a log file.
Group-IB researchers have warned users that their personal or professional information may be at risk due to the increasing number of compromised ChatGPT credentials. While most of the victims are located in the Asia-Pacific region, all ChatGPT users should remain vigilant and take measures to protect their accounts and sensitive information.
Frequently Asked Questions (FAQs) Related to the Above News
What is the new research from Group-IB about?
The new research from Group-IB is about cybercriminals increasingly targeting ChatGPT accounts for credential theft and access to sensitive information.
Why are ChatGPT accounts vulnerable to credential theft?
ChatGPT accounts are vulnerable because the chatbot stores past user queries and AI responses by default, making each compromised account a potential entry point for threat actors to access users' sensitive information.
What can stolen ChatGPT information be used for?
Stolen ChatGPT information can be used for malicious purposes such as identity theft, financial fraud, targeted scams, and more.
How did Group-IB discover most of the compromised ChatGPT credentials?
Group-IB's Threat Intelligence platform provided visibility into dark web communities, allowing researchers to discover most of the compromised ChatGPT credentials in the logs of information stealers sold by threat actors on illicit marketplaces.
How many information-stealer-infected devices with saved ChatGPT data did Group-IB identify over the past year?
Group-IB identified 101,134 information-stealer-infected devices with saved ChatGPT data over the past year.
What is Raccoon malware?
Raccoon malware is a notorious information stealer that threat actors use to collect credentials stored in infected browsers, such as bank card details and cryptocurrency wallet information.
Who are the most affected ChatGPT users according to Group-IB researchers?
According to Group-IB researchers, most of the victims are located in the Asia-Pacific region, but all ChatGPT users should remain vigilant and take measures to protect their accounts and sensitive information.