If you’re one of the many people who have been using ChatGPT to simplify your work, you might have unknowingly put your personal information at risk. A new report by Singapore-based cybersecurity firm Group-IB revealed that credentials for 101,134 ChatGPT accounts were found in the logs of information-stealing malware over the past year. Many of those stolen credentials were traded on the dark web, with the Asia-Pacific region the most affected.
The report also showed that the majority of logs containing compromised ChatGPT accounts came from a malware strain known as the Raccoon info stealer, which can expose confidential or sensitive information to attackers. Cybersecurity experts warn that many people don’t realize their ChatGPT accounts hold exactly the kind of sensitive information cybercriminals are after.
ChatGPT credentials have grown increasingly popular in underground communities, making them more attractive to threat actors. It may therefore be wise to disable the chat saving feature unless you genuinely need it, to limit how much data an attacker could access from a compromised account.
Info stealers are a class of malware that harvests digital assets stored on a compromised system, collecting information such as crypto wallet records, access credentials, passwords, and browser-saved logins. To gather as much data as possible, they spread through phishing and other means, infecting as many computers as they can.
Jake Moore, cybersecurity advisor at ESET, said that people should exercise caution when entering information into cloud-based services and chatbots, since what they type may contain sensitive or confidential data.
In conclusion, if you have been using ChatGPT, be mindful of what is being stored and avoid entering sensitive data wherever possible. And if chat saving is enabled on your account, consider turning it off to limit the amount of data that could be accessed and exploited by hackers.
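For developers who send text to chatbot APIs programmatically, one practical precaution along the same lines is to scrub obviously sensitive strings before a prompt ever leaves your machine. The short Python sketch below is purely illustrative: the redact() helper and its regular-expression patterns are assumptions made for this example, not a feature of ChatGPT or of any particular library, and real deployments would need far more thorough filtering.

```python
import re

# Hypothetical patterns for a few common kinds of sensitive strings.
# These are illustrative assumptions, not an exhaustive or official list.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a known sensitive pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarise this note from jane.doe@example.com about card 4111 1111 1111 1111."
    # Prints the prompt with placeholders in place of the email address and card number.
    print(redact(raw))
```

The point is not the specific patterns but the habit: anything you do not send to a chatbot in the first place cannot later be exposed through a stolen account or saved chat history.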