Group-IB, a Singapore-based cybersecurity company, has revealed that login credentials for more than 100,000 accounts on the artificial intelligence chatbot ChatGPT have been leaked. The compromised information was traded on the dark web between June 2022 and May 2023, raising concerns about the security of user data. Group-IB found that more than 101,000 compromised devices containing saved ChatGPT credentials were traded on dark web marketplaces, with availability peaking in May 2023 at almost 27,000 ChatGPT-related credentials.
India accounted for the highest number of leaked credentials, surpassing 12,500, while the United States ranked sixth overall with nearly 3,000 leaked logins. The root cause of the breach was not any weakness in ChatGPT's infrastructure but rather the compromise of accounts that used direct authentication with a username and password; it is reasonable to assume that such accounts were targeted primarily.
The exposure of confidential company information poses a threat to corporations, since unauthorized individuals can gain access to user queries and chat history, which ChatGPT stores by default. The fact that cybercriminals infected individual user devices worldwide is a reminder that keeping software up to date and using two-factor authentication remain essential security practices.
Individuals and organizations must take proactive steps to strengthen their security in the wake of this massive leak. Regular software updates and patches help close the vulnerabilities that cybercriminals exploit. Two-factor authentication adds a further layer of protection, making it more difficult for unauthorized individuals to gain access. Users should also exercise caution when sharing sensitive information with AI chatbots and keep in mind the risks of storing personal or confidential data in chat history.
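To make the two-factor authentication recommendation concrete, the minimal sketch below shows how a time-based one-time password (TOTP), the mechanism behind most authenticator apps, can be issued and verified. It uses the open-source pyotp library purely as an illustration; the account name and service name are placeholders, and the snippet is not tied to ChatGPT or Group-IB in any way.

    import pyotp

    # Generate a random base32 secret for the user; this is shared once
    # with their authenticator app (typically via a QR code).
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    # Provisioning URI the user would scan (placeholder account and issuer names).
    print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleService"))

    # At login, the service checks the six-digit code the user types in,
    # so a stolen password alone is no longer enough to gain access.
    submitted = input("Enter the code from your authenticator app: ")
    print("Access granted" if totp.verify(submitted) else "Access denied")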
Notably, the press release announcing the breach was itself written with the assistance of ChatGPT, a sign of the growing prominence of AI language models across domains and of the potential for collaboration between humans and AI. Nevertheless, the incident underscores the need to secure such platforms against unauthorized access and misuse of information.