The Dark Side of AI: Over 100,000 ChatGPT Accounts Stolen and Traded


Criminals have devised a new tactic to target unsuspecting users of the artificial intelligence chatbot ChatGPT: stealing user accounts and trading them on dark web marketplaces. The threat has already affected more than 100,000 users worldwide.

According to Singapore-based cybersecurity firm Group-IB, 101,134 devices infected with information-stealing malware contained saved ChatGPT credentials. In a press release issued on June 20, the firm said these compromised credentials had been traded on illicit dark web marketplaces over the past year, with the Asia-Pacific region seeing the highest concentration of ChatGPT credentials offered for sale.

The hidden malware captures saved data and transfers it to third parties while users interact with the AI chatbot. Hackers can then use the stolen information to create false personas and manipulate data for various fraudulent schemes.

To protect themselves, users should never disclose sensitive information such as personal or financial details in chatbot conversations, no matter how conversational the interface feels. It is also worth noting that a compromise is not necessarily the fault of the AI provider: the infection may reside on the user's device or within other applications.

Between June 2022 and May 2023, more than 100,000 ChatGPT accounts were compromised, with India accounting for the highest number of affected accounts at 12,632, followed by Pakistan with 9,217, Brazil with 6,531, Vietnam with 4,771, and Egypt with 4,588. The United States ranked sixth, with 2,995 compromised ChatGPT credentials.


Dmitry Shestakov, head of threat intelligence at Group-IB, noted that many enterprises integrate ChatGPT into their operations, with employees using the chatbot for classified correspondence or to optimize proprietary code. Given that ChatGPT retains all conversations by default, compromised account credentials can inadvertently provide threat actors with a wealth of sensitive information.

Group-IB's analysis of criminal underground marketplaces revealed that most of the ChatGPT accounts were compromised by the Raccoon info stealer, which was responsible for more than 78,000 of the stolen credentials. This type of malware harvests data saved in browsers, including login credentials, bank card details, crypto wallet information, cookies, and browsing history, and sends it to the malware operator.

To minimize the risk of compromised ChatGPT accounts, Group-IB recommends users regularly update their passwords and enable two-factor authentication (2FA). With 2FA enabled, users receive an additional verification code, often on their mobile devices, to access the chatbot’s services.
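To illustrate how such a second factor works in general, the sketch below computes a time-based one-time password (TOTP, RFC 6238), the mechanism behind most authenticator apps. It is purely illustrative and makes no claims about how ChatGPT's 2FA is implemented; the shared secret is a placeholder.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238) from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The service and the authenticator app share this secret once during setup,
# then each derives the same short-lived code independently.
print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret for demonstration
```

Because the code changes every 30 seconds and is derived from a secret that never travels with the password, a credential stolen by info-stealing malware is no longer enough on its own to log in.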

Enabling 2FA can be done by accessing the settings of the ChatGPT account and selecting the Data controls option. However, it’s important to note that while 2FA is an effective security measure, it may not be completely foolproof. Therefore, users should consider clearing all saved conversations, especially if they discuss sensitive topics such as personal details, financial information, or work-related matters.

To clear conversations, users can go to the Clear Conversations section in their account and click Confirm clear conversations.

Group-IB highlighted a substantial increase in compromised ChatGPT accounts, mirroring the chatbot's growing popularity. In June 2022, only 74 accounts were compromised; by November the number had reached 1,134, and in January and March 2023 the figures rose to 11,909 and 22,597, respectively.


While ChatGPT creates new risks around access to sensitive information, the chatbot can also help hackers enhance their criminal activities. Cyber threat intelligence firm Check Point Research (CPR) outlined in a blog post that ChatGPT and similar AI models increase the potential for hacking threats: with ChatGPT's help generating code, even less-skilled individuals can launch sophisticated cyberattacks, automating complicated attack processes and easily producing multiple variations of their scripts.

Additionally, CPR warned of Russian cybercriminals attempting to bypass ChatGPT's restrictions for criminal purposes, stating that hackers are most likely trying to integrate and test ChatGPT in their day-to-day operations because of its low cost and AI capabilities.

As the popularity of ChatGPT continues to grow, it is crucial for users to remain vigilant and take necessary precautions to protect their accounts and personal information. Regular password updates, enabling 2FA, and clearing saved conversations when discussing sensitive topics should be habitual practices to avoid falling victim to hackers.

Frequently Asked Questions (FAQs) Related to the Above News

What is ChatGPT?

ChatGPT is an artificial intelligence chatbot developed by OpenAI that lets users hold interactive, natural-language conversations with a computer program.

How many ChatGPT accounts have been stolen and traded?

Over 100,000 ChatGPT accounts have been compromised and traded on illegal online criminal marketplaces.

How are criminals stealing ChatGPT accounts?

Criminals are using information-stealing malware to capture saved ChatGPT credentials from infected devices.

Which region has seen the highest concentration of compromised ChatGPT credentials?

The Asia-Pacific region has seen the highest concentration of compromised ChatGPT credentials being offered for sale.

What can hackers do with stolen ChatGPT information?

Hackers can use stolen ChatGPT information to create false personas and manipulate data for fraudulent activities.

Whose responsibility is it if a ChatGPT account is compromised?

The infection could be present on the device or within other applications, so it may not necessarily be the fault of the AI provider.

How can users protect themselves from having their ChatGPT accounts compromised?

Users should never disclose sensitive information and should regularly update their passwords. It is also recommended to enable two-factor authentication (2FA) and clear saved conversations regularly.

What is two-factor authentication (2FA)?

Two-factor authentication (2FA) is an additional security measure where users receive a verification code, often on their mobile devices, to access the chatbot's services.

How can users enable 2FA for their ChatGPT accounts?

Users can enable 2FA by accessing the settings of their ChatGPT account and selecting the Data controls option.

Is two-factor authentication foolproof?

While two-factor authentication is an effective security measure, it may not be completely foolproof. Users should still take additional precautions to protect their accounts.

How can users clear their conversations on ChatGPT?

Users can go to the Clear Conversations section in their ChatGPT account and click Confirm clear conversations to clear their conversations.

Is there a risk of hackers using ChatGPT for criminal activities?

Yes, cyber threat intelligence firms have warned that hackers can use ChatGPT to enhance their criminal activities, such as automating complicated attack processes and creating multiple variations of scripts easily.

What precautions should users take to protect their ChatGPT accounts?

Users should regularly update passwords, enable two-factor authentication, and clear saved conversations when discussing sensitive topics to protect their ChatGPT accounts.


Aniket Patel
Aniket is a skilled writer at ChatGPT Global News, contributing to the ChatGPT News category. With a passion for exploring the diverse applications of ChatGPT, Aniket brings informative and engaging content to our readers. His articles cover a wide range of topics, showcasing the versatility and impact of ChatGPT in various domains.
