Is ChatGPT a Big Privacy Risk?

With the immense success of OpenAI's ChatGPT (Chat Generative Pre-trained Transformer) Artificial Intelligence chatbot, concerns have been raised about the risks it poses to user privacy. Even if you've never used ChatGPT, the AI platform could still hold more of your data than you realize.
OpenAI's massive success has been attributed to training ChatGPT on more than 300 billion words scraped from the internet, including articles, blog posts, social media sites, and books. However, it is difficult to tell whether this corpus includes sensitive personal data, as well as information shared without users' consent.
In March 2023, Italy became the first country to act, with its data protection authority temporarily blocking ChatGPT over how the service collects and shares this data. This called into question the service's compliance with the GDPR (General Data Protection Regulation) and other privacy laws.
When users sign up, ChatGPT collects Personally Identifiable Information (PII) including account information, communication details, and social media information, as well as technical information about their device and IP address. Furthermore, the conversations you have with the chatbot may be used for training, and according to the service's FAQ, specific prompts cannot be deleted.
If you want to maintain your privacy when using ChatGPT, there are several things you can do: create an account that isn't tied to your personal details, limit what you share with the bot (a simple redaction sketch follows below), and exercise your right to be forgotten. If you're in Europe or California, you can also delete your OpenAI account entirely, though it is unclear whether the company completely removes that data for users elsewhere.
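One practical way to limit what you share is to scrub obvious personal identifiers from a prompt before it leaves your machine. Below is a minimal Python sketch of this idea; the `redact` helper and its patterns are illustrative assumptions for this article, not part of any OpenAI tooling, and real PII detection needs far broader coverage (names, addresses, account numbers, and so on).

```python
import re

# Illustrative patterns only -- a real redaction pass would need
# many more rules than emails and phone numbers.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,3}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace common PII patterns with placeholder tags before
    the text is sent to a chatbot or any third-party service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    prompt = "Email me at jane.doe@example.com or call 555-123-4567."
    print(redact(prompt))  # -> "Email me at [EMAIL] or call [PHONE]."
```

Client-side scrubbing like this only reduces what reaches the provider going forward; it does nothing about data that may already sit in a training corpus.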
Discussions about regulating AI platforms, and ChatGPT in particular, are ongoing, and governments around the world are looking into the matter. Until regulation takes shape, it is best to use the software responsibly and stay mindful of what data you allow it to collect and share.