If you’ve ever trusted ChatGPT with your personal secrets, you might want to reconsider. When asked, the OpenAI chatbot says it cannot process, store, or use users’ personal information. However, OpenAI’s privacy policy makes clear that the company can use that information in certain cases.
According to OpenAI, certain types of personal data can be used for purposes such as improving products and services, conducting research, and preventing fraud or criminal activity. This includes data like user names, payment card information, and information exchanged with ChatGPT or the company itself. Data from interactions with OpenAI accounts on social networks, as well as data provided in surveys or events, can also be used for these purposes.
But this issue isn’t exclusive to generative AI. Everyday actions like sending emails through Gmail or sharing files on OneDrive also involve sharing information with service providers. Companies like OpenAI, Microsoft, and Google may disclose information to third parties based on their privacy policies.
However, the General Data Protection Regulation (GDPR) in the European Union strictly prohibits companies from using personal data for purposes other than those specified. Violating this regulation can result in hefty fines of up to 4% of a company’s global annual turnover (or €20 million, whichever is higher).
Generative AI, like ChatGPT, relies on a vast amount of data, some of which is personal, to generate original content and improve its services. But experts are warning against sharing personal information with these AI tools. The Spanish Data Protection Agency (AEPD) advises users to refuse chatbots that ask for unnecessary registration data or transfer data to countries without adequate guarantees. They also recommend limiting the amount of personal data shared or not providing it at all if there is a risk of international transfer.
Even ChatGPT itself cautions users to exercise caution when sharing personal, sensitive, or confidential information. The AI tool explicitly recommends against providing sensitive information through online platforms, including conversations with language models like itself.
If, despite the warnings, a user has already shared personal data with an AI system, there might be an option to have it deleted. OpenAI provides a form on their website through which users can request the removal of personal data. However, OpenAI does mention that submitting a request does not guarantee the removal of information from ChatGPT outputs. The company also cross-checks the information provided to verify its accuracy.
Legal expert Ricard Martínez advises users to take legal action if they believe their personal data has been processed unlawfully. Users have the right to request the deletion of their personal data and the right to data portability, allowing them to download their entire data history if compatible formats are available.
The AEPD recommends anonymizing personal data to minimize its use, ensuring only necessary data is utilized. Anonymization involves converting personal data into a form that cannot be used to identify a specific person.
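To make the distinction concrete, here is a minimal sketch (with hypothetical field names and a demo salt) of masking identifying fields in a record before it is sent to an external service. Note that salted hashing like this is technically pseudonymization, not full anonymization under the GDPR, since re-identification may still be possible with auxiliary data:

```python
import hashlib

def pseudonymize(record: dict, sensitive_fields: set, salt: str) -> dict:
    """Replace the values of sensitive fields with truncated, salted SHA-256 hashes.

    Caution: this is pseudonymization, not true anonymization --
    the original values are hidden but linkage attacks remain possible.
    """
    out = {}
    for key, value in record.items():
        if key in sensitive_fields:
            digest = hashlib.sha256((salt + str(value)).encode("utf-8")).hexdigest()
            out[key] = digest[:12]  # short opaque token instead of the identifier
        else:
            out[key] = value  # non-sensitive fields pass through unchanged
    return out

# Hypothetical user record: only "query" needs to reach the service.
user = {"name": "Ana García", "email": "ana@example.com", "query": "weather tomorrow"}
masked = pseudonymize(user, {"name", "email"}, salt="demo-salt")
print(masked)
```

True anonymization would require removing or aggregating the identifying fields entirely, so that no token links back to the original person.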
As the new EU law on artificial intelligence comes into effect, companies managing personal data will have to be transparent about how their algorithms work and the content they generate. They will also need to implement security systems for large language models and ensure compliance with the GDPR.
Looking ahead, legal expert Borja Adsuara envisions a future where personal data collected by AI systems remains within personal repositories, rather than being used to feed universal generative artificial intelligence.
In conclusion, the risk of trusting ChatGPT or any other AI tool with personal secrets is real. While these tools claim not to process or store personal information, their associated companies can use that information for various purposes. It’s essential for users to exercise caution, limit the amount of personal data shared, and be aware of their rights regarding the deletion and portability of personal data. The future of AI and personal data management lies in transparency, security, and respecting privacy regulations.