The Dutch data protection authority, the AP, has expressed concern about the way artificial intelligence companies handle personal data and has asked OpenAI, the creator of ChatGPT, for more information. ChatGPT is a widely used chatbot that provides credible answers to questions and has been used by 1.5 million people in the Netherlands since its launch four months ago. The chatbot draws on data from online sources as well as on the questions users ask, which may contain sensitive personal information, such as details of medical issues or marital disputes. The AP wants to know how the algorithm is trained on user questions and how data is collected from the internet. The watchdog is also worried about the accuracy and appropriateness of the information the system generates for end users, and about whether OpenAI can rectify or delete that data when necessary.
In April, the Italian data protection authority banned ChatGPT over privacy concerns; the ban was lifted by the end of the month after OpenAI addressed the issues the regulator had raised. European privacy regulators have since established a ChatGPT task force to coordinate their regulatory approaches. According to OpenAI, ChatGPT users can now control whether their conversations are used to train the system by turning off chat history.