The European Data Protection Board (EDPB) has recently taken the first step towards formalizing a universal policy on AI chatbots such as ChatGPT, a conversational AI tool developed by OpenAI, by launching a dedicated task force. The task force comes in the wake of Italy’s temporary ban on ChatGPT over privacy concerns. Spain has likewise opened an investigation into possible data breaches by ChatGPT and alerted the EDPB, prompting the creation of the task force.
OpenAI is an American artificial intelligence research laboratory, co-founded by Elon Musk and Sam Altman, that focuses on developing advanced artificial intelligence technology. Its flagship product, ChatGPT, is among the fastest-growing consumer applications in the world, with more than 100 million monthly active users. The chatbot generates human-like conversational responses and learns from user input. Despite the program’s apparent success, the data-processing techniques it relies on have led some to question whether it violates users’ privacy.
The new EDPB task force will assess the privacy concerns raised by AI chatbots and work towards a framework for an EU-wide policy. This could produce regulations for AI chatbot applications operating across borders and lay the groundwork for future governance of the technology. It is worth noting, however, that the task force will focus in particular on ChatGPT and OpenAI.
The EDPB was established in 2018 under the General Data Protection Regulation (GDPR), and its members are the national data protection authorities of the EU member states. The body bears the significant responsibility of overseeing the application of data protection rules within the European Union. This task force is thus the first step towards policies that help companies comply with those rules and shield users from violations.
The task force will be dedicated to discussing and exchanging information about possible actions data protection authorities can take to safeguard users’ privacy, and it will also act as a pilot to inform the development of AI regulations to be adopted across the European Union.
The creation of this task force serves as a reminder of the responsibilities that both companies and regulatory bodies bear when developing and deploying AI. By balancing rights with responsibilities, it helps ensure that AI is developed responsibly and that users are not left vulnerable to potential misuse of the technology.