The European Union (EU) data regulator created a special task force this week to help national authorities manage the risks posed by the AI-driven chatbot ChatGPT. OpenAI, the US-based maker of the bot, has come under increasing pressure since Italy temporarily blocked the software last month over privacy complaints. France’s data regulator, CNIL, said on Thursday it had opened an investigation in response to five legal complaints, and Spain’s data protection agency, AEPD, has also launched an investigation into ChatGPT and its developer, OpenAI.
ChatGPT, which can compose poetry and stories and hold conversations from minimal prompts, has caused alarm in both the academic and private sectors over fears that it may be used for unethical practices, such as cheating on exams or spreading disinformation online. Because the technology depends on large volumes of data, concerns have also been raised about privacy and the use of personal information in the datasets on which it is trained.
Eric Bothorel, a Member of Parliament from France, filed a legal complaint citing the bot’s fabrication of personal information about him, such as his birth date and job history. Under the General Data Protection Regulation (GDPR), such systems must ensure that the personal data they provide is accurate, a requirement that appears to be lacking in the case of ChatGPT. To remedy the situation, Italy has issued a list of steps OpenAI must take to regain access to the country’s market. In addition, the EU’s data regulator has launched a working group to investigate the matter and establish cross-border coordination in managing the privacy problems ChatGPT has raised.
OpenAI, the company at the centre of these complaints, was founded in 2015 with the stated mission of ensuring that artificial intelligence is developed for the broad benefit of humanity. Over the years, the company has built a notable track record, including deep-learning models for speech recognition and the GPT series of large language models, which are trained on vast amounts of text through self-supervised deep learning and can produce strikingly human-like language; ChatGPT itself is built on this family of models. OpenAI maintains that it is committed to safeguarding people’s privacy and to sourcing its training data through appropriate channels.
Eric Bothorel, the Member of Parliament mentioned earlier in this article, is an influential figure in French politics. He has spoken on a range of topics, including climate change, finance, and the pandemic. As a proponent of artificial intelligence, Bothorel has worked to foster innovation and to bring the benefits of technology to the wider public.
The European Data Protection Board has made clear that, while it firmly supports the development of AI, the technology must be cultivated within the boundaries of personal rights and freedoms. This position has prompted regulatory bodies across the EU to coordinate and pursue more stringent oversight in response to the rise of ChatGPT and other sophisticated AI applications.
To conclude, ChatGPT, developed by OpenAI, is an impressive AI technology that has benefited many industries and individuals. However, its introduction has also raised a range of issues that have yet to be addressed, with privacy concerns at the forefront. As the EU introduces further measures to ensure accountability and compliance with data protection law, it remains to be seen how OpenAI and other AI firms will adjust their operations in the coming months.