As lawmakers in several European Union member states debate whether to ban particular chatbots, the discussion around AI regulation is in full swing. The debate was sparked by the recent decision of Italy's data protection authority, the Garante, to impose a temporary ban on ChatGPT, the artificial intelligence (AI) chatbot developed by Microsoft Corp-backed OpenAI. The Garante argued that the software breached data protection law because it was unclear how user data was being processed. OpenAI has since highlighted its commitment to privacy and agreed to meet the Garante's requirements by April 30.
In the wake of the Italian decision, Spain and France have also raised questions about ChatGPT. Although the European Union has been discussing AI regulation for two years, it is only recently that the debate has become this pressing, not least because AI is no longer the comparatively rudimentary technology it was two years ago. MEP Axel Voss has warned that much of today's technology will already be outdated by the time the regulations enter into force.
The proposed EU Artificial Intelligence Act would assign risk levels to AI applications; only those deemed 'high risk' or 'limited risk' would be subject to special rules on documentation and the disclosure of data use. AI applications that monitor and evaluate people's social behaviour, as well as certain facial recognition technologies, would be banned outright. It remains uncertain, however, whether ChatGPT falls within the scope of the proposed legislation.
EU Commissioner Thierry Breton has championed the benefits of AI for a digital society while arguing that the EU should not be dependent on external providers. He maintains that data should be stored and processed within the EU, a goal legislators are also seeking to achieve.
Mark Brakel of the nonprofit Future of Life Institute believes there must be consequences for companies that fail to adequately manage the risks associated with their products. He also insists that companies should publish the results of their risk assessments in order to be held accountable.
It is also worth noting that ChatGPT's creator, OpenAI, is based in the US and could soon face competition from other US companies such as Google or Elon Musk's Twitter. Chinese corporations are also bringing AI products to market, with Baidu leading the way with its chatbot, Ernie.
OpenAI is the Microsoft Corp-backed technology company behind the ChatGPT chatbot. Founded in December 2015, the company conducts cutting-edge research in machine learning and aims to make AI both trustworthy and beneficial to humanity. OpenAI's stated mission is to ensure that artificial general intelligence (AGI) is developed for the good of humanity, and it has reaffirmed its commitment to data privacy following the Garante's ban on the application.
German MEP Axel Voss is one of the main drafters of the proposed EU Artificial Intelligence Act. Voss believes that Europe has fallen behind in the global AI race and should approach the technology's development with more optimism. He stresses that the European Parliament must stop being guided by fear and regulate without curtailing potential progress or becoming overly restrictive.
It is clear, then, that the European Union must strike a balance between consumer protection, regulation, and the free development of its economy and research. The rules must not be too burdensome for companies and developers; otherwise, Europe risks becoming wholly reliant on foreign providers, reducing it to little more than a consumer nation, which would be unacceptable.