OpenAI has become a pioneer in using sophisticated AI technologies to build practical applications. Recently, the company has come under scrutiny over the training data behind ChatGPT, the chatbot built on its GPT-3 family of models. Data protection authorities in Europe and elsewhere have raised concerns about how OpenAI has been collecting and processing that data.
Experts have told MIT Technology Review that it is nearly impossible for OpenAI to comply with the rules because of the way data for training AI models is usually collected. The data sets involved are vast: GPT-2 was trained on around 40 gigabytes of text, and GPT-3 on approximately 570 gigabytes. OpenAI has not made public the data set for its latest model, GPT-4.
OpenAI’s “hunger for data” is now being called into question, as authorities have opened investigations into the company’s collection and use of personal data without consent or a legitimate interest. Italy has temporarily blocked the use of ChatGPT, and regulators in France, Germany, Ireland, and Canada are also investigating whether OpenAI followed data processing rules. The European Data Protection Board is setting up a task force of data protection authorities to coordinate enforcement action around OpenAI’s ChatGPT.
OpenAI has been given a deadline of April 30 to comply with the law. That means obtaining consent or demonstrating a legitimate interest for processing people’s data, and giving people the ability to correct errors the chatbot generates about them, to have their data erased, and to object to the company using it. If OpenAI cannot prove compliance, it could be banned in individual countries or across Europe altogether, and could face hefty fines and even be forced to delete its models and the data used to create them.
Such practices would violate not only the European Union’s General Data Protection Regulation but also data protection regulations around the world. The case has the potential to shape how AI companies collect data in the future, and its outcome is being closely watched by regulators and by bodies such as the EU’s highest court, the Court of Justice of the European Union.
To comply, OpenAI will have to demonstrate that it collects data legitimately, either by obtaining consent or by proving a legitimate interest. It will also have to be more transparent about how it collects and uses data, so as to meet the standards mandated by existing data protection regulations.