A European Union privacy task force has criticized OpenAI’s measures to ensure the accuracy of outputs from its chatbot, ChatGPT, stating that the company’s efforts to prevent the chatbot from producing factually incorrect information are not sufficient to comply with EU data accuracy rules.
In a report released on Friday, the task force acknowledged improvements in OpenAI’s transparency measures but emphasized that further investigation is needed to fully assess compliance with EU regulations. The joint investigations by national EU privacy watchdogs are still ongoing, and a comprehensive evaluation of OpenAI’s compliance has not yet been completed.
The task force noted that while OpenAI’s transparency measures help prevent misinterpretation of ChatGPT’s outputs, they do not satisfy the data accuracy principle required by EU regulations. To comply with EU data rules, OpenAI must continue improving its safeguards so that the chatbot’s outputs are factually accurate.
Overall, the task force has raised concerns about OpenAI’s incomplete compliance measures, and the ongoing investigations will determine how effectively the accuracy of ChatGPT’s outputs can be assessed. The focus remains on ensuring that OpenAI adheres to EU data accuracy rules and mitigates the risk of the chatbot generating factually incorrect information.