The Japanese government has issued administrative guidance to OpenAI, the operator of ChatGPT, for failing to prioritize the protection of users’ personal information. The Personal Information Protection Commission issued the guidance on Thursday under the country’s personal information protection law, citing the risk that ChatGPT could access sensitive personal data without prior consent and infringe on users’ privacy. The commission has not, however, confirmed any specific violation of the law.
OpenAI was founded in 2015 by a group of entrepreneurs and investors, including Elon Musk, with the goal of developing artificial intelligence in a responsible and ethical manner. The organization focuses on creating AI applications that can address real-world problems, such as climate change and disease prevention. ChatGPT, one of OpenAI’s products, is a language-model-based chatbot that generates text in response to a user’s input. It has gained broad popularity since its public release in late 2022, particularly in the field of natural language processing.
The Personal Information Protection Commission is a governmental agency in Japan responsible for protecting individuals’ privacy and personal information. The commission was established in 2016 to oversee Japan’s personal information protection law, originally enacted in 2003, which requires businesses handling personal data to implement measures to protect it and to obtain prior consent before collecting or using sensitive personal information. The commission monitors compliance with the law and investigates violations of individuals’ privacy rights.