OpenAI, a San Francisco-based company, is at the center of a data privacy case that sparked a ban on its widely used chatbot, ChatGPT, in Italy. The Italian Data Protection Authority (Garante) issued the ban after some users’ messages and payment information were exposed to others. Garante also raised concerns about the mass collection of data used to train ChatGPT’s algorithms and the risk that the system could generate false information about individuals.
Last week, OpenAI executives, including CEO Sam Altman, participated in a video call with Garante’s commissioners. During the call, OpenAI promised to address the concerns and take steps to resolve Garante’s suspicions. Other nations, including Ireland, France, Canada and the UK, are also beginning to pay attention to possible ethical and societal risks posed by AI and generative technology.
In response to concerns, OpenAI published a blog post outlining its approach to AI safety, including removing personal information from training data when feasible, fine-tuning its models to reject requests for personal information, and acting on requests to delete personal information from its systems.
OpenAI is a research laboratory dedicated to discovering and developing helpful AI technologies. Founded in late 2015 by a group of leading AI researchers, OpenAI aims to advance digital intelligence in ways that benefit humanity. OpenAI’s CEO, Sam Altman, has been an artificial intelligence evangelist for years and previously led Y Combinator, one of the world’s foremost technology startup incubators.
OpenAI’s main mission is to ensure that AI is developed safely, responsibly and transparently, a commitment it applies to the development of all its products. Despite the current circumstances, the company remains committed to producing products grounded in values of safety, transparency and ethical development.