OpenAI recently announced a change to the data settings of ChatGPT, its general-purpose generative AI chatbot. The updated policy aims to protect user privacy: conversations started with chat history disabled will not be used to train or improve ChatGPT. Instead, those conversations will be retained for 30 days and then deleted from OpenAI's records.
Michael Bennett, director of the education curriculum and business lead for responsible AI at Northeastern University, believes OpenAI's decision will be welcomed by organizations that had been hesitant to adopt ChatGPT over concerns about the misuse of their data. He also suggested the privacy move may be a response to Italy's temporary ban on ChatGPT over data protection concerns.
Gartner analyst Bern Elliot argues that OpenAI needs to go beyond announcing privacy controls and implement safety procedures and audits. He also reminds users to stay cautious, because the AI-based chatbot still has its quirks, such as producing incorrect or nonsensical information. For organizations seeking an enterprise-grade offering, Elliot suggested that Microsoft's version of ChatGPT may be the better fit.
OpenAI, backed by a reported $10 billion investment from Microsoft, has taken a step toward addressing customer privacy concerns. Nevertheless, it still needs to communicate the risks of using ChatGPT more clearly to its users. Likewise, organizations should remain mindful of the potential risks of using AI chatbots and of the need to properly audit their own systems.