OpenAI, an artificial intelligence company backed by Microsoft, recently announced a new feature that allows users to prevent their conversations from being used to train its large language models. The change comes amid concerns that its popular free chatbot, ChatGPT, could be collecting people’s private conversations and using them for model training without their knowledge or permission.
With this addition, users can turn off the ‘Chat History’ setting in ChatGPT’s Settings section. Conversations started while this feature is disabled will not be used to improve the company’s models. OpenAI will still retain these conversations for 30 days before deleting them from its systems, even when they are excluded from training.
In response to privacy concerns, government agencies in several countries, including Italy, Canada, France, and Spain, have launched investigations into OpenAI. In addition, large corporations such as JP Morgan, Goldman Sachs, Wells Fargo, and Verizon have restricted their employees’ access to the chatbot over fears that confidential information could be leaked through its use.
OpenAI also plans to offer a ChatGPT Business subscription tier in the near future, enabling organizations to maintain greater control over their data. This tier will follow the company’s API data-usage policies, meaning that conversations from Business customers will not be used to train OpenAI’s models.
It remains to be seen what impact the public controversies and OpenAI’s new privacy measures will have on its development goals. One thing is certain: ChatGPT users now have the power to make their own decisions about the safety of their data.