OpenAI, a leading artificial intelligence company, has taken a major step toward protecting user data by introducing new privacy and data controls in its popular ChatGPT service. Users can now disable chat history, which prevents their conversations from being used to improve the AI model and keeps them from appearing in the sidebar. Conversations started with history disabled are retained for 30 days and reviewed only when needed to monitor for abuse, after which they are permanently deleted.
The company also plans to launch a ChatGPT Business subscription for enterprise customers who want more control over their data. This subscription will follow the OpenAI API's data usage policies, meaning that users' data will not be used to train the AI model unless they explicitly opt in. In addition, a new Export option in settings lets users easily export their ChatGPT data and see what is being stored.
These privacy features are the latest in a series of data protection efforts aimed at giving users greater control and transparency over their data. With the ChatGPT Business subscription and its adoption of the API's data usage policies, individuals and businesses alike can have greater confidence that their conversations are stored securely and used only as they specify. The move signals OpenAI's commitment to user privacy and to providing the tools people need to protect their data.