OpenAI, one of the most prominent innovators in artificial intelligence, recently released its AI chatbot, ChatGPT, to the public. As the company continues to refine the AI tools behind the user experience, the chatbot’s conversation data is also closely monitored. OpenAI shows every user who starts a conversation with ChatGPT a pop-up alert reminding them that their conversations may be reviewed by “AI trainers” and should not include any sensitive information.
With this in mind, OpenAI now allows users to disable their ChatGPT conversation history, giving them greater control over their conversation data. Conversations held in this higher-privacy mode are retained for 30 days and then deleted. Both individual users and companies handling confidential information should be aware of the risks of using ChatGPT and develop their own policies for how it should be used.
Rudina Seseri, founder and managing partner of Glasswing Ventures, an AI-focused investment firm, advises all ChatGPT users to avoid entering any information they would not want shared publicly. Users should be aware that these tools can collect large amounts of personal information as well as full conversation histories.
For greater safety and trust, Microsoft Corporation also provides a ‘Privacy Dashboard’ for users of its new Bing search bot. The dashboard allows users to “view, export, and delete stored conversation history.” Microsoft also applies safeguards such as encrypting data and retaining customer data only for as long as necessary.
Managing privacy and setting the right policies are critical considerations when using ChatGPT and other AI tools, ensuring that data remains secure and is not left vulnerable to exploitation.