When using tools like ChatGPT, users should always be mindful of their data privacy. Large Language Model (LLM) systems such as ChatGPT can access and analyze information drawn from public sources like Twitter, Reddit, and Facebook. Because this information is digital and easily copied, questions about potential copyright violations also arise. OpenAI, the company behind ChatGPT, has taken measures to protect user data and give users more control over their information.
After logging into their account, users can find the Data Controls option within the Settings menu. This panel lets users choose whether new chats are saved to their history or deleted within 30 days. Saved chats may be used for model training, which aims to improve ChatGPT's performance by analyzing the prompts users enter. Nevertheless, many might be uncomfortable allowing their prompts, such as conversations with virtual therapists, to be analyzed for the AI system's development.
Fortunately, by disabling the history and training features, users can be reasonably confident that the information they enter into ChatGPT will not be saved or made available for later analysis. It is also worth noting OpenAI's stated commitment to preventing user content from being used without permission or outside contractual boundaries.
OpenAI is an artificial intelligence research laboratory that seeks to foster the responsible and ethical use of artificial general intelligence (AGI). Founded in 2015, the company has since attracted numerous top researchers and investors in its mission to ensure that AGI is developed safely and responsibly. Led by tech entrepreneur and philanthropist Sam Altman, OpenAI is widely regarded as a catalyst of the current AI innovation movement, with many of its ideas giving rise to breakthroughs in AI technology.