ChatGPT carries more risk than many would expect. Recent research has shown that up to 6.5% of employees have pasted company data into ChatGPT and that 3.1% have shared sensitive data with the program. OpenAI’s ChatGPT is an artificial intelligence (AI) system, trained on large volumes of text from the internet, that produces essays, presentations, articles, and computer code in response to a carefully crafted prompt. While the program’s output may be accurate and helpful, the input that users provide is what should be closely monitored.
There is a real risk of confidential information being shared with ChatGPT. Users may submit sensitive data while refining their queries over the course of a conversation, and that data can be used to improve the system’s AI models. Additionally, OpenAI’s data usage policies state that information entered may not stay within ChatGPT alone and may be shared with other services. For these reasons, OpenAI advises users not to share any sensitive information in conversations and warns that specific prompts cannot be deleted from the conversation history.
In response to this risk, companies such as Amazon, Walmart, Accenture, Verizon, JPMorgan Chase, and other financial institutions have restricted their employees from using ChatGPT. The National Association of Counties’ CIO, Rita Reynolds, has advised local governments to create an AI policy that addresses transparency, fairness and bias, privacy, and informed consent. Staff should be trained to use the program responsibly, and IT personnel should investigate deploying an in-house, enterprise version of ChatGPT through the Azure OpenAI Service.
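As one concrete example of the kind of safeguard such a policy might pair with staff training, an organization could screen text for obvious sensitive patterns before it is ever submitted to an external AI service. The sketch below is purely illustrative and not from the source: the pattern list, labels, and function name are assumptions, and any real deployment would need a policy-driven, far more thorough set of rules.

```python
import re

# Illustrative patterns only; a real data-loss-prevention policy would
# define its own, much broader list (account numbers, names, keys, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each match of a known sensitive pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

A filter like this might run in a browser extension or proxy so that prompts are scrubbed before leaving the organization’s network; it reduces accidental leaks but cannot catch context-dependent secrets, which is why training and policy remain essential.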
OpenAI is a research organization co-founded by Elon Musk, Sam Altman, and other renowned tech entrepreneurs. Its stated mission is to develop trustworthy artificial general intelligence, in part through open source software. The organization describes itself as committed to creating a healthy and prosperous world, and its research and development continue to advance the field of artificial intelligence. OpenAI is also focused on developing safety measures for artificial general intelligence to help ensure it is used wisely and responsibly.