During the week ending March 4, 2023, a worrying number of incidents were reported in which employees at businesses around the world entered sensitive corporate information into ChatGPT. Per 100,000 employees, there were 199 reported incidents of confidential data being pasted into the tool, 173 incidents involving customer data, and 159 involving source code, putting that material at risk of being absorbed into the model's general knowledge.
ChatGPT is an AI-powered chatbot developed by OpenAI, the AI research company founded in 2015 and led by CEO Sam Altman. OpenAI has worked to develop cutting-edge AI assistants that enable smoother and more efficient customer service. ChatGPT stands out from other technology in its class through its use of large language models trained with deep learning, enabling it to interpret natural-language messages and respond more accurately than earlier chatbots.
ChatGPT has now been adopted across a range of business functions, including customer service, back-office work, finance, and accounting, and it is increasingly being used for tasks that involve sensitive corporate information. As the incidents described above show, entering confidential corporate data into an AI chatbot such as ChatGPT can lead to extremely costly accidents. It is therefore essential to enforce strict internal policies governing ChatGPT use, in order to avoid unnecessary risks and security issues.
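One practical complement to such policies is to redact obviously sensitive strings before any text leaves the organization. The sketch below is purely illustrative: the patterns, labels, and `redact` helper are assumptions for demonstration, not part of any specific product or vendor policy.

```python
import re

# Illustrative examples of patterns an organization might treat as
# sensitive; real policies would define their own (assumed, not from
# any specific vendor).
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each match of a sensitive pattern with a placeholder tag."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact jane.doe@example.com, key sk-abcdef1234567890XYZ"
print(redact(prompt))
# → Contact [EMAIL], key [API_KEY]
```

A filter like this could run in a proxy or browser extension between employees and the chatbot, so confidential values never reach the external service; it is a mitigation, not a substitute for the policies themselves.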