Recent reports have warned that companies using generative AI tools such as ChatGPT could be exposing their confidential customer data and trade secrets to data leakage and lawsuits. The warning comes from a report by the Israeli venture firm Team8. The growing use of artificial intelligence (AI) chatbots and writing tools for commercial applications creates significant risk for companies, as hackers might gain access to private corporate materials or forge actions on behalf of the company. Meanwhile, major tech giants such as Microsoft and Alphabet, racing to add AI capabilities to their services, train their models on Web data to significantly enhance their users' search capabilities. If confidential information is fed into these tools, whether through direct user input or through third-party applications with access to user data, erasing it later becomes extremely difficult.
Team8's report categorizes this risk as "high." A common concern is that language models might update themselves on user inputs and leak those inputs to other users; the document emphasizes, however, that current models cannot do this, though the same may not hold for future versions. After examining and assessing these threats, the report issued a list of risks, each rated from low to high, as guidance for safety measures.
Two major names figure in this matter: Microsoft Corp., which has invested billions of dollars in OpenAI, the creator of ChatGPT, to improve its chatbot services, and Alphabet Inc., which is adding similar AI capabilities to its own services. Ann Johnson, a corporate vice president at Microsoft, was involved in the drafting of the report, and Microsoft released a statement encouraging transparent discussion of evolving cyber risks in the AI and security communities. As of writing, the report is endorsed by more than 50 chief information security officers as well as Michael Rogers, the former head of the US National Security Agency and US Cyber Command.