Companies using generative artificial intelligence (AI) tools such as ChatGPT could be exposing confidential customer information and trade secrets, a report from the Israeli venture firm Team8 has warned. These chatbots, which are becoming increasingly popular through their integration into search engines and other applications, could leave such data very difficult to protect should attackers gain access to it.
Microsoft Corp. and Alphabet Inc. are among the many technology giants racing to add generative AI capabilities to their products to better respond to user queries. According to the report, employees may unwittingly feed confidential and private data into the chatbots, which AI companies could then retain and use in the future. It is also possible that hackers could exploit these chatbots to access sensitive corporate information, or even to take action against a company.
The report also states that companies need to introduce proper safeguards if they want to manage the risk of their data being exposed. It notes that, at present, generative AI language models cannot update themselves in real time, so one person's inputs cannot surface in another person's response; this could change, however, in future iterations of the models.
The report was drafted with input from Microsoft Corporate Vice President Ann Johnson and was endorsed by the former head of the US National Security Agency and US Cyber Command, Michael Rogers. Dozens of US chief information security officers were also listed as contributors.
Microsoft said that it encourages transparent discussion of the cybersecurity risks posed by generative AI tools and their integration into electronic systems. The company is, however, investing heavily in OpenAI, the developer of ChatGPT, which gives some indication of how seriously it is taking the issue.