Major tech companies such as Microsoft and Alphabet Inc. are investing in new generative artificial intelligence (AI) tools to improve their chatbots and search engines. While these tools have the potential to be very useful, they could also leave companies vulnerable to data leaks and lawsuits if confidential customer information or trade secrets are exposed. This is why venture capital firm Team8 is warning companies about the dangers of using AI tools like ChatGPT.
The Team8 report noted that there is a “high risk” of a data breach or other malicious act if proper safeguards are not in place when using these AI tools. The risk increases when AI tools are given access to sensitive private or customer information. Such incidents could leave companies exposed to lawsuits. Additionally, confidential information fed into the chatbots could be used by AI companies in the future, making the data difficult to erase.
To strengthen security, Team8 is urging companies to conduct a security assessment and to introduce proper safeguards. The report adds that the risk of discrimination claims and reputational damage should not be underestimated. However, it debunks recent rumors that large language models “see” and learn from user queries as they arrive, noting that these models cannot currently update themselves in real time.
Microsoft Corp. Vice President Ann Johnson was involved in drafting the Team8 report, and Microsoft has invested billions in OpenAI, the developer of ChatGPT. The report has also been endorsed by Michael Rogers, the former head of the US National Security Agency and US Cyber Command, and dozens of US chief information security officers have shown their support as well.
As the world moves toward a digital-first model in which confidential data is exchanged widely, it is crucial that companies be aware of the dangers of tools like ChatGPT and take the right steps to protect their corporate secrets. Otherwise, the risk that AI chatbots will be maliciously exploited to steal data remains high.