Companies using AI chatbots and writing tools may be putting confidential customer information and trade secrets at risk, warns a new report from Team8, an Israel-based venture firm. Generative artificial intelligence (AI) tools such as ChatGPT are gaining widespread adoption and can be abused by malicious actors. Once personal or sensitive data is fed into these tools, it can be difficult to erase, raising the risk of data leaks and lawsuits.
To capitalize on this trend, tech giants such as Microsoft and Alphabet have invested heavily in AI, training their models on data collected from the web so they can offer users faster and more comprehensive answers to their queries. But if that data includes private or confidential information, the models may absorb and later expose it, creating threats to security and privacy.
Team8 highlights the need for due diligence before adopting AI chatbots so that confidential data stays protected. Founded in 2014, the cybersecurity-focused venture firm specializes in helping startups build secure and resilient technologies. It currently partners with more than a dozen leading companies in enterprise security and industrial IoT, as well as venture capital groups and Fortune 500 companies. Team8's CEO, Nadav Zafrir, previously held senior military technology and cybersecurity roles, including command of Unit 8200, the Israel Defense Forces' signals-intelligence unit.