Sensitive Data Exposed to AI Apps in Enterprises: New Research Reveals Alarming Findings
A recent study by Netskope has revealed concerning findings about the exposure of sensitive data to generative AI apps in enterprises. The research, part of Netskope Threat Labs’ first comprehensive analysis of AI usage in the enterprise and the associated security risks, indicates that within the average large enterprise, sensitive data is being shared with generative AI apps every hour of the working day.
In a study of millions of enterprise users worldwide, Netskope found that enterprises with 10,000 or more users use, on average, five AI apps daily. That usage has grown 22.5% over the past two months, increasing the likelihood that users will inadvertently expose sensitive data.
One of the key findings is that source code accounts for the largest share of sensitive data being exposed to generative AI apps: for every 10,000 enterprise users, approximately 183 incidents of sensitive data being posted to these apps occur each month. The study also revealed that passwords and keys, usually embedded in source code, were being shared on these platforms, alongside regulated data such as financial and healthcare information, personally identifiable information, and intellectual property.
“Users uploading proprietary source code or text containing sensitive data to AI tools that promise to help with programming or writing is inevitable,” said Ray Canzanese, Threat Research Director at Netskope Threat Labs. “Therefore, it is imperative for organizations to place controls around AI to prevent sensitive data leaks. The most effective controls that we see are a combination of DLP (data loss prevention) and interactive user coaching.”
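As a rough illustration of what a DLP-plus-coaching control might look like, the following sketch scans a prompt for common secret formats before it is sent to a generative AI app and warns the user rather than silently blocking the request. It is a minimal, hypothetical example, not Netskope's product; the patterns and function names are assumptions.

```python
import re

# Minimal, illustrative patterns for common secret formats. A real DLP
# engine uses far broader detection (classifiers, exact-match fingerprints,
# regulated-data identifiers), not just a few regexes.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hard-coded password": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any secret patterns found in text a user is
    about to send to a generative AI app."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = 'def connect():\n    password = "hunter2"  # copied from prod config\n'
    findings = scan_prompt(prompt)
    if findings:
        # Interactive user coaching: explain the risk and ask for confirmation
        # rather than silently dropping the request.
        print("Possible sensitive data detected:", ", ".join(findings))
    else:
        print("No obvious secrets found; prompt allowed.")
```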
The study also found that the usage of generative AI apps is rapidly growing. ChatGPT, in particular, has seen a significant increase in daily active users compared to other generative AI apps. At the current growth rate, the number of users accessing AI apps is expected to double within the next seven months. Interestingly, the fastest-growing AI app over the past two months was Google Bard.
To mitigate the security risks associated with the exposure of sensitive data, organizations have taken different approaches. In highly regulated industries like financial services and healthcare, nearly 1 in 5 organizations have implemented a blanket ban on the use of ChatGPT. In the technology sector, this number drops to 1 in 20 organizations. However, blocking access to AI-related content and applications may hinder employee productivity and limit the potential benefits of AI.
“As security leaders, we cannot simply decide to ban applications without impacting user experience and productivity,” said James Robinson, Deputy Chief Information Security Officer at Netskope. “Organizations should focus on evolving their workforce awareness and data policies to meet the needs of employees using AI products productively. There is a good path to safe enablement of generative AI with the right tools and the right mindset.”
To enable the safe adoption of AI apps, organizations should prioritize identifying permissible apps and implementing controls that empower users to use them to their fullest potential while safeguarding sensitive data and protecting against attacks. This approach may include domain filtering, URL filtering, and content inspection. Netskope also announced new offerings from SkopeAI, its suite of artificial intelligence and machine learning (AI/ML) innovations, which apply AI-powered techniques to this kind of protection.
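To make the filtering idea concrete, here is a deliberately simplified sketch of a domain-level policy check for outbound requests to AI apps. The domain lists, policy labels, and function name are illustrative assumptions, not part of Netskope's research or products.

```python
from urllib.parse import urlparse

# Illustrative policy lists; a real deployment would draw on an inventory
# of vetted apps rather than hard-coded examples.
APPROVED_AI_DOMAINS = {"chat.openai.com"}          # permissible apps, used with content inspection
BLOCKED_AI_DOMAINS = {"unvetted-ai-tool.example"}  # apps the organization has chosen to block

def policy_for(url: str) -> str:
    """Return a coarse policy decision for an outbound request to an AI app."""
    host = urlparse(url).hostname or ""
    if host in BLOCKED_AI_DOMAINS:
        return "block"
    if host in APPROVED_AI_DOMAINS:
        # Allowed, but the request body should still pass DLP / content inspection.
        return "allow-with-inspection"
    # Unknown AI domain: coach the user and log the event rather than hard-block.
    return "coach"

print(policy_for("https://chat.openai.com/backend-api/conversation"))
```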
The research conducted by Netskope highlights the pressing need for organizations to prioritize data protection and implement comprehensive controls around AI usage. By striking the right balance between security and innovation, enterprises can embrace the potential of AI while minimizing the risks associated with exposing sensitive data.