Generative AI Providers Struggle to Win Business Leaders’ Confidence on Data Privacy
Generative AI providers are struggling to win the confidence of business leaders on data privacy. According to a new report from Gartner, over a third (34%) of organizations adopting generative AI are also investing in AI application security tools to mitigate the risk of data leaks. The report also found that organizations are planning investments in privacy-enhancing technologies (PETs), AI model operationalization (ModelOps), and model monitoring to strengthen data privacy.
While business leaders are enthusiastic about generative AI, many are concerned about inaccurate and harmful outputs, as well as the potential for proprietary information to leak through public AI services. Of those surveyed, 57% expressed concern about sensitive data leaking through AI-generated code, while 58% were particularly worried about incorrect or biased model outputs.
Avivah Litan, a distinguished VP analyst at Gartner, noted that organizations are worried about the data privacy risks associated with generative AI. They find it difficult to trust public AI services such as Azure OpenAI and Google with their data, since there is no way to verify whether that data is being used or shared appropriately. Litan suggested that organizations could run AI models on-premise to ensure data privacy, but acknowledged that most companies lack the resources to do so.
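To make the on-premise option concrete, here is a minimal sketch of running an open model locally with the Hugging Face transformers library, so prompts and completions never leave the organization’s infrastructure. Neither the library nor the model is named in the Gartner report; both are illustrative assumptions, and a production deployment would use a far more capable open model than the small gpt2 used here.

```python
# pip install transformers torch
# Minimal sketch: run a generative model entirely on local hardware.
# The library and model choice are illustrative assumptions, not from
# the Gartner report.
from transformers import pipeline

# Weights are downloaded once; inference then runs on-premise, so
# prompts and completions never reach a third-party API.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Draft a short internal memo about our data retention policy:",
    max_new_tokens=50,
)
print(result[0]["generated_text"])
```

The trade-off Litan points to is visible even in this toy: the organization now owns the hardware, model updates, and monitoring that a hosted service would otherwise provide.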
To address these data privacy concerns, organizations are turning to ModelOps, a governance practice analogous to DevOps that automates the oversight of AI models, checking that they remain effective and conform to safety expectations. Privacy-enhancing technologies (PETs), meanwhile, protect data through encryption so that it is not exposed even while in use. PETs can also encrypt AI and ML models themselves, preventing threat actors from extracting sensitive training data.
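As an illustration of what automated model oversight can look like in practice, here is a minimal drift-monitoring sketch using the population stability index (PSI), a common distribution-drift metric. The metric, thresholds, and data are illustrative assumptions, not drawn from the Gartner report.

```python
# Minimal ModelOps-style sketch: alert when a model's live output
# distribution drifts from its validation-time baseline. The PSI
# metric and 0.25 threshold are common conventions, used here as
# illustrative assumptions.
import numpy as np

def population_stability_index(expected: np.ndarray,
                               observed: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a reference score distribution and live scores.
    Rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 drifted."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    eps = 1e-6  # avoid division by zero / log(0) on empty bins
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + eps
    o_pct = np.histogram(observed, bins=edges)[0] / len(observed) + eps
    return float(np.sum((o_pct - e_pct) * np.log(o_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.5, 0.10, 10_000)  # scores at validation time
live = rng.normal(0.6, 0.15, 10_000)      # scores in production

if population_stability_index(baseline, live) > 0.25:
    print("ALERT: model outputs have drifted; trigger review or retraining")
```

In a real ModelOps pipeline a check like this would run on a schedule against production logs and gate redeployment, rather than print to the console.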
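And as a toy illustration of the “protected while in use” idea behind PETs, the following sketch uses additive secret sharing, a building block of secure multi-party computation: two private values can be summed without any single party ever seeing either input. This particular scheme is an illustrative assumption on my part; real PET deployments rely on hardened MPC protocols or homomorphic encryption libraries.

```python
# Toy PET sketch: additive secret sharing over a prime field.
# Each value is split into random shares; computation happens on the
# shares, so no single party ever sees a raw input.
import secrets

PRIME = 2**61 - 1  # a Mersenne prime large enough for toy values

def share(value: int, n_parties: int = 3) -> list[int]:
    """Split `value` into n additive shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % PRIME

# Two private inputs, e.g. figures from two departments.
a_shares = share(70_000)
b_shares = share(55_000)

# Each party adds its own pair of shares locally; the intermediate
# values reveal nothing about either input on their own.
sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]

print(reconstruct(sum_shares))  # 125000: the total, with neither input exposed
```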
The need for data privacy in generative AI has been underscored by incidents such as Apple banning employees from using ChatGPT and GitHub Copilot over fears of sensitive data exposure. Samsung likewise warned its workers against using third-party AI tools after source code was accidentally leaked through ChatGPT.
Gartner’s survey of 150 IT and information security leaders also revealed inconsistency in how organizations assign responsibility for managing generative AI risk. While almost all respondents acknowledged playing some role in risk management, only 24% said they fully owned that responsibility; IT and governance were most often identified as the departments responsible for AI risk management.
In conclusion, as generative AI adoption grows, organizations are sharpening their focus on data privacy and investing in security solutions, PETs, ModelOps, and model monitoring. Despite their enthusiasm for the technology, businesses remain hesitant to trust public AI services with sensitive data, and many are considering alternatives such as on-premise AI models to keep that data under their own control.