Generative AI Struggles to Win Business Leaders’ Confidence on Data Privacy

Generative AI providers are struggling to win the confidence of business leaders on data privacy. According to a new report from Gartner, over a third (34%) of organizations adopting generative AI are also investing in AI application security solutions to mitigate the risk of data leaks. Respondents also reported planned investments in privacy-enhancing technologies (PETs), AI model operationalization (ModelOps), and model monitoring to strengthen data privacy.

While business leaders are enthusiastic about generative AI, many worry about inaccurate or harmful outputs and about proprietary information leaking through public AI services. Of those surveyed, 57% were concerned about sensitive data being leaked in AI-generated code, while 58% worried about incorrect outputs or biased models.

Avivah Litan, a distinguished VP analyst at Gartner, noted that organizations worry about the data privacy risks that come with generative AI. They find it difficult to trust hosted services such as Azure OpenAI and Google with their data, because there is no way to verify whether that data is being used or shared appropriately. Litan suggested that organizations could run AI models on-premises to keep their data private, but acknowledged that most companies lack the resources for this approach.

To address these data privacy concerns, organizations are turning to ModelOps, a governance practice similar in spirit to DevOps, to automate the oversight of AI models and help ensure they remain effective and conform to safety expectations. They are also adopting privacy-enhancing technologies (PETs), which use encryption to protect data and prevent its exposure even while it is in use. PETs can likewise encrypt AI or ML models themselves, preventing threat actors from recovering sensitive training data.
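
The report does not prescribe a specific implementation, but as a rough illustration of what automated output oversight can look like, here is a minimal Python sketch (all patterns and names are hypothetical, not drawn from Gartner or any particular product) that screens a model response for sensitive-looking strings before it reaches a user:

```python
import re

# Illustrative only: a minimal ModelOps-style output check.
# The patterns below are hypothetical examples, not an exhaustive
# or vendor-specified list.
SENSITIVE_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS-style access key IDs
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # private key blocks
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                   # US SSN-like numbers
]

def screen_output(model_output: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings) for a single model response."""
    findings = [p.pattern for p in SENSITIVE_PATTERNS if p.search(model_output)]
    return (not findings, findings)

if __name__ == "__main__":
    allowed, findings = screen_output("Here is the key: AKIAABCDEFGHIJKLMNOP")
    print(allowed)   # False: the response would be blocked or flagged
    print(findings)  # the pattern(s) that matched
```

In a real ModelOps pipeline, a check like this would run alongside drift monitoring, evaluation suites, and audit logging rather than as a standalone filter.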

The need for data privacy in generative AI has been underscored by incidents such as Apple’s ban on employee use of ChatGPT and GitHub Copilot over fears of sensitive data exposure. Samsung similarly warned its workers against using third-party AI tools after source code was accidentally leaked through ChatGPT.

Gartner’s survey of 150 IT and information security leaders also revealed a lack of consistency in assigning responsibility for managing generative AI risks within organizations. While almost all respondents acknowledged playing some role in risk management, only 24% said they fully owned that responsibility. Respondents most often identified IT and governance as the departments responsible for AI risk management.

In conclusion, as generative AI adoption widens, organizations are focusing on data privacy and investing in security solutions, PETs, ModelOps, and model monitoring. Despite their enthusiasm for the technology, businesses remain hesitant to trust public AI services with sensitive data, and many are weighing alternatives such as on-premise AI models to keep that data private.

Frequently Asked Questions (FAQs)

What are the main concerns business leaders have about data privacy in generative AI?

Business leaders express concerns about inaccurate and harmful outputs, leaked proprietary information, and potential data leaks through public AI services.

How are organizations addressing data privacy concerns in generative AI?

Organizations are turning to ModelOps, a governance method similar to DevOps, to automate the oversight of AI models. They are also using privacy-enhancing technologies (PETs) to protect data through encryption and prevent exposure while in use.

What are PETs in the context of generative AI?

Privacy-enhancing technologies (PETs) are tools and techniques used to protect data privacy. In the context of generative AI, PETs can encrypt data, including AI or ML models, to prevent threat actors from revealing sensitive training data.
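
To make the "use data without exposing it" idea concrete, here is a deliberately simplified Python toy of one PET technique, secure aggregation (a sketch of the concept, not a production protocol; real deployments rely on vetted cryptographic libraries). An aggregator computes the sum of several private values from masked inputs without ever seeing any individual value:

```python
import secrets

MODULUS = 2**61 - 1  # public modulus; all arithmetic is done mod this value

def mask_values(values):
    """Mask each party's value with a random pad; the pads sum to zero."""
    pads = [secrets.randbelow(MODULUS) for _ in values[:-1]]
    pads.append((-sum(pads)) % MODULUS)  # final pad cancels the others
    return [(v + p) % MODULUS for v, p in zip(values, pads)]

private_salaries = [90_000, 110_000, 75_000]  # inputs no party wants to reveal
masked = mask_values(private_salaries)        # what the aggregator receives
print(sum(masked) % MODULUS)                  # 275000 -- the true total
print(masked)                                 # individually meaningless numbers
```

Real protocols generate the pads pairwise between parties so that no single coordinator ever knows them; the point here is only that useful aggregates can be computed from data that stays hidden.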

Can organizations ensure data privacy by running on-premise AI models?

Running on-premise AI models can help ensure data privacy, but it often requires significant resources that many companies lack.

How have incidents like Apple's ban on ChatGPT and GitHub Copilot impacted the need for data privacy in generative AI?

Incidents like Apple's ban on ChatGPT and GitHub Copilot due to fears of sensitive data exposure highlight the importance of data privacy in generative AI. These incidents increase concerns among organizations about the risks associated with using public AI services.

Who is responsible for managing the risks of generative AI within organizations?

Gartner's survey revealed a lack of consistency in assigning responsibility for managing the risks of generative AI. While almost all respondents acknowledged a role in risk management, only 24% said they fully owned this responsibility. IT and governance were the departments most often identified as responsible for AI risk management.

