Risky AI Apps Put Corporate Data Security at Risk: The Case for Striking a Balance

The rapid advancement of artificial intelligence (AI) has undoubtedly transformed the way we work, but it has also raised serious concerns about data security. As AI apps continue to grow in popularity, companies face the challenge of balancing innovation with safeguarding their valuable data.

One of the most well-known AI apps, ChatGPT, has garnered significant attention since its introduction last year, and its disruptive capabilities have left IT departments scrambling to adapt. But when weighing the risks of AI, it is crucial to look beyond ChatGPT itself. There are currently approximately 9,600 generative AI apps available, with new ones entering the market at a rate of around 1,000 per month, and the ecosystem is about to expand further with the anticipated launch of OpenAI's GPT Store. Notably, many software-as-a-service (SaaS) apps already in use by enterprises are also incorporating AI features, each with its own data handling policies.

The primary concern lies with third-party AI apps, where security standards often fall short. Many lack clear data policies, leaving companies unsure where their data will be stored and how it will be retained, secured, and used. For example, certain apps catering to the accounting profession encourage the upload of sensitive corporate files for generating annual reports. Without adequate safeguards, firms risk breaching data regulations such as GDPR, HIPAA, and PCI DSS, and cybercriminals or even nation-states may exploit these vulnerabilities to obtain companies' trade secrets.


Given this landscape, it is alarming that around 74% of companies do not have an established AI policy. Even tech-savvy giants like Apple have taken the precaution of blocking ChatGPT and GitHub Copilot, an AI assistant that helps developers write code. Blocking AI apps altogether may seem reasonable given the lack of understanding and resources to address these risks, but it is unsustainable in light of the productivity and innovation gains AI offers. Blanket bans also invite shadow AI, where employees disregard established policies and use unauthorized AI apps to work more efficiently.

To strike a balance, organizations must develop robust AI policies that allow the use of secure and beneficial AI apps while blocking access to risky ones, particularly for teams handling sensitive data. Implementing such policies is not easy, given the vast number of apps available and the diverse job roles they target, but automating the vetting process in collaboration with security providers could be a long-term solution (a minimal sketch of what such a check might look like follows the list below). In the meantime, companies should adhere to best practices, including:

1. Evaluating AI apps for security standards before adoption.
2. Implementing policies that specify app usage and data handling guidelines, particularly for sensitive data.
3. Offering training and awareness programs to educate employees about the risks and benefits of AI usage.
4. Regularly monitoring and updating AI policies to align with evolving industry standards and emerging threats.
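
To make the vetting step concrete, here is a minimal sketch of what an automated policy check could look like. Everything in it is hypothetical: the `AppProfile` schema, the `vet_app` function, and the criteria are illustrations of the approach, not a real product, vendor API, or compliance standard.

```python
# A minimal sketch of an automated AI-app vetting check. All field
# names, criteria, and the AppProfile type are hypothetical.
from dataclasses import dataclass, field


@dataclass
class AppProfile:
    """Self-reported security posture of a third-party AI app (illustrative schema)."""
    name: str
    has_retention_policy: bool      # vendor documents how long uploads are kept
    encrypts_data_at_rest: bool     # uploaded files are stored encrypted
    trains_on_customer_data: bool   # vendor may reuse uploads for model training
    certifications: set[str] = field(default_factory=set)  # e.g. {"SOC 2", "ISO 27001"}


# Illustrative policy: teams handling sensitive data face stricter requirements.
REQUIRED_CERTS_FOR_SENSITIVE_DATA = {"SOC 2"}


def vet_app(app: AppProfile, handles_sensitive_data: bool) -> tuple[bool, list[str]]:
    """Return (allowed, reasons_to_block) for one app under the example policy."""
    reasons = []
    if not app.has_retention_policy:
        reasons.append("no documented data retention policy")
    if app.trains_on_customer_data:
        reasons.append("vendor may train on uploaded data")
    if handles_sensitive_data:
        if not app.encrypts_data_at_rest:
            reasons.append("no encryption at rest")
        if not REQUIRED_CERTS_FOR_SENSITIVE_DATA <= app.certifications:
            reasons.append("missing required certifications")
    return (not reasons, reasons)


if __name__ == "__main__":
    # Hypothetical accounting app of the kind described above.
    app = AppProfile(
        name="ExampleReportGen",
        has_retention_policy=True,
        encrypts_data_at_rest=True,
        trains_on_customer_data=True,
        certifications={"SOC 2"},
    )
    allowed, reasons = vet_app(app, handles_sensitive_data=True)
    print(f"{app.name}: {'allow' if allowed else 'block'}", reasons)
```

In practice, the profile data would come from vendor questionnaires or a security provider's app catalog rather than being hard-coded, which is why automating the pipeline in partnership with such providers is the more scalable path.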

While AI presents opportunities for advancement, it is crucial to address the associated risks effectively. Governments, regulators, organizations, and AI researchers must collaborate to catch up with the rapidly advancing AI landscape. Striking the right balance between leveraging AI advancements and protecting data security is key for companies to thrive in the digital age.


Frequently Asked Questions (FAQs)

What is the main concern regarding the use of AI apps in companies?

The main concern is data security. Many AI apps, especially third-party ones, lack clear data handling policies, leaving companies unsure about how their data will be stored, secured, and utilized.

How many generative AI apps are currently available, and at what rate are new ones entering the market?

Currently, there are approximately 9,600 generative AI apps available. New apps are entering the market at a rate of around 1,000 per month.

What are the potential risks associated with using AI apps without proper security measures?

The potential risks include breaching data regulations such as GDPR, HIPAA, and PCI DSS, as well as the possibility of cybercriminals or nation-states exploiting vulnerabilities to obtain companies' trade secrets.

What percentage of companies currently lack an established AI policy?

Approximately 74% of companies currently do not have an established AI policy.

Why have some companies, including Apple, taken the step of blocking the use of certain AI apps?

Companies have taken this precautionary measure due to the lack of understanding and resources to address the risks associated with AI apps. However, this blocking approach is not sustainable in the long run, considering the potential productivity and innovation gains that AI offers.

What is shadow AI?

Shadow AI refers to the use of AI apps by employees without proper authorization or in disregard of established policies. This can lead to additional risks for companies.

How can organizations strike a balance between using AI apps and protecting data security?

Organizations can strike a balance by developing robust AI policies that allow the use of secure and beneficial AI apps while blocking access to risky ones, especially for teams handling sensitive data. Automating the vetting process in collaboration with security providers could be a long-term solution.

What are some best practices for companies to adhere to when using AI apps?

Some best practices include evaluating AI apps for security standards before adoption, implementing policies that specify app usage and data handling guidelines (particularly for sensitive data), offering training and awareness programs to educate employees about AI risks and benefits, and regularly monitoring and updating AI policies to align with industry standards and emerging threats.

How important is collaboration between governments, regulators, organizations, and AI researchers in addressing the risks associated with AI apps?

Collaboration is crucial in catching up with the rapidly advancing AI landscape. By working together, these stakeholders can develop effective strategies and regulations to ensure a balance between leveraging AI advancements and protecting data security.

