Risky AI Apps Put Companies’ Data Security at Risk: Calls for Striking a Balance
The rapid advancement of artificial intelligence (AI) technology has undoubtedly transformed the way we work. However, it has also raised concerns about data security and the risks associated with using AI applications. As the popularity of AI apps continues to grow, companies are facing the challenge of balancing innovation with safeguarding their valuable data.
One of the most well-known AI apps, ChatGPT, has garnered significant attention since its introduction last year, and its disruptive capabilities have left IT departments scrambling to adapt. But in assessing the risks of using AI, it is crucial to look beyond ChatGPT itself. There are currently around 9,600 generative AI apps available, with roughly 1,000 new ones entering the market each month. That ecosystem is set to expand further with the anticipated launch of OpenAI’s GPT Store. Notably, many software-as-a-service (SaaS) apps that enterprises already use are adding AI features of their own, each with its own data handling policies.
The primary concern lies with these third-party AI apps, where security standards often fall short. Many lack clear data policies, leaving companies unsure where their data will be stored and how it will be retained, secured, and used. For example, some apps aimed at the accounting profession encourage users to upload sensitive corporate files to generate annual reports. Without adequate safeguards, firms risk breaching data regulations such as GDPR, HIPAA, and PCI DSS. Cybercriminals, or even nation-states, may also exploit these weaknesses to obtain companies’ trade secrets.
Given this landscape, it is alarming that around 74% of companies currently have no established AI policy. Even tech-savvy giants like Apple have taken the precaution of blocking ChatGPT and GitHub Copilot, an AI assistant that helps developers write code. Blocking AI apps outright may seem reasonable when an organization lacks the understanding and resources to address these risks, but it is unsustainable given the productivity and innovation gains AI offers. It also breeds shadow AI: employees who find AI apps useful may disregard established policies and use them without authorization.
To strike a balance, organizations must develop robust AI policies that permit secure, beneficial AI apps while blocking risky ones, particularly for teams handling sensitive data. Implementing such policies is not trivial, given the sheer number of apps available and the diverse job roles they target, but automating the vetting process in collaboration with security providers could be a long-term solution; a minimal sketch of what such an automated check might look like follows the list below. In the meantime, companies should adhere to best practices, including:
1. Evaluating AI apps for security standards before adoption.
2. Implementing policies that specify app usage and data handling guidelines, particularly for sensitive data.
3. Offering training and awareness programs to educate employees about the risks and benefits of AI usage.
4. Regularly monitoring and updating AI policies to align with evolving industry standards and emerging threats.
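To make the idea of automated vetting more concrete, the sketch below shows one way a security team might encode an allow/review/block decision for a requested AI app as code. It is a minimal illustration under assumed criteria: the AIAppProfile fields, the certification names, and the decision thresholds are all hypothetical choices for this example, not a standard or any particular vendor’s tooling.

```python
# Hypothetical sketch: a minimal allow/review/block check an organization might
# run before approving an AI app for a given team. All field names, criteria,
# and thresholds below are illustrative assumptions, not a real vendor API.

from dataclasses import dataclass, field


@dataclass
class AIAppProfile:
    name: str
    has_published_data_policy: bool      # vendor documents storage and retention
    retains_customer_data: bool          # inputs kept beyond the session
    trains_on_customer_data: bool        # inputs used to train the vendor's models
    certifications: set = field(default_factory=set)  # e.g. {"SOC 2", "ISO 27001"}


def vet_app(app: AIAppProfile, team_handles_sensitive_data: bool) -> str:
    """Return 'allow', 'review', or 'block' for a requested AI app."""
    if not app.has_published_data_policy:
        return "block"  # no documented data policy means unknown data handling
    if team_handles_sensitive_data:
        # Stricter bar for teams touching regulated data (GDPR, HIPAA, PCI DSS)
        if app.trains_on_customer_data or app.retains_customer_data:
            return "block"
        if "SOC 2" not in app.certifications:
            return "review"  # escalate to the security team for manual vetting
    return "allow"


if __name__ == "__main__":
    example_assistant = AIAppProfile(
        name="example-code-assistant",
        has_published_data_policy=True,
        retains_customer_data=False,
        trains_on_customer_data=False,
        certifications={"SOC 2"},
    )
    print(vet_app(example_assistant, team_handles_sensitive_data=True))  # "allow"
```

In practice, such a check would draw its app profiles from a security provider’s catalog rather than hand-written records, but the basic shape of the policy, stricter rules for teams that handle sensitive data and an escalation path for borderline cases, stays the same.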
While AI presents opportunities for advancement, it is crucial to address the associated risks effectively. Governments, regulators, organizations, and AI researchers must collaborate to catch up with the rapidly advancing AI landscape. Striking the right balance between leveraging AI advancements and protecting data security is key for companies to thrive in the digital age.