Generative AI, a groundbreaking technology with the power to transform industries, is facing concerns related to enterprise security and intellectual property. While the anxieties surrounding this technology are understandable, they may stem more from fear of the future than from actual threats.
Ten years ago, experts predicted that artificial intelligence would eliminate nearly half of all jobs by 2033. We are now almost halfway to that date, and fully autonomous self-driving cars are still not a reality. This highlights how anxieties about new technologies tend to overshadow the actual pace of progress.
Generative AI, particularly OpenAI’s ChatGPT, has emerged seemingly overnight and is creating a stir in the enterprise world. Just six months after its launch, generative AI has already reached a technology inflection point, and many enterprises are adopting it rapidly in hopes of boosting efficiency. In their haste to implement it, however, they are unknowingly putting themselves at risk.
One major concern is accidental data leakage. Employees may copy or input sensitive corporate information into public generative AI apps like ChatGPT, which are built on vast swaths of the internet’s knowledge. Information submitted to these apps may be retained and used to train the underlying models, meaning it could surface in responses to other users, posing a significant security risk.
Another pressing concern relates to copyright infringement and intellectual property. When an enterprise’s own intellectual property is combined with another’s in a publicly accessible third-party service, questions arise over ownership and copyright protection. Generative AI tools do not currently vet their output for bias, attribution, or copyright, leaving enterprises exposed on this front.
Furthermore, generative AI can be exploited by cyber attackers. Today it serves mainly as a content development engine, generating material about previously known attack methods; it cannot yet devise new attack methods on its own. That could change within the next five years or so, making it a concern for the future.
Securing the enterprise against generative AI cybersecurity risks should begin with establishing sound business policies and educating employees on them, building a foundation of knowledge and awareness of the risks the technology carries. In addition, an official regulatory environment with appropriate guardrails, as leaders in the AI field have recently advocated, is essential for ensuring responsible use.
Enterprises must also implement controls that enforce and automate these policies and monitor generative AI use, thereby minimizing risk. Symantec, a longtime player in enterprise security, emphasizes the importance of data protection in safeguarding user and enterprise intellectual property. Solutions like Symantec Data Loss Prevention Cloud can help enterprises adopt generative AI tools by ensuring that the data sent to them is handled in compliance with policy.
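To make the idea of automated policy enforcement concrete, here is a minimal sketch of the kind of outbound prompt filtering such controls can perform. It is an illustration only, not a description of Symantec’s implementation; the detection patterns, function name, and sample data are all hypothetical.

```python
import re

# Hypothetical detection rules for illustration only; a real DLP product uses
# far richer techniques (content fingerprinting, classifiers, managed policies).
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact likely-sensitive spans and report which rules fired."""
    violations = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            violations.append(name)
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt, violations


if __name__ == "__main__":
    raw = ("Summarize this ticket: card 4111 1111 1111 1111, "
           "token sk-abcdefghij0123456789XYZA")
    clean, hits = redact_prompt(raw)
    if hits:
        print(f"Policy rules triggered: {', '.join(hits)}")
    # Only the redacted prompt would be forwarded to the generative AI service.
    print(clean)
```

The gate-before-send pattern shown here is the essential idea: prompts are inspected against policy before they ever leave the enterprise, and violations are blocked, redacted, or logged for review.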
Although generative AI is still in its early stages, it is evident that enterprises that fail to embrace it will be at a severe disadvantage. Security measures must be implemented to fully leverage the potential of this transformative technology. With the right policies, regulations, and controls in place, enterprises can confidently adopt generative AI and unlock its limitless possibilities.
Read more about Generative AI and cybersecurity in the whitepaper by visiting the provided link.
About Alex Au Yeung:
Alex Au Yeung is the Chief Product Officer of the Symantec Enterprise Division at Broadcom. With over 25 years of experience in the software industry, Alex is responsible for driving product strategy, management, and marketing for Symantec.
Note: This article is generated by OpenAI’s language model.