Companies worldwide are growing increasingly concerned about potential intellectual property leaks as ChatGPT, a popular chatbot powered by generative AI, gains traction. While many individuals are finding practical uses for ChatGPT in their day-to-day work, such as drafting emails, summarizing documents, and conducting preliminary research, security firms and employers are raising concerns about the risks it poses.
According to an online poll on artificial intelligence (AI), 28% of respondents said they regularly use ChatGPT at work, yet only 22% said their employers explicitly allow the use of such external tools. This suggests that a significant portion of the workforce is using ChatGPT without explicit authorization.
The poll, which included 2,625 adults across the United States, also revealed that 10% of the participants mentioned their bosses explicitly prohibit the use of external AI tools. Approximately 25% of the respondents were unsure whether their companies permit the use of this technology or not.
The concerns surrounding ChatGPT mainly revolve around the potential for leaks of intellectual property and business strategy. Human reviewers at the companies behind these services may read the generated chats, and researchers have found that similar AI programs can reproduce data absorbed during training, putting proprietary information at risk.
Many companies and employees do not fully understand how generative AI services use their data, which makes it critical for businesses to assess the risks. Ben King, VP of customer trust at corporate security firm Okta, emphasized the need for businesses to evaluate these risks, noting that because many AI services are free, users have no contract with the provider.
OpenAI, the company behind ChatGPT, has assured its corporate partners that their data will not be used to further train the chatbot without explicit permission. However, concerns persist as companies continue to grapple with the potential dangers of employees using ChatGPT without proper oversight.
Samsung Electronics, for instance, recently banned its global staff from using ChatGPT and similar AI tools after an employee uploaded sensitive code to the platform. Other companies, like Coca-Cola and Tate & Lyle, are cautiously embracing ChatGPT and similar platforms while putting security measures in place.
Whether companies permit ChatGPT in restricted ways or ban it outright, employees are still finding ways to use the technology. Some employees at Tinder, for example, admitted to using ChatGPT for harmless tasks such as drafting emails and conducting general research, despite the company's official prohibition.
It is crucial for companies to strike a balance between embracing AI’s potential benefits and safeguarding intellectual property. As the use of generative AI continues to grow, organizations must establish proper guidelines and protocols to ensure the responsible and secure utilization of these tools.
In conclusion, while ChatGPT's rapid rise in popularity offers real advantages in the workplace, the concerns about intellectual property leaks are legitimate. Businesses that address them with clear guidelines will be best positioned to use generative AI both responsibly and safely.