OpenAI’s ChatGPT, a widely popular chatbot, has recently been updated to access more current information, extending its knowledge cutoff to April 2023. Despite launching with a limited knowledge base, ChatGPT gained one million users within five days. However, the use of chatbots in the workplace introduces risks that employers must address, particularly where that use is unsanctioned.
Companies around the world, including major tech giants, have already experienced conversational AI leaks, so employers must remain vigilant. These leaks occur when sensitive data provided to chatbots like ChatGPT is inadvertently exposed: information shared with a chatbot is transmitted to third-party servers, where it may be used for training purposes, potentially compromising confidential or personal data.
In numerous incidents, employees have unintentionally disclosed sensitive employer information while using publicly available chatbots for work-related tasks, for example by asking a chatbot to identify code errors, optimize source code, or generate meeting notes from recorded files.
With the rising popularity of chatbots, conversational AI leaks have become more frequent. According to IBM’s Cost of a Data Breach Report 2023, the global average cost of a data breach reached an all-time high of USD4.45 million between March 2022 and March 2023, with South Africa’s average exceeding ZAR50 million. These leaks can have severe financial implications for employers, underscoring the need for proactive measures.
Employers may be tempted to address these risks by banning the use of ChatGPT for specific tasks or queries. However, there are alternatives that allow the benefits of ChatGPT to be leveraged responsibly and Responsible AI to be implemented in the workplace. These include obtaining an enterprise license for ChatGPT or, where no specific laws or regulations govern AI tools, self-regulating their use through policies and training.
Recognizing the need for enhanced privacy, OpenAI has launched an enterprise version of ChatGPT that allows individuals and employers to create their own chatbots without their inputs being used to train the underlying model. The enterprise version promises enterprise-grade security and privacy, reducing the likelihood of conversational AI leaks.
Managing AI risks in the workplace requires a tailored approach based on each employer’s level of AI integration. Allowing unregulated use of ChatGPT, however, may give rise to “shadow IT”: the unsanctioned use of software or tools by employees, creating unapproved IT infrastructure and systems that run parallel to the employer’s official infrastructure. This lack of internal regulation creates security vulnerabilities and exposes the employer to data leaks, intellectual property disclosure, and other risks. Both employers and employees should therefore exercise caution when using generative AI tools, carefully considering how information is sourced and shared.
To help clients navigate the burgeoning field of AI while upholding data privacy, security, and intellectual property rights, ENS’ expert lawyers specializing in Technology, Media, and Telecommunications and in labor law have developed a Responsible AI toolkit. The toolkit assists clients in entering the world of AI swiftly while ensuring responsible and compliant practices.
Employers must prioritize mitigating the risks posed by conversational AI leaks. By being proactive, implementing responsible AI practices, and leveraging tools like ChatGPT’s enterprise version, employers can harness the potential of AI while safeguarding critical information.