Samsung, one of the world’s largest technology companies, has cracked down on the use of generative AI platforms by its employees. According to a memo reviewed by Bloomberg, the company has implemented a policy restricting the use of AI tools such as OpenAI’s ChatGPT, Google Bard and Microsoft’s Bing, citing the risk that data transmitted to external servers could be accessed by other users.
An internal survey conducted by Samsung found that 65 per cent of employees were concerned about the security of generative AI services. Those concerns likely stem from an earlier incident in which employees accidentally uploaded internal source code and corporate secrets to ChatGPT.
In response, Samsung has implemented a policy to protect its information and data. Employees have been instructed not to share any company-related information or intellectual property on these AI platforms, where it could be leaked. In the meantime, Samsung is developing in-house AI tools for software development, translation and summarization.
The memo states that failure to comply with the new policy could result in disciplinary action, up to and including termination of employment. While generative AI can increase efficiency, companies must weigh those gains against the security risks such tools introduce. With these measures, Samsung has signalled that it values data security highly and has acted to protect its data and personnel.