Samsung, one of the largest technology companies in the world, recently prohibited its employees from using popular generative AI tools such as ChatGPT, Google Bard, and Bing Chat. The ban, announced to employees on Monday, stems from data-security concerns: Samsung feared that information fed into such AI services could be disclosed to unauthorized parties.
OpenAI’s ChatGPT, along with Microsoft’s Bing chatbot built on the same GPT-4 foundation, has driven a surge in the popularity of generative AI tools for tasks such as writing software, holding conversations, and composing poetry. The trend became a concern for Samsung after engineers accidentally leaked internal source code by uploading it to ChatGPT. In response, the tech giant has temporarily barred generative AI tools from company-owned personal computers, tablets, and mobile devices. However, Samsung assured employees that it is diligently reviewing safety measures so that generative AI tools can eventually be integrated into the company’s workflow in a secure manner.
Security concerns around generative AI systems are not new to the tech industry. In March 2023, hundreds of AI experts and tech executives signed an open letter calling on the leading AI labs to pause development in order to address the “profound risks” such systems pose to human society.
Samsung’s precautionary ban on its employees’ use of generative AI tools thus underscores the importance of strengthening security measures around this technology. It may well set a precedent for other technology companies seeking to ensure that AI is used securely and that sensitive information remains protected.