Many companies have started banning the chatbot ChatGPT at work over security and privacy risks. While it may seem obvious that uploading work-related information to an online artificial intelligence platform owned by another company is a potential breach, ChatGPT can be a real boon: software engineers find it useful for writing, testing, and debugging code, even though the technology is prone to errors.
Even so, roughly 43% of employees use AI tools such as ChatGPT at work, mostly without telling their boss, and some companies have recognized that the AI cat is already out of the bag. Samsung Electronics recently cracked down on the use of generative AI after an engineer realized a tech company's worst nightmare by pasting sensitive source code into ChatGPT.
The fear is that proprietary or sensitive company information entered into ChatGPT could be unintentionally exposed to other users. OpenAI, the company behind ChatGPT, is still ironing out security issues, which has prompted some companies to develop their own AI platforms as safer alternatives. Amazon and Apple have restricted the use of ChatGPT, while other companies, including banks and law firms, have issued outright bans.
While ChatGPT may yet become a part of office work, many companies currently see more risks than benefits. It is important to tread with caution and weigh the potential consequences before adopting the tool: sharing sensitive information with any online platform can have adverse consequences, and an ounce of prevention is worth a pound of cure.