Security Risks of Using ChatGPT for Your Organization

ChatGPT is a large language model (LLM) tool that simplifies workflows for software engineers and developers, and it came into the spotlight in 2023. While LLM tools such as ChatGPT can boost productivity, the platform carries security risks that organizations should be aware of, even when they are not using it directly.

GitHub, a platform popular amongst developers with over 100 million users, exposed 10 million secrets in public repositories in 2022 alone, according to GitGuardian's 2023 State of Secrets Sprawl report. In one case, a Toyota contractor published database credentials for a Toyota mobile application in a public GitHub repository, showing that even organizations that do not use GitHub themselves can be affected. Sensitive information is likely to end up in ChatGPT in the same way: even if an organization has not adopted the tool, developers face strong pressure within their community to use it rather than feel left behind.

Cyberhaven has reported blocking requests from its clients' employees to paste data into ChatGPT because of the risk of leaking confidential information, regulated data, or source code. The same GitGuardian report found that leaked OpenAI API keys grew significantly from late 2022 onward, confirming that developers are actively using the tool.
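Leaked API keys of this kind can often be caught with a simple pattern check before code is shared or committed. The sketch below is a minimal illustration, assuming the common `sk-` prefix of OpenAI-style keys; real secret scanners such as GitGuardian's use far broader rule sets and entropy analysis:

```python
import re

# Heuristic pattern for OpenAI-style API keys (assumption: an "sk-"
# prefix followed by a run of at least 20 alphanumeric characters).
OPENAI_KEY_PATTERN = re.compile(r"\bsk-[A-Za-z0-9]{20,}\b")

def find_suspected_keys(text: str) -> list[str]:
    """Return substrings that look like OpenAI API keys."""
    return OPENAI_KEY_PATTERN.findall(text)

# Hypothetical code snippet with a fake, example-only key:
snippet = 'client = OpenAI(api_key="sk-abc123def456ghi789jkl012")'
print(find_suspected_keys(snippet))  # → ['sk-abc123def456ghi789jkl012']
```

Running a check like this in a pre-commit hook, or before pasting code into an LLM prompt, gives a cheap last line of defense against the accidental exposure described above.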

Although ChatGPT is programmed to refuse explicit requests for sensitive information, its "intuitive" answers are easy to trick, as the examples above show. Even worse, the platform offers no end-to-end encryption, strict access control, or access logging for the content users submit, which makes it highly attractive to attackers. Anything entered into the platform, including through personal accounts, is vulnerable, as a complete history of code requests and queries is retained.


With the platform so easy to access, and with the automation bias of trusting AI answers, it is important for organizations to educate developers about ChatGPT, its associated risks, and the limitations of the technology. Additionally, secrets should always be stored securely, with encryption, access control, and logging.

ChatGPT is a powerful tool, but like any tool it needs to be managed and controlled so that it does not become an attack vector against your organization. Educate developers, identify and secure secrets, and treat the platform with a heightened sense of security to protect your company and its data.

