Security Risks of Using ChatGPT for Your Organization

ChatGPT is a large language model (LLM) tool that came into the spotlight in 2023 for simplifying the workflows of software engineers and developers. Although LLM tools such as ChatGPT can boost productivity, the platform carries security risks that organizations should be aware of, even when they do not use it directly.

GitHub, a platform popular among developers with over 100 million users, exposed 10 million secrets in public repositories in 2022 alone, according to GitGuardian's 2023 State of Secrets Sprawl report. In one case, a Toyota contractor published database credentials for a Toyota mobile application in a public GitHub repository, showing that even organizations that do not use GitHub themselves can be affected. Sensitive information is likely to end up in ChatGPT in the same way: even where an organization has not adopted the tool, its developers face strong pressure within the community to use it to avoid feeling left behind.

Cyberhaven has reported blocking requests from its clients' employees to paste data into ChatGPT because of the risk of leaking confidential information, regulated data, or source code. GitGuardian's 2023 State of Secrets Sprawl report likewise recorded significant growth in leaked OpenAI API keys starting in late 2022, confirming that developers are actively using the tool.
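Blocking like this can be approximated with simple pattern matching before a prompt ever leaves the developer's machine. The sketch below is illustrative, not a substitute for a real data-loss-prevention tool: the regexes and pattern names are assumptions chosen to resemble common secret formats, and production scanners use far more comprehensive rule sets.

```python
import re

# Illustrative patterns for common secret formats (assumed, not exhaustive).
SECRET_PATTERNS = {
    "openai_api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def find_secrets(text: str) -> list[str]:
    """Return the names of any secret patterns found in the text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

def safe_to_submit(prompt: str) -> bool:
    """Refuse to forward a prompt to an external LLM if it looks like it contains secrets."""
    hits = find_secrets(prompt)
    if hits:
        print(f"Blocked: prompt appears to contain {', '.join(hits)}")
        return False
    return True
```

A gate like `safe_to_submit` could be wired into an internal proxy or editor plugin so that questionable prompts are stopped before reaching the platform.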

Although ChatGPT is programmed to refuse explicit requests for sensitive information, its guardrails can be tricked, as the examples above show. Worse, the platform retains conversation histories, and organizations have no control over its encryption, access controls, or access logs, which makes it an attractive target for attackers. Anything entered into the platform, including through personal accounts, is at risk, because ChatGPT keeps a complete history of the code requests and queries.


Because the platform is so easy to access, and because users tend to trust AI-generated answers, it is important for organizations to educate developers about ChatGPT, its associated risks, and the limitations of the technology. Additionally, secrets should always be stored securely, with encryption, access control, and audit logging.

ChatGPT is a powerful tool, but like any tool it must be managed and controlled so that it does not become an attack vector against your organization. Educate developers, identify and secure secrets, and manage use of the platform with a heightened sense of security to protect your company and its data.

