ChatGPT Creates Insecure Code Without Disclosure

ChatGPT, OpenAI’s large language model chatbot, has attracted considerable attention for its ability to generate code: it is not limited to conversation, and can produce entire programs. The security risk this poses, however, is perhaps understated. A team of four researchers from the Université du Québec conducted a study to evaluate the security of code produced by ChatGPT.

The four researchers devised 21 programming tasks chosen to expose various classes of security vulnerability, including memory corruption, denial of service, and flaws related to deserialization and cryptography. The tasks spanned five languages: C (3 programs), C++ (11), Python (3), HTML (1) and Java (3). Only five of the 21 programs ChatGPT generated were secure on the first attempt.

When prompted to make the code secure, ChatGPT offered only vague advice, such as recommending that the program accept valid inputs, without explaining to the user how to actually remediate the flaws. Moreover, the model would not reliably produce secure code even when asked to explain a vulnerability or to write a program free of it.
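"Use valid inputs" is exactly the kind of vague advice the researchers criticized; concrete remediation looks quite different. As a hypothetical sketch (the function name and paths are illustrative, not from the study), here is what actual input validation against a path-traversal flaw can look like in Python:

```python
import os.path


def is_safe_path(base_dir: str, filename: str) -> bool:
    # Join the user-supplied filename onto the base directory, then
    # normalize away any ".." segments before checking containment.
    base = os.path.normpath(base_dir)
    target = os.path.normpath(os.path.join(base, filename))
    # The path is safe only if it still resolves inside base_dir.
    return os.path.commonpath([base, target]) == base


print(is_safe_path("/srv/files", "report.txt"))     # True
print(is_safe_path("/srv/files", "../etc/passwd"))  # False
```

The point of the check is that validation must happen after normalization: comparing the raw string would let `"../etc/passwd"` slip through, since it does not start with a suspicious prefix until the path is resolved.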

Raphaël Khoury, a professor of computer science and engineering at the Université du Québec en Outaouais, argued that the model is problematic because of its lack of awareness: ChatGPT currently fails to recognize and explain the risks associated with the code it generates, and such faulty code can have catastrophic consequences. Khoury also noted that students and professional programmers alike are already using ChatGPT’s code in real projects, and warned of the dangers that lie ahead.

OpenAI, co-founded in 2015 by Sam Altman and Elon Musk (who later left the company) and now led by Altman, released ChatGPT in November 2022. The tool leverages advances in natural language processing and machine learning to offer a conversational coding experience, and it is already being used in projects ranging from application development to cybersecurity. Despite the hype surrounding the product, its security shortcomings have led computer science professionals to caution that blindly accepting generated code is a risky affair.


We hope this article raises awareness of ChatGPT’s limitations and encourages developers to thoroughly review generated code for security flaws before using it in production.

