ChatGPT Creates Insecure Code Without Disclosure

ChatGPT, OpenAI’s large language model chatbot, has attracted considerable attention for its ability to generate code: it is not limited to conversation, but can produce entire programs. The security risk this poses, however, may be understated. Four researchers from the Université du Québec ran a test to evaluate the security of code written by ChatGPT.

The researchers devised 21 programming tasks designed to expose various classes of security vulnerability, including memory corruption, denial of service, and flaws related to deserialization and cryptography. The tasks covered C (3), C++ (11), Python (3), HTML (1), and Java (3). Only a minority of the programs ChatGPT generated met even a minimal security standard on the first attempt.
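To make one of these vulnerability classes concrete, here is a minimal sketch of an insecure-deserialization pattern in Python. This example is illustrative only and is not taken from the study; the function names are hypothetical. Unpickling untrusted bytes can execute arbitrary code, whereas a data-only format such as JSON cannot.

```python
import json
import pickle

def load_profile_unsafe(blob: bytes):
    # Vulnerable pattern: pickle payloads can encode constructor calls,
    # so unpickling attacker-controlled bytes may run arbitrary code.
    return pickle.loads(blob)

def load_profile_safe(blob: bytes):
    # Safer pattern: JSON parsing only builds plain data structures
    # (dicts, lists, strings, numbers) and cannot trigger code execution.
    return json.loads(blob.decode("utf-8"))

profile = load_profile_safe(b'{"name": "alice", "admin": false}')
```

The safe variant accepts only structured data, which is why security guidance generally steers deserialization of untrusted input toward formats like JSON.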

When prompted to make the code secure, ChatGPT offered only vague advice, such as recommending the use of valid inputs, without explaining the actual remediation process to the user. Furthermore, the model frequently failed to provide secure code even when prompted to explain a vulnerability or to produce a program free of it.
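The gap between "use valid inputs" and real remediation can be shown with a short sketch. This example is hypothetical and not drawn from the study's tasks: it contrasts an SQL query built by string concatenation, which an attacker can subvert, with a parameterized query, which is the concrete fix that vague advice omits.

```python
import sqlite3

# In-memory database with one sample row for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name: str):
    # Vulnerable: input like "' OR '1'='1" rewrites the query logic.
    query = "SELECT name FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Remediation: a parameterized query treats the input strictly as data.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()
```

Calling `find_user_unsafe("' OR '1'='1")` returns every row despite the bogus name, while the parameterized version returns nothing for the same input; spelling out that difference is the kind of remediation detail the researchers found missing.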

Raphaël Khoury, a professor of computer science and engineering at the Université du Québec en Outaouais, emphasized that the model's lack of awareness makes it problematic: ChatGPT currently fails to recognize and explain the risks associated with the code it generates, and such flawed code can have serious consequences. Khoury also noted that students and programmers alike already use ChatGPT's code in real projects, and warned of the dangers that lie ahead.

OpenAI, the company led by CEO Sam Altman (Elon Musk was a co-founder but left its board in 2018), released ChatGPT in November 2022. The tool leverages advances in natural language processing and machine learning to offer a conversational coding experience, and it is already being used in a range of projects, including programming and cybersecurity work. Despite the hype surrounding the product, its security shortcomings have driven home for computer science professionals why blindly accepting generated code is a risky affair.


We hope this article raises awareness of ChatGPT’s limitations and encourages developers to thoroughly review generated code for security issues before using it in production.

