Hacking ChatGPT’s Defences: How to Create Ransomware at RSA Conference 2023

Hackers may be able to create ransomware using artificial intelligence (AI) technology, according to new insights from the RSA Conference 2023. During the conference earlier this week, Stephen Sims, offensive operations curriculum lead at the SANS Institute, explained how attackers can use cleverly worded requests to get around the safety controls built into AI chatbots.

Sims was initially unsuccessful when he tried to get ChatGPT, a chatbot tool currently at version 4, to generate ransomware. Although the chatbot would at first refuse, it would eventually do the job when tricked into believing the request served a legitimate purpose. For instance, it was willing to write code that encrypts files, navigates file systems and checks Bitcoin wallets, as long as it was assured the code wasn’t meant for malicious purposes.

Though the SANS Institute representatives made it clear that the only reliable defence against such attempts is sound security practice, they also offered additional tips. For instance, Heather Mahalik, the institute’s digital forensics lead, suggested that family members be educated in the basics of cybersecurity and that developers be particularly careful when downloading applications.

Johannes Ullrich, the SANS Institute’s research director, reminded attendees of the threat of supply-chain attacks, which developers should also be aware of. Around the same time, the security company Aqua Security Software reported on the dangers of malicious Visual Studio Code extensions.
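One common supply-chain precaution is to verify what you download before installing it. As a minimal illustrative sketch (not something presented at the conference), the Python snippet below compares a downloaded artifact, such as an editor extension or a build dependency, against a publisher-provided SHA-256 checksum; the file path and expected digest are hypothetical placeholders.

```python
import hashlib
import sys

# Hypothetical publisher-provided checksum; in practice this would be copied
# from the vendor's release page or a signed manifest.
EXPECTED_SHA256 = "replace-with-the-published-sha256-digest"

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in 8 KiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    artifact = sys.argv[1]  # path to the downloaded extension or package
    actual = sha256_of(artifact)
    if actual != EXPECTED_SHA256:
        print(f"Checksum mismatch for {artifact}: refusing to install")
        sys.exit(1)
    print(f"{artifact} matches the published checksum")
```

Package managers such as pip and npm offer similar integrity checks (hash-pinned requirements files and lockfiles), which cover the same idea with less manual work.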

Katie Nickels of the same institute further warned of targeted attack vectors such as search engine optimization (SEO) manipulation and malvertising, examples of which were seen in the recent LastPass hack.

Familiarity with the latest technologies and their potential for abuse is the key takeaway for anyone hoping to stay ahead of the curve. Any approach should be taken with caution and combine preventive and defensive measures: aside from employee awareness training and ad-blocking software, developers should use trusted plugins and maintain an open dialogue with their IT teams.
