ChatGPT's Mutating Malware Evades Detection by EDR

Malware generated with ChatGPT, the large language model (LLM), can evade endpoint detection and response (EDR) systems, according to cybersecurity researchers. Because LLMs can rewrite code on demand, attackers can query ChatGPT's API so that each call returns a fresh, mutated variant of a malicious payload, leaving detection tools without a stable signature to match.

ChatGPT's content filters are designed to refuse prompts that request harmful output. Researchers have nonetheless bypassed those filters through prompt engineering: rewording the input prompts until the model returns the desired output. Earlier this year, researchers combined prompt engineering with runtime queries to ChatGPT's API to build BlackMamba, a proof-of-concept polymorphic keylogger payload.