ChatGPT has been greatly hyped lately and, while it is still in its early stages, some of that hype may be warranted. Type in any topic and request a blog post on it in any style, and you will often receive a surprisingly good read. Those familiar with the subject may spot flaws, but most readers without that knowledge will take it at face value. This means that automated fake news can easily be created and disseminated.
CNET reported that it had been publishing AI-generated news stories, reviewed by a human team, for a trial period. However, the trial was temporarily suspended once the automated content was discovered. Many contracts now forbid the use of ChatGPT or similar programs to produce content.
In terms of cyber security, ChatGPT can easily be misused to generate malicious elements such as hacking tools, phishing campaigns, and chatbots that imitate young women on dating sites. According to researchers at Check Point, a malware distributor using the tool produced code that could steal web files and install a malicious backdoor on a PC. Researchers at CyberArk have also explored how ChatGPT could be used to create polymorphic malware that resists antivirus detection.
On a more positive note, ChatGPT can also be used to detect and prevent cyber crime: it can scan conversations for suspicious activity and provide guidance for responding to cyber incidents.
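The scanning idea can be sketched without a language model at all. The toy example below uses a simple keyword heuristic as a stand-in for the classification step a model like ChatGPT would perform; the phrase list and the flagging logic are illustrative assumptions, not a production rule set.

```python
# Minimal sketch of automated conversation scanning. In practice each
# message would be classified by a language model; here a keyword
# heuristic stands in for that step. The phrase list is invented for
# illustration only.

SUSPICIOUS_PHRASES = [
    "wire transfer",
    "gift card",
    "verify your account",
    "urgent payment",
    "send your password",
]

def flag_suspicious(messages):
    """Return the messages that contain any suspicious phrase."""
    flagged = []
    for msg in messages:
        lowered = msg.lower()
        if any(phrase in lowered for phrase in SUSPICIOUS_PHRASES):
            flagged.append(msg)
    return flagged

conversation = [
    "Hi, how was your weekend?",
    "Please verify your account by replying with your login.",
    "Can you send an urgent payment to this new supplier?",
]

print(flag_suspicious(conversation))
```

A real deployment would replace the keyword check with a model call and route flagged messages to a human reviewer rather than acting on them automatically.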
Gemserv’s Ian Hirst sees great potential for ChatGPT in cyber security, arguing that it can be a powerful tool for detecting and preventing cyber crime.
OpenAI and ChatGPT are merely tools; the ethical responsibility lies with the people who apply them. Some criminals have already bypassed the safeguards intended to prevent abuses such as fake news. It is important to understand that bad actors will use good tools for malicious ends, and it is up to users to ensure they follow ethical practices.