As cybercrime grows more sophisticated, it is important to understand the risk posed by AI language tools such as ChatGPT. These tools can produce content that is indistinguishable from human writing, and their misuse could raise the risk of social engineering scams by making it harder to tell genuine communications from malicious ones.
Organizations have long relied on employee training to help staff identify potential phishing attacks, including the grammar and spelling irregularities that cast doubt on an email’s authenticity. However, AI language generators are making these techniques obsolete. They can write text with flawless grammar, spelling and usage, which means spotting such errors can no longer be a reliable part of a company’s overall security strategy.
The risks posed by AI language tools are not technical but social. Automated tools can analyze code and determine whether it is malicious, but humans cannot reliably tell whether a written message was produced by ChatGPT or by a person. Companies must therefore seek other methods to identify malicious messages. Unfortunately, there are currently no commercial applications that specifically address this risk, so it is up to organizations to establish strong authentication measures of their own to flag potentially AI-generated content.
The emergence of AI language tools also raises questions about confidential or sensitive messaging services. Messenger applications such as WhatsApp, Signal and Telegram use end-to-end encryption, which prevents security vendors from filtering malicious messages in transit. Organizations will therefore increasingly need to implement filtering technologies on employee devices themselves in order to detect and report phishing attempts.
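As a rough illustration of what such on-device filtering might look like, here is a minimal sketch in Python of a heuristic message check. The keyword list, URL pattern and scoring threshold are illustrative assumptions for this sketch, not a production detector or any vendor's actual product.

```python
import re

# Illustrative heuristics only -- a real on-device filter would draw on
# vetted threat intelligence, not this hypothetical keyword list.
URGENCY_KEYWORDS = {"urgent", "immediately", "verify your account", "password expires"}
URL_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)

def phishing_risk_score(message: str) -> int:
    """Return a crude risk score based on simple social-engineering signals."""
    text = message.lower()
    score = 0
    # Urgency language is a common pressure tactic in phishing messages.
    score += sum(2 for keyword in URGENCY_KEYWORDS if keyword in text)
    # Links in unsolicited messages warrant extra scrutiny.
    score += len(URL_PATTERN.findall(message))
    return score

def should_flag(message: str, threshold: int = 3) -> bool:
    """Flag the message for user review if the score crosses a threshold."""
    return phishing_risk_score(message) >= threshold

if __name__ == "__main__":
    sample = "URGENT: verify your account now at http://example.com/login"
    print(should_flag(sample))  # True -- urgency keywords plus a link
```

Note that because the message is only inspected after decryption on the recipient's device, this approach works even for end-to-end encrypted channels.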
Thanks to the expertise of Forbes Councils members, it is possible to gain a better understanding of the cybercrime risks posed by AI language tools. These tools are a serious concern for organizational security and must be addressed with care. That means shifting away from relying solely on end-user training and toward stronger forms of user verification, such as secret passphrases (a simple sketch of which follows), combined with filtering technologies. With the right security protocols in place, organizations can protect themselves from automated phishing attempts and other malicious activities.
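To make the passphrase idea concrete, here is a minimal sketch, assuming a secret exchanged out of band (in person, never over email). The challenge-response flow and function names here are illustrative assumptions rather than a standard protocol; the sketch simply shows how a sender could prove knowledge of a shared passphrase without ever transmitting it.

```python
import hashlib
import hmac

def passphrase_digest(passphrase: str, challenge: str) -> str:
    """Derive a one-time response from the shared passphrase and a fresh challenge."""
    return hmac.new(passphrase.encode(), challenge.encode(), hashlib.sha256).hexdigest()

def verify_sender(shared_passphrase: str, challenge: str, response: str) -> bool:
    """Check the sender's response in constant time to resist timing attacks."""
    expected = passphrase_digest(shared_passphrase, challenge)
    return hmac.compare_digest(expected, response)

if __name__ == "__main__":
    secret = "blue-harbor-42"          # agreed in person, never sent in a message
    challenge = "2024-06-01T09:30Z#7"  # fresh and unpredictable for each request
    response = passphrase_digest(secret, challenge)  # computed by the sender
    print(verify_sender(secret, challenge, response))  # True
```

Because the challenge changes each time, a captured response cannot be replayed later, and a perfectly written AI-generated message still fails verification if its author never knew the passphrase.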