The release and widespread use of ChatGPT – a large language model (LLM) developed by OpenAI – has attracted enormous public attention. This LLM has created great opportunities for legitimate businesses and members of the public. Unfortunately, some criminals and bad actors may seek to take advantage of this technology for malicious ends. To address this, the Europol Innovation Lab organised workshops to explore the possible risks posed by ChatGPT. This article reviews the key findings of the resulting report: how the technology can be exploited for criminal activities, and what measures can be taken to prevent such exploitation.
ChatGPT's main use case is providing quick and effective information in response to a wide range of queries, which also makes it possible to research how to commit crimes in very little time. Moreover, its advanced text generation capabilities make phishing emails and online fraud easier to produce and more convincing, thanks to context-adapted content. Even criminals with only basic English skills can use ChatGPT to generate authentic-sounding messages designed to win potential victims' trust.
As well as offering faster and more effective content generation, ChatGPT can also write code in different programming languages. This means that criminals with little technical knowledge can create basic tools for cybercrime. Additionally, as the model continues to develop, its protection mechanisms can be bypassed through prompt manipulation to produce potentially malicious code directly, further lowering the barrier for cybercriminals.
ChatGPT can also produce large amounts of realistic text with ease, which can be used to spread disinformation or to promote political claims that have already been debunked. This gives potentially false or misleading messages, and competing narratives, an enormous reach.
Given the criminal potential of this technology, Europol's report offers the law enforcement community recommendations for better identifying and preventing such abuse. More specifically, it stresses the growing importance of raising awareness and understanding of the risks linked to large language models, as well as acquiring the technical skills needed to assess the accuracy and biases of generated content. Additionally, agencies are advised to deploy customised language models only in suitable environments and with appropriate safeguards, respecting fundamental rights.
OpenAI is a company focused on creating game-changing technologies through AI, and ChatGPT has become its best-known product. Committed to encouraging responsible use of AI, OpenAI has made its models publicly accessible while promoting new ethical standards in the machine learning industry. Its scientists and developers have also published influential tutorials and research findings on the topic, demonstrating a commitment to safeguarding the public from potential abuses of the technology.
Sam Altman, a co-founder and the CEO of OpenAI, is a well-known entrepreneur who has been involved in disruptive tech companies such as Loopt, Y Combinator, Reddit and many others. Responsible for leading OpenAI's mission, he has spoken out repeatedly in favour of ethical and responsible AI adoption. He has also given talks around the world, advocating the breakthroughs and benefits of AI in a variety of personal and business contexts.