Criminals are turning to artificial intelligence to enhance their illicit activities, using generative AI to operate more efficiently and across borders. Rather than developing their own models, cybercriminals rely on existing tools that deliver dependable results. This preference reflects a focus on convenience and quick gains: a new technology must offer substantial benefits before it is worth the risk of adoption.
OpenAI recently faced a string of controversies around its latest model release, GPT-4o, ranging from the resignation of its safety team to accusations of replicating a voice without consent. Moreover, the model's Chinese-language data was found to be tainted with phrases drawn from spam websites promoting pornography and gambling, raising concerns about both performance and potential misuse. The blunder reflects a broader challenge in training large language models for Chinese applications: sourcing clean, high-quality data.
The convergence of AI and criminal activity, together with the complexities of managing AI models, underscores the need for vigilant oversight and robust safeguards in a fast-evolving technological landscape.