Microsoft Discovers Chinese, Russian Hackers Exploiting OpenAI Tools
State-backed hackers from Russia, China, and Iran have been using tools from OpenAI, the Microsoft-backed AI company, in their hacking activities. According to a report released by Microsoft on Wednesday, these hacking groups have been leveraging large language models, a form of artificial intelligence (AI) that generates human-like responses based on extensive text input. The findings prompted Microsoft to impose a blanket ban on state-sponsored hacking groups accessing its AI products.
In an interview with Reuters, Tom Burt, Microsoft’s Vice-President for Customer Security, stated that irrespective of whether these actors are violating the law or the terms of service, Microsoft does not want them to have access to this technology. The report revealed that hacking groups affiliated with Russian military intelligence, Iran’s Revolutionary Guard, and the Chinese and North Korean governments have been honing their hacking campaigns with the help of OpenAI’s tools.
While China’s embassy in the United States rejected the allegations, emphasizing the importance of the safe, reliable, and controllable deployment of AI, no immediate comments were received from the other countries mentioned in the report. The revelation that state-backed hackers are using AI tools to enhance their spying capabilities raises concerns about the potential misuse of this rapidly proliferating technology.
Notably, this is one of the few instances where an AI company has publicly discussed the use of its technologies by cybersecurity threat actors. OpenAI and Microsoft described the hackers’ use of their AI tools as early-stage and incremental, with no significant breakthroughs achieved by the cyber spies. The report shed light on how different hacking groups used large language models to pursue their objectives.
Russian hackers, allegedly affiliated with the GRU, used the models to explore various military technologies, including those related to radar and satellites, that could be deployed in conventional military operations in Ukraine. North Korean hackers utilized the models to generate content for spear-phishing campaigns targeting regional experts, while Iranian hackers employed them to craft convincing emails. In one instance, they even used the models to draft an email attempting to attract prominent feminists to a malicious website.
Tom Burt and OpenAI’s Bob Rotsted refrained from providing specific numbers on the volume of activity or the accounts suspended as a result. However, Burt justified the zero-tolerance ban on hacking groups by pointing to the novelty and power of AI technology and the concerns surrounding its deployment.
As hacking activities continue to evolve and adapt, it is crucial for technology companies like Microsoft to remain vigilant and take proactive steps to prevent malicious actors from exploiting AI tools. By imposing a strict ban on state-sponsored hacking groups, Microsoft aims to mitigate the risks associated with the potential misuse of AI technology.