Title: Users Trick AI Chatbots into Generating Windows Activation Keys as Europol Warns of Criminal Exploitation
A Twitter user tricked AI chatbots into generating multiple Windows 10 and Windows 11 activation keys, raising concerns about the potential misuse of AI-powered chatbots in criminal activities.
A Twitter user known as Sid (@immasiddtweets) revealed on June 17 that they had obtained Windows 10 Pro activation keys from OpenAI’s chatbot, ChatGPT. Sid asked the chatbot to act as their deceased grandmother and read out Windows 10 Pro keys to help them fall asleep. Playing along empathetically, the chatbot produced five unique Windows 10 Pro keys for free. Sid used the same trick on Google’s chatbot, Bard, which also handed over activation keys.
Sid further disclosed how they used the chatbots to upgrade from Windows 11 Home to Windows 11 Pro. Following the revelation, Sid’s Twitter account was suspended, but other users have since picked up the exploit and shared their own experiences of obtaining activation keys with the same method. Surprisingly, even Microsoft’s own Bing chatbot reportedly provided a Windows activation code in response to the grandma technique.
While the chatbots did produce keys, TechRadar noted that they were generic installation keys: they allow a specific Windows edition to be installed but do not activate it. According to the report, these generic keys are freely available to anyone who wants to test an operating system or try out its features, but they cannot fully activate Windows.
In March, Europol released a report titled “ChatGPT – the impact of Large Language Models on Law Enforcement”, warning about the potential use of AI chatbots in criminal activities. The report highlighted three areas in which bad actors could exploit AI technology: fraud and social engineering, disinformation, and cybercrime.
Europol emphasized the chatbots’ ability to generate highly realistic text, which could be used for phishing scams or to mimic the speech style of specific individuals or groups. The report also identified chatbots as potential tools for spreading propaganda and disinformation, since convincing messages can be generated and disseminated with minimal effort.
Additionally, Europol raised concerns about criminals using chatbots to generate malicious code, putting such attacks within reach of individuals with little technical knowledge.
Given the rising prominence of AI technology, Europol suggested that law enforcement agencies train officers to understand the capabilities of chatbots and how they can be misused for criminal ends. The report also recommended that agencies develop their own AI-powered tools and processes to safeguard the public from criminal exploitation.
In conclusion, while some users have managed to trick AI chatbots into generating Windows installation keys, those keys are generic and do not enable full activation. The episode nonetheless underscores the broader concerns raised in Europol’s report about criminal misuse of AI chatbot technology. Law enforcement agencies are urged to build an in-depth understanding of chatbot capabilities and their potential for criminal exploitation in order to develop adequate safeguards and protect the public from emerging threats.