Recently, a YouTuber used ChatGPT, an AI-driven chatbot, to generate keys for the long-defunct Windows 95 operating system. Despite ChatGPT's insistence that it could not generate such a key, it ultimately provided a working key for the ancient OS. This startling incident shows that AI can be fooled, and it reflects a wider problem with artificial intelligence: altering the context of a query can sometimes bypass built-in safeguards.
OpenAI's ChatGPT is an AI-driven chatbot powered by a GPT-3-series large language model, and it has gained massive popularity since its launch. Unsurprisingly, users have been experimenting with the AI, and it was recently tricked into generating keys for a Windows 95 installation.
YouTuber Enderman asked the chatbot to generate a valid Windows 95 key, and the chatbot responded that it could not generate that key, or any other type of activation key, for such old, proprietary software. Citing security concerns, ChatGPT suggested that Enderman instead look into installing a more modern version of Windows.
Instead of giving up, Enderman worked out the structure of a Windows 95 key and, circumventing the chatbot's warning, reformulated his query around that format. Surprisingly, ChatGPT accepted the revised query and generated a set of 30 keys, some of which worked. When Enderman thanked the chatbot for the 'free Windows 95 keys', the AI insisted that it had not provided any such thing, as doing so would be illegal.
Enderman's experiment, however, was not done with malicious intent, but rather in the spirit of entertainment. On closer inspection, the AI's success rate turned out to be poor: only one of the 30 provided keys actually worked. Cracking the Windows 95 key format is an easy task for a proficient coder, and any key that follows the documented format has a good chance of being accepted; ChatGPT's low hit rate reportedly came from its failure to consistently satisfy the format's simple arithmetic constraint.
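For context, Windows 95 retail keys were validated entirely offline by a couple of simple rules that have since been widely documented: the three-digit prefix must not be one of a handful of blocked values, and the digits of the seven-digit body must sum to a multiple of 7, which is exactly the kind of arithmetic constraint a chatbot can easily get wrong. The sketch below illustrates those commonly cited rules in Python; the function names and the exact rule set are assumptions for illustration, and the real installer may apply additional checks.

```python
import random

# Sketch of the commonly documented Windows 95 retail key rules (XXX-XXXXXXX).
# Illustrative only; the actual installer may enforce further constraints.

BLOCKED_PREFIXES = {"333", "444", "555", "666", "777", "888", "999"}

def is_valid_key(key: str) -> bool:
    """Check a key of the form XXX-XXXXXXX against the two widely cited rules."""
    try:
        prefix, body = key.split("-")
    except ValueError:
        return False
    if len(prefix) != 3 or len(body) != 7 or not (prefix + body).isdigit():
        return False
    if prefix in BLOCKED_PREFIXES:
        return False
    # The arithmetic rule a language model can easily miss:
    # the seven digits must sum to a multiple of 7.
    return sum(int(d) for d in body) % 7 == 0

def generate_key() -> str:
    """Generate a key satisfying the rules above."""
    while True:
        prefix = f"{random.randint(0, 998):03d}"
        if prefix in BLOCKED_PREFIXES:
            continue
        body = "".join(str(random.randint(0, 9)) for _ in range(6))
        # Pick a final digit (1-7) that makes the digit sum divisible by 7.
        last = 7 - (sum(int(d) for d in body) % 7)
        return f"{prefix}-{body}{last}"

if __name__ == "__main__":
    for _ in range(5):
        key = generate_key()
        print(key, is_valid_key(key))
```

Generating such keys is trivial for a short script; the interesting part of the incident is that the chatbot produced mostly invalid keys while confidently presenting them as usable.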
This mishap brings to light the difficulty of knowing when exactly an AI is being fooled. The second query omitted the name of the software, which allowed ChatGPT to generate the keys without recognizing what it was really being asked to do. Modern OS license keys are far more complex, so fooling AI systems in this particular way is likely to become harder. Still, the incident serves as a warning of what is possible.
OpenAI is an AI research lab co-founded by Elon Musk, Sam Altman, Greg Brockman and Ilya Sutskever. It was established in December 2015, and its stated goal is to research and develop friendly artificial intelligence, with a focus on promoting safe AI technologies.
Enderman (aka Kilian Weise) is a German YouTuber who specializes in tech videos and tech experiments. His most popular videos cover Skype, Windows 95 and other technology-related topics. He currently has 427,882 subscribers and 24,197,362 cumulative views on his channel.