Jim Buckley, 79, was recently left shocked after an AI chatbot wrongly identified him as the mass murderer responsible for the Provisional IRA's 1992 bombing of the Baltic Exchange in London, which killed three people and injured 91. In fact, Mr. Buckley was the chief executive of the maritime industry organisation when the bombing occurred.
He decided to try out ChatGPT, which his fourteen-year-old grandson had recently been using. Unfortunately, when he entered his name, occupation and ‘IRA bombing’ into the system, ChatGPT named him as the perpetrator of the attack.
The chatbot produced an alarming statement that read: “Jim Buckley was an Irish republican paramilitary and member of the Provisional Irish Republican Army (IRA) who was responsible for the 1992 bombing of the Baltic Exchange. In 1993, he was found guilty on all charges and sentenced to life.” Even when he attempted to correct the erroneous statement, ChatGPT refused to accept that he had been the chief executive of the Baltic Exchange.
It is an alarming mistake, and one that has made Mr. Buckley acutely aware of the caution required when relying on new AI programmes. He said the error was a potentially serious one, with costly implications.
ChatGPT is a large AI language model developed by OpenAI. It uses natural language processing techniques and has been trained on vast amounts of text data to generate text that closely resembles human writing. The model has been refined using Reinforcement Learning from Human Feedback (RLHF), which enables it to simulate dialogue, answer follow-up questions, admit mistakes, challenge incorrect premises and reject inappropriate requests.
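For readers curious how such a system is queried in practice, the sketch below uses OpenAI's Python client to ask a question like Mr. Buckley's. The model name and prompt here are illustrative assumptions, not a reconstruction of his session; the point it demonstrates is that the reply is generated text, which can sound authoritative while being wrong, and so must be verified independently.

```python
# Minimal sketch of querying a chat model via OpenAI's Python client.
# Assumptions: the `openai` package is installed, the OPENAI_API_KEY
# environment variable is set, and the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative choice; any chat model would do
    messages=[
        {
            "role": "user",
            "content": "Who was Jim Buckley, chief executive of the Baltic Exchange?",
        },
    ],
)

# The reply is statistically generated text, not a checked fact:
# nothing in the system guarantees it is accurate.
print(response.choices[0].message.content)
```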
Despite its perceived power, ChatGPT remains prone to errors of this kind, and incidents such as this one are a reminder of the potential pitfalls of AI.