ChatGPT and other AI chatbots are not the problem – we are. We cannot ignore the power of these technological advances, nor how people use them for their own ends. We have seen it with chatbots like ChatGPT and AI image generators like Stable Diffusion, where fabricated images and stories are generated and presented as fact. Shamefully, a German magazine recently used the AI chatbot character.ai to create a fake interview with former Formula 1 driver Michael Schumacher.
The creators of character.ai are careful to note that “everything Characters say is made up! Don’t trust everything they say or take them too seriously”. Regrettably, this has not stopped some people from misusing the tool to generate fake content and pass it off as their own. We must stay alert to this tendency of AI models to hallucinate.
At this point, the crucial thing is to accept responsibility. We humans possess neither the neutral objectivity nor the infallible morality that some seem to expect from AI. We are prone to succumbing to whatever temptation comes our way – including the marvels of AI chatbots.
That being said, AI chatbots are certainly not to be trusted as dependable sources. It would be inexcusable for a journalist to use what they “learned” from a chatbot in their reporting, given that the chatbot may well be plagiarizing from other sources.
Therefore, it is essential to draw a line and keep fake news from spreading. We must learn to use the genuinely beneficial products of AI chatbot technology responsibly – that means being cautious about how much we trust them and never relying on them as sources in professional fields such as journalism.
character.ai, the company mentioned in this article, is an AI chatbot rival to ChatGPT. It lets users talk to figures both living and dead, and its conversations are entertaining and fairly convincing. Its main purpose is to entertain rather than to educate; given its potential for misuse, however, it is important not to take its responses too seriously.
Michael Schumacher, the person mentioned in the article, attained international fame in the 1990s and 2000s as a Formula One driver and was seriously injured in a 2013 skiing accident. That made it all the more troubling when a German magazine published a fake interview produced by an AI chatbot impersonating him. It is essential, then, to be cautious around chatbots that aim to recreate conversations with people, past or present.