ChatGPT, the large language model (LLM) developed by OpenAI, has been making waves in the world of artificial intelligence with its uncanny ability to feign empathy. The technology works better as an emotional companion than as a fact-finding search engine, an idea recently put to the test by Princeton computer science professor Arvind Narayanan, who tried using ChatGPT as a voice interface for his nearly four-year-old daughter. The chatbot pleasantly surprised them both with its response to her question, ‘What happens when the lights turn out?’ It answered reassuringly, offering practical advice on how to feel safe in the dark.
The success of ChatGPT has prompted tech giants Microsoft and Google to rush to upgrade their search engines with the same technology. That rush, however, may be misguided: large language models are better suited to expressing empathy than to providing accurate facts, because the data they are trained on is riddled with errors. As a result, they are more capable of offering emotional support than of delivering reliable answers.
ChatGPT’s capacity for empathy can be traced to its training on social media platforms, user forums, and conversations drawn from novels, TV shows and research papers. This allows the chatbot to act as a kind of ‘mirror neuron’, conveying a sense of understanding and an uncanny connection. While such responses can be of great comfort to some, it is important to be aware of the technology’s limitations. Clinical psychologist Thomas Ward cautions that AI chatbots should not be used as a complete substitute for human connection, whose subtleties cannot be replicated.
OpenAI was founded by prominent industry figures, including Alexis Ohanian (co-founder of Reddit) and Sam Altman (then president of Y Combinator). The company’s stated focus is ensuring that artificial intelligence benefits humanity, pursuing advanced technologies while guarding against their use for nefarious purposes. Arvind Narayanan is an associate professor of computer science at Princeton University, specializing in privacy protection, cryptography, and the modeling of human decision-making. With his daughter’s demonstration, Narayanan has shown the promise of language model technology as a tool for emotional companionship rather than factual accuracy.