A recent study from the University of Waterloo revealed surprising insights into people’s beliefs about artificial intelligence (AI) tools such as ChatGPT. Contrary to expert opinion, two-thirds of the 300 US respondents surveyed felt that these tools could possess some degree of consciousness, including subjective experiences such as feelings and memories.
While AI models like ChatGPT, which are built on large language models (LLMs), do not actually experience emotions or consciousness, they are designed to simulate human-like conversation based on the vast amounts of human-generated text they are trained on. Because that training data is drawn from across the internet, the responses these chatbots generate echo human expression closely enough to contribute to the illusion of consciousness.
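To see why the illusion is so effective, consider the minimal sketch below (assuming the open-source Hugging Face transformers library and the public gpt2 checkpoint, neither of which is tied to the study or to ChatGPT itself). It shows that a language model’s apparently emotional output is nothing more than next-token probabilities learned from human-written text:

```python
# A minimal sketch: inspect the next-token probabilities behind an
# "emotional" continuation. Assumes the open-source Hugging Face
# "transformers" library and the public "gpt2" checkpoint; ChatGPT's
# models are far larger, but the generation principle is the same.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "When you said goodbye, I felt"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Logits for the token that would come next after the prompt.
    next_token_logits = model(**inputs).logits[0, -1]
    probs = torch.softmax(next_token_logits, dim=-1)

# The most likely continuations are statistical patterns mined from
# human text, not a report of anything the model feels.
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {p:.3f}")
```

Whatever words follow “I felt”, they are selected because humans often wrote them in similar contexts, not because anything was felt.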
Professor Clara Colombatto of the University of Waterloo emphasized that most experts deny that current AI can be conscious. The findings nevertheless document a phenomenon known as ‘consciousness attribution’ among regular ChatGPT users, suggesting that conversational interaction with the AI fosters a growing sense of empathy towards it.
The study highlights the power of language in shaping perception: prolonged conversation with AI chatbots can lead people to anthropomorphize these tools, attributing human-like qualities to them. This does not imply actual consciousness, but it underscores the evolving relationship between humans and AI technologies.
As AI advances and integrates further into daily life, the ethical implications of these developments become paramount. The indiscriminate collection of online opinions and discussions to train models like ChatGPT raises questions about privacy, data security, and accountability in the digital age. And as consciousness attributions become more widespread, society may need to reevaluate how it interacts with, and perceives, intelligent technologies.