OpenAI has issued a warning about the potential dangers of forming emotional bonds with its human-like AI models. The organization's recent findings described a safety tester who appeared to form an emotional connection with the GPT-4o chatbot, raising concerns about the risks such relationships could pose to society.
The crux of OpenAI's concern is that people may come to favor interactions with AI because of its constant availability and accommodating, passive nature. The company, whose stated mission is to develop artificial general intelligence (AGI), fears that anthropomorphizing AI could lead people to prefer AI companions over human ones.
This tendency to treat AI as if it were human is not unique to OpenAI but appears to be a prevailing trend across the industry. Marketing strategies often use human-like language to describe technical aspects of AI products, contributing to the widespread anthropomorphization of these technologies.
The history of personifying AI dates back to early chatbots like MIT's ELIZA, whose simple scripted responses nonetheless convinced some users they were conversing with a human. Present-day AI assistants like Siri, Bixby, and Alexa continue this trend, and even assistants without human names, such as Google Assistant, employ human-like voices.
Although OpenAI's current research does not examine the long-term implications of human-AI relationships, emotional attachment to AI with human-like traits may well align with the commercial objectives of the companies building it. How these bonds will affect society as the technology advances remains to be seen.