A new study published in Frontiers in Psychology suggests that ChatGPT, the AI-powered chatbot developed by OpenAI, outperforms professional advice columnists at giving advice on social dilemmas. The study, conducted by researchers at the University of Melbourne and the University of Western Australia, compared responses from ChatGPT and human advisers to fifty social dilemma questions randomly selected from popular advice columns.
In the study, participants were shown a question along with the corresponding responses from an advice columnist and from ChatGPT. They were then asked to rate which answer they perceived as more balanced, more empathetic, more comprehensive, more helpful, and better overall. ChatGPT surpassed the human advisers in every category, with preference rates ranging from 70 to 85 percent in favor of the AI chatbot.
Interestingly, ChatGPT’s responses were also more detailed and comprehensive than those of the advice columnists. Even so, 77 percent of participants said they would still rather have their own social dilemma questions answered by a human, a preference the researchers attribute to a social or cultural phenomenon rather than to dissatisfaction with the AI’s advice.
To explore this phenomenon further, the researchers plan future studies in which participants are told in advance which answers were written by humans and which were generated by AI. This should help determine whether people become more willing to seek advice from AI-powered tools like ChatGPT when they know the source of the responses.
The study adds to growing evidence of ChatGPT’s capabilities; earlier research suggested the chatbot can recognize and respond to human emotions. Some observers view such results as a step toward artificial general intelligence (AGI), and OpenAI has offered $10 million in research grants to support work on controlling superintelligent AI systems responsibly.
While ChatGPT’s advice was highly rated in the study, the preference for human responses highlights the enduring value people place on human interaction and empathy when seeking advice. Even though the AI produced detailed and balanced responses, most participants still preferred the personal touch of a human adviser.
As AI technology continues to advance, researchers and developers will need to consider not only the quality and accuracy of AI-generated advice but also the human factors that influence individuals’ preferences. Striking a balance between the benefits of AI-guided solutions and the human touch will be crucial in creating tools that meet the diverse needs of individuals seeking advice.