Move over, agony aunt: study finds ChatGPT gives better advice than professional columnists
Melbourne, Nov 22 – Can an AI chatbot provide better personal advice than professional columnists? A recent study published in the journal Frontiers in Psychology suggests that ChatGPT, the powerful language model developed by OpenAI, might just have the upper hand. The study found that participants perceived ChatGPT’s advice to be more balanced, complete, empathetic, helpful, and better overall than advice given by professional human columnists. Yet despite these favorable ratings, participants still said they would prefer to have their own social dilemmas addressed by a human rather than a computer.
ChatGPT has gained immense popularity since its public release in November last year, with an estimated 100 million monthly active users. Trained on massive amounts of text data scraped from the internet, ChatGPT can answer questions, provide information, and even give personal advice. While its vast knowledge and conversational abilities have impressed users and AI experts alike, whether it can deliver empathetic and sound judgment in social advice has been a topic of debate.
In the study, researchers compared the responses of ChatGPT, powered by GPT-4, with those of professional advice columnists. Participants were presented with a range of social dilemma questions and were unaware of which response came from ChatGPT. Surprisingly, about three-quarters of the participants rated ChatGPT’s advice as superior in empathy, balance, completeness, and overall helpfulness. Even so, the majority still said they would prefer advice from a human, highlighting a persistent preference for human interaction.
The challenge lies in the fact that while ChatGPT can provide information and suggestions, it lacks the ability to genuinely feel and understand human emotions. An earlier version of ChatGPT struggled to adequately address users’ emotional needs and was penalized with poor ratings. The latest version, built on GPT-4, allows users to request alternative responses and indicate which they prefer, feedback that helps the model generate more socially appropriate and empathetic answers.
The study’s findings raise questions about the role of AI chatbots in providing personal advice and whether they can augment therapy or counseling in the future. The researchers acknowledge that while appropriately designed chatbots could have a place in enhancing therapy, they should not replace professional advisers or therapists. It is important to address safety and ethical concerns, as previous chatbots have delivered potentially dangerous advice.
The study also underscores the need for advice columnists to learn from AI’s success and raise their game. The greater length and comprehensiveness of ChatGPT’s responses were not the sole reason for participants’ preference; even when constrained to similar lengths, ChatGPT’s advice was still favored. Despite the effectiveness of AI chatbots, however, the human touch, emotions, and judgment remain vital to tackling complex social dilemmas.
In an increasingly AI-driven world, the study’s findings serve as a reminder that while technology can provide valuable insights and suggestions, human interaction and empathy continue to hold significant importance in addressing personal quandaries. Advice columnists can use the emergence of AI as an opportunity to enhance their own approaches, offering a nuanced blend of compassion, knowledge, and relatability.
As the adoption of AI continues to grow, it will be crucial to strike a balance between technological advancement and the preservation of human connection, empathy, and emotional understanding.