AI Chatbots in Healthcare and HR: Lessons Learned from a Controversial Case
Artificial intelligence (AI) chatbots have become a popular tool for businesses to field customer inquiries, and in some cases they have even replaced humans in call center roles. In sectors such as healthcare and human resources, however, these tools demand particular caution and ethical oversight.
A recent, highly publicized case involving a chatbot called Tessa has shed light on the potential pitfalls of deploying these tools on the front line. The National Eating Disorders Association (NEDA) in the US decided to replace its helpline staff with Tessa, citing rising call volumes, long wait times, and the legal liabilities of working with volunteer staff. Shortly after the switch, however, Tessa was taken offline following reports that it had given problematic advice to people seeking help for eating disorders, advice that risked worsening their symptoms.
Dr. Ellen Fitzsimmons-Craft and Dr. C. Barr Taylor, the researchers who helped create Tessa, made it clear that the chatbot was never intended to replace an existing helpline or to provide immediate assistance to people experiencing intense eating disorder symptoms. The original Tessa was a rule-based chatbot: every response was pre-programmed, so it had limited ability to understand or adapt to user messages its designers had not anticipated.
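To make that design concrete, here is a minimal sketch of a rule-based chatbot in Python. The keywords, replies, and fallback message are illustrative inventions, not Tessa's actual rule set; the structural point is that every reply is written in advance by a human.

```python
# Minimal sketch of a rule-based chatbot. All rules below are
# illustrative: the bot can only ever return text its designers
# approved in advance.

RULES = {
    "hours": "Our helpline is staffed Monday to Friday, 9am to 5pm.",
    "resources": "You can find vetted self-help resources on our website.",
}

FALLBACK = "I'm not able to help with that. Please contact a professional."

def respond(user_message: str) -> str:
    text = user_message.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply   # only pre-approved text is ever returned
    return FALLBACK        # unanticipated input gets a safe default

print(respond("What are your hours?"))   # matches a pre-scripted answer
print(respond("I feel overwhelmed"))     # falls through to the safe default
```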
Tessa became a generative AI chatbot, similar in approach to ChatGPT, when the company hosting it decided to enhance its question-and-answer feature. Chatbots of this kind simulate human conversational patterns and can give more realistic and useful responses. However, because they draw on large bodies of data and use machine learning and natural language processing to compose customized answers at runtime, nothing restricts their output to vetted text, and the result can be harmful, as it was in Tessa’s case.
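The contrast with the rule-based sketch above can be shown in a few lines. In this hedged sketch, `generate` is a placeholder for a call to any hosted large language model, not a real API; structurally, no pre-approved list constrains what comes back.

```python
def generate(prompt: str) -> str:
    # Placeholder for a network call to a hosted large language model;
    # in a real system the reply would come back from the model provider.
    return "(model-generated text would appear here)"

def respond_generative(user_message: str) -> str:
    prompt = (
        "You are a supportive wellness assistant. "
        f"Reply helpfully to: {user_message}"
    )
    # The reply is synthesized at runtime from learned patterns, so it
    # can drift outside anything the designers reviewed in advance.
    return generate(prompt)
```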
Lessons need to be learned from incidents like this as AI becomes integrated into ever more systems. Ethical oversight, meaning a human in the loop and adherence to the original purpose of the chatbot’s design, might have prevented the harmful outcomes seen with Tessa; a simple version of such a safeguard is sketched below.
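A human-in-the-loop arrangement amounts to a gate between the model and the user. The risk terms and review queue below are assumptions for illustration only; a real deployment would use clinically designed triage criteria and trained reviewers.

```python
def generate(prompt: str) -> str:
    # Same stand-in for a hosted model call as in the earlier sketch.
    return "(model-generated text would appear here)"

RISK_TERMS = ("hurt", "hopeless", "can't cope")  # illustrative triggers only

def is_high_risk(text: str) -> bool:
    lowered = text.lower()
    return any(term in lowered for term in RISK_TERMS)

def respond_with_oversight(user_message: str, review_queue: list) -> str:
    draft = generate(user_message)
    if is_high_risk(user_message) or is_high_risk(draft):
        # Human in the loop: the draft is held for review, not sent.
        review_queue.append((user_message, draft))
        return "Thank you for reaching out. A team member will follow up shortly."
    return draft

queue = []
print(respond_with_oversight("I feel hopeless", queue))  # escalated to a human
```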
The UK’s approach to AI integration appears somewhat fragmented: the advisory board of the Centre for Data Ethics and Innovation has been dissolved even as the Frontier AI Taskforce has been established. AI systems are also being trialed in London as tools to aid workers, although not as replacements for helplines.
Using AI chatbots creates a tension between ethical considerations and business interests. Striking a balance between the well-being of individuals and the efficiency that AI offers is crucial. Yet in some areas where organizations deal directly with the public, AI-generated responses and simulated empathy may never be enough to replace genuine humanity and compassion, especially in fields such as medicine and mental health.
As AI advances and is integrated into more sectors, ethical standards and human oversight must remain the priority. Cases like Tessa’s should serve as a reminder that careful, thoughtful implementation is necessary to avoid harm and to protect the well-being of those seeking assistance.