AI has become a large part of our everyday lives, powering the services and applications we use. Behind the scenes, countless hours go into training AI models to ensure they work efficiently and safely. Recently, reports have shed light on the traumatizing experiences of the AI specialists who helped train OpenAI’s ChatGPT. The workers, contracted through Sama, labeled data for an AI technique called Reinforcement Learning from Human Feedback (RLHF) and described their harrowing experiences to the US Sun.
Richard Mathenge, one of the AI engineers, told Slate that he and his team were required to spend nine hours a day, five days a week, training the model. The work was disconcerting: they were made to read and categorize explicit and disturbing texts. This process keeps language models fit for consumer use, but it took a heavy toll on the trainers’ mental health.
Among the material they were exposed to were explicit descriptions of heinous crimes, including child abuse and bestiality. Mathenge grew concerned when he noticed signs of emotional distress and waning enthusiasm among the AI specialists. He said they were not prepared to handle such graphic content, underscoring the harm the work was doing to them.
Mophat Okinyi, a colleague of Mathenge’s, described the numerous medical issues he has suffered because of the work, including chronic panic attacks, insomnia, and depression. He even attributes the breakdown of his family and the departure of his wife to the psychological impact of training ChatGPT.
Both AI specialists highlighted the inadequate support they were given throughout the process. They believe OpenAI and Sama should have provided comprehensive wellness programs, individual counseling, and limits on the explicit content they were exposed to. Mathenge said they did have a counselor, but one who was “not professional” or qualified to deal with their traumatic experiences. He added that the counselor asked only basic questions such as “What is your name?” and “How do you find your work?”
These AI engineers played an integral part in the success of ChatGPT, and although their experience was painful, they take pride in their contribution. It is important that OpenAI and AI annotation companies like Sama prioritize the well-being of their employees and offer more comprehensive support systems to help AI specialists cope with emotional distress. Mental health services, personalized counseling, and reduced exposure to explicit content are essential to protecting these workers.