Richard Mathenge, a Kenyan worker, suffered a traumatic experience while training OpenAI's ChatGPT model to avoid explicit content. Mathenge and his team were responsible for teaching the AI to recognize explicit material, which meant categorizing offensive texts and illustrations of child sexual abuse, bestiality, and other explicit scenes. Although the model ultimately succeeded in blocking the creation of such content, the task took a heavy mental toll on Mathenge and his team.
Exposed to explicit content for hours each day, Mathenge and his team developed insomnia, anxiety, depression, and panic attacks, and their relationships with others became strained. Although OpenAI had promised routine counseling, the team found the support insufficient and the counselor inexperienced. OpenAI had contracted Sama, a content moderation company, to provide wellness programs and counseling; an investigation into Sama, however, revealed that the company was exiting the content moderation business.
Despite the emotional damage they suffered, Mathenge and his colleagues take satisfaction in their work, knowing the AI now effectively detects explicit content and prevents its creation. OpenAI has stated that it cares about the mental health of its workers and had relied on Sama for counseling. Still, the workers believe they were underpaid, claiming they earned only about $1 per hour. Mathenge remains hopeful that the tradeoff was worth it in the end, despite the personal costs he endured.