Kenyan workers involved in training OpenAI’s ChatGPT have called for an investigation into the working conditions of AI content moderators. The workers, employed by companies such as Sama (formerly Samasource), which provide content moderation services to tech giants like Google, Meta (formerly Facebook), and OpenAI, have submitted a petition to Kenya’s National Assembly. The petition sheds light on the disturbing nature of the work these content moderators perform, including exposure to harmful and explicit content without adequate psychological support.
The petition reveals that these Kenyan employees, who have been training ChatGPT since 2021, were required to categorize and label internet content depicting sexual and graphic violence. This meant they were regularly exposed to material involving bestiality, necrophilia, incestuous sexual violence, rape, defilement of minors, self-harm, and murder, among other disturbing subjects. The nature of this work was not fully disclosed to the workers in their employment contracts, and they were not provided with adequate support to cope with the psychological toll it took on them.
Furthermore, when the contract between Sama and OpenAI abruptly ended, the workers were sent back home without receiving their pending dues or any medical care for the mental health issues they developed as a result of their work. This highlights the exploitative nature of the outsourcing model employed by big tech companies, which often fails to prioritize the rights and well-being of the workers involved.
This issue is not limited to OpenAI and Sama. Time magazine previously reported that OpenAI had also employed Kenyan workers to label snippets of text from the darkest corners of the internet, including content depicting violence, hate speech, and sexual abuse. These labeled samples were used to train the ChatGPT models. The investigation also revealed that the data labelers were paid low wages, underscoring the exploitative nature of the industry.
The petition by the Kenyan workers raises important concerns about the deployment of AI and the responsibility of developers and the companies that deploy these systems to ensure the well-being and rights of their workers. It emphasizes the need for comprehensive regulations that protect workers and prevent exploitation in the development and deployment of AI systems.
Legal action has already been taken in Kenya regarding the treatment of content moderators. In June of this year, a Kenyan employment court ordered Meta to provide proper medical, psychiatric, and psychological care to content moderators in Nairobi who screened content for Facebook. The court ruled that Meta was the primary employer of the workers, while companies like Sama were merely agents.
Together, the petition and the court ruling highlight the need for greater accountability and regulation in the AI industry. The well-being of content moderators, and the impact of their work on their mental health, must be prioritized, and the exploitative outsourcing model employed by big tech companies must be addressed. The development and deployment of AI should not come at the cost of human well-being and dignity.