Kenyan workers who helped minimize harmful content on ChatGPT, OpenAI’s AI chatbot, have filed a petition urging Kenyan legislators to investigate alleged exploitation by big tech companies that outsource content moderation and AI work to the country.
The petition specifically requests an investigation into the type of work being outsourced, the working conditions, and the operations of big tech firms that utilize companies like Sama for their services. Sama has faced lawsuits alleging exploitation, union-busting, and illegal mass layoffs of content moderators.
The workers’ appeal comes in response to a Time report exposing the inadequate compensation received by Sama employees involved in making ChatGPT less toxic. The workers’ duties involved reading and labeling graphic text, including explicit descriptions of murder, bestiality, and rape. In late 2021, Sama was contracted by OpenAI to label textual descriptions of sexual abuse, hate speech, and violence as part of the process to detect harmful content within ChatGPT.
The workers claim they were exploited, denied necessary psychosocial support, and exposed to distressing content that caused severe mental illness. They are urging lawmakers to enact legislation regulating the outsourcing of dangerous and harmful technology work and protecting the rights of the individuals employed in such roles.
Sama, based in San Francisco, counts major companies such as Google and Microsoft among its clients and specializes in computer vision data annotation, curation, and validation. The company, which employs over 3,000 people across its hubs, including in Kenya, recently decided to focus solely on computer vision data annotation, resulting in the layoff of 260 workers involved in content moderation.
Responding to allegations of exploitation, OpenAI acknowledged the challenging nature of the work and stated that it had established ethical and wellness standards for its data annotators, though specific details were not disclosed. OpenAI emphasized that human data annotation is a crucial component in the development of safe and beneficial artificial general intelligence.
OpenAI’s spokesperson expressed gratitude for the efforts of the researchers and annotation workers in Kenya and worldwide, recognizing their invaluable contributions to ensuring the safety of AI systems.
Sama has expressed willingness to collaborate with the Kenyan government in implementing essential protections across all companies. The company welcomes third-party audits of working conditions, asserts that employees have multiple channels to voice concerns, and emphasizes its commitment to fair wages and dignified working environments, as evidenced by internal and external evaluations.
The petition now places the issue before Kenya’s legislators, who must decide whether to investigate the outsourcing practices of big tech companies and the conditions under which this work is performed. The responses from OpenAI and Sama suggest a willingness to address the concerns raised and to prioritize the well-being of their workers.