Kenyan Workers Demand Investigation into Disturbing AI Content Moderation Work Conditions


Kenyan workers who helped train OpenAI's ChatGPT have called for an investigation into the working conditions of AI content moderators. The workers, employed by outsourcing firms such as Sama (formerly Samasource), which provides content moderation services to tech giants including Google, Meta, and OpenAI, have submitted a petition to Kenya's National Assembly. The petition sheds light on the disturbing nature of the work these content moderators perform, including exposure to harmful and explicit material without adequate psychological support.

The petition reveals that these Kenyan employees, who have been training ChatGPT since 2021, were required to categorize and label internet content depicting sexual and graphic violence. This meant regular exposure to material involving bestiality, necrophilia, incestuous sexual violence, rape, defilement of minors, self-harm, and murder, among other disturbing subjects. The nature of this work was not fully disclosed in their employment contracts, and the workers were not given sufficient support to cope with its psychological toll.

Furthermore, when the contract between Sama and OpenAI abruptly ended, the workers were sent home without their pending dues and without medical care for the mental health issues they had developed through their work. This highlights the exploitative nature of the outsourcing model used by big tech companies, which often fails to prioritize the rights and well-being of the workers involved.

These allegations are not new. Time magazine previously reported that OpenAI had employed Kenyan workers to label snippets of text from the darkest corners of the internet, including content depicting violence, hate speech, and sexual abuse. The labeled samples were used to train the models behind ChatGPT. That investigation also found the data labelers were paid low wages, reinforcing the exploitative nature of the industry.


The petition raises important concerns about the responsibility of the companies that develop and deploy AI to safeguard the well-being and rights of the workers behind it. It underscores the need for comprehensive regulations that protect workers and prevent exploitation in the development and deployment of AI systems.

Legal action has already been taken in Kenya regarding the treatment of content moderators. In June of this year, a Kenyan employment court ordered Meta to provide proper medical, psychiatric, and psychological care to content moderators in Nairobi who screened content for Facebook. The court ruled that Meta was the primary employer of the workers, while companies like Sama were merely agents.

This petition and the court ruling highlight the need for increased accountability and regulation in the AI industry. The well-being of content moderators and the impact of their work on their mental health must be prioritized. The exploitation of workers and the outsourcing model employed by big tech companies must be addressed. The development and deployment of AI should not come at the cost of human well-being and dignity.

Frequently Asked Questions (FAQs) Related to the Above News

Who are the workers calling for an investigation into AI content moderation work conditions?

The workers involved in training OpenAI's ChatGPT algorithm, employed by companies such as Samasource, are calling for an investigation into AI content moderation work conditions.

What did the petition submitted to Kenya's National Assembly reveal?

The petition shed light on the disturbing nature of the work performed by content moderators, including exposure to harmful and explicit content without adequate psychological support.

What kind of content were the Kenyan workers exposed to?

The Kenyan workers were exposed to content depicting sexual and graphic violence, including bestiality, necrophilia, incestuous sexual violence, rape, defilement of minors, self-harm, and murder.

Were the workers aware of the nature of their work in advance?

The nature of the work was not fully disclosed to the workers in their employment contracts.

Were the workers provided with sufficient psychological support?

No, the workers were not provided with sufficient psychological support to cope with the psychological toll of their work.

What happened when the contract between Sama and OpenAI ended?

When the contract ended, the workers were sent back home without receiving their pending dues or any medical care for the mental health issues they developed as a result of their work.

Is this issue limited to OpenAI and Samasource?

No. A Kenyan employment court has also addressed Meta's treatment of Facebook content moderators, and Time magazine reported that OpenAI employed Kenyan workers to label text depicting violence, hate speech, and sexual abuse for low wages.

What does the petition highlight about the deployment of AI?

The petition emphasizes the need for comprehensive regulations to protect workers and prevent exploitation in the development and deployment of AI systems.

Has any legal action been taken regarding the treatment of content moderators?

Yes, in June of this year, a Kenyan employment court ordered Meta (formerly Facebook) to provide proper medical, psychiatric, and psychological care to content moderators in Nairobi who screened content for the platform.

What does the court ruling indicate?

The court ruling indicates that Meta is considered the primary employer of the workers, while companies like Sama are viewed as agents.

What is the message regarding the well-being of content moderators and the deployment of AI?

The well-being of content moderators and the impact of their work on their mental health should be prioritized, and the exploitation of workers in the AI industry must be addressed.


Aryan Sharma
Aryan is our dedicated writer and manager for the OpenAI category. With a deep passion for artificial intelligence and its transformative potential, Aryan brings a wealth of knowledge and insights to his articles. With a knack for breaking down complex concepts into easily digestible content, he keeps our readers informed and engaged.
