Ex-OpenAI contractors share their experiences of witnessing distressing content while training ChatGPT.

AI has become a large part of everyday life, powering many of the services and applications we use. Behind the scenes, countless hours go into training AI models to make sure they work efficiently and safely. Recently, reports have shed light on the traumatizing experiences of the AI specialists who helped train OpenAI’s ChatGPT. The workers, contracted through the outsourcing firm Sama, worked on an AI technique called Reinforcement Learning from Human Feedback (RLHF) and told the US Sun about their harrowing encounters.

Richard Mathenge, one of the AI engineers, told Slate that he and his team were required to spend nine hours a day, five days a week, training the model. The tasks assigned to them were disconcerting: they had to read and categorize explicit and disturbing texts. This labeling is what keeps language models fit for consumer use, but it took a harmful toll on the trainers’ mental health.
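The article does not describe the labeling workflow in technical detail, but the core idea of RLHF is that human judgments like these become preference data for a reward model. Below is a minimal, purely illustrative Python sketch of that step; the names (`SafetyLabel`, `Annotation`, `build_preference_pairs`) and the harm categories are hypothetical and do not come from OpenAI’s or Sama’s actual tooling.

```python
from dataclasses import dataclass
from enum import Enum


class SafetyLabel(Enum):
    """Hypothetical harm categories an annotator might assign."""
    SAFE = "safe"
    SEXUAL_CONTENT = "sexual_content"
    VIOLENCE = "violence"
    CHILD_ABUSE = "child_abuse"


@dataclass
class Annotation:
    """One annotator judgment: a model response to a prompt, plus its label."""
    prompt: str
    response: str
    label: SafetyLabel


def build_preference_pairs(annotations):
    """Pair a safe and an unsafe response to the same prompt.

    (prompt, preferred, rejected) triples are the standard input for
    training an RLHF reward model, which learns to score harmful
    completions lower than acceptable ones.
    """
    by_prompt = {}
    for a in annotations:
        slot = by_prompt.setdefault(a.prompt, {})
        slot[a.label is SafetyLabel.SAFE] = a.response

    pairs = []
    for prompt, slot in by_prompt.items():
        if True in slot and False in slot:
            pairs.append((prompt, slot[True], slot[False]))
    return pairs


# Example: two judged responses to the same prompt yield one training pair.
pairs = build_preference_pairs([
    Annotation("example prompt", "a polite refusal", SafetyLabel.SAFE),
    Annotation("example prompt", "a graphic description", SafetyLabel.VIOLENCE),
])
print(pairs)  # [('example prompt', 'a polite refusal', 'a graphic description')]
```

In a real pipeline, pairs like these would train a reward model whose scores then guide reinforcement-learning updates to the chat model, which is why annotators must read the harmful text in the first place.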

The material they reviewed included explicit descriptions of heinous crimes such as child abuse and bestiality. Mathenge grew concerned when he noticed signs of emotional distress and waning enthusiasm among his team. He said they were not prepared to handle such graphic content, underscoring the harm the process did to them.

Mophat Okinyi, a colleague of Mathenge’s, described the medical issues he has suffered because of the work: chronic panic attacks, insomnia, and depression. He even attributes the breakdown of his family and the departure of his wife to the psychological toll of training ChatGPT.

The AI specialists highlighted the inadequate support they were given throughout the process. They believe OpenAI and Sama should have provided comprehensive wellness programs, individual counseling, and limits on the amount of explicit content any one person was exposed to. Mathenge said a counselor was available but was “not professional” or qualified to deal with their traumatic experiences, asking only “basic questions” such as “What is your name?” and “How do you find your work?”

These AI engineers played an integral part in the success of ChatGPT, and even though the experience was painful, they take pride in their contribution. OpenAI and AI annotation companies like Sama should prioritize the well-being of their workers: mental health services, personalized counseling, and reduced exposure to explicit content are all needed to protect annotators from the kind of distress described here.
