ChatGPT AI Accused of Racism Over Chinese-Style Tweet

Recently, OpenAI’s popular AI-powered chatbot, ChatGPT, has been accused of racial bias. The accusation came after AliB, a former pharmacologist, tested the ChatGPT model and found that its output was laced with racist content.

AliB, who runs the Twitter account ‘SatoshiSwole’, asked ChatGPT to compose a Chinese-style tweet and felt the output he received was overtly racist. A screenshot of the tweet ChatGPT wrote was subsequently posted on Medium, prompting an outcry in the AI chatbot community.

Reid Blackman, CEO of Virtue, an ethical-risk consultancy, briefed CNN Politics on the issue. He explained that the content ChatGPT produces depends directly on the data used to train its algorithm: if biases or discriminatory attitudes are present in the training datasets, ChatGPT will naturally reproduce them.

OpenAI CEO on the issue

OpenAI CEO Sam Altman released a statement clarifying the issue, saying that the bug that may have led ChatGPT to produce racially insensitive content has been fixed. New plugins for ChatGPT have also been released to the public.

Since its launch in late 2022, ChatGPT has become one of the most popular natural language processing tools. It is designed to assist users with various tasks by carrying on conversations with them, and it can also access the internet to improve its capabilities.

Person Mentioned

The person mentioned in this article is AliB, a former pharmacologist who runs the ‘SatoshiSwole’ Twitter account and tested ChatGPT to see how it would produce content in different languages. AliB posted screenshots of the racially insensitive output in a Medium blog post, leading many to question OpenAI over racial bias.


Company Mentioned

Virtue is a digital ethical-risk consultancy that identifies, assesses, and addresses ethical risks in digital products and services. It is mentioned in this article because its CEO, Reid Blackman, briefed CNN Politics on the AI racism issue, explaining that AI bots output content shaped by the biases of their training data. When training such models, it is important to use culturally aware datasets free of existing biases.


