Daum Facing Criticism Over Political Bias in Comment Screening Algorithm, South Korea

**Daum Faces Backlash Over Alleged Political Bias in Comment Screening Algorithm**

Daum, the second-largest internet portal in Korea, is under scrutiny over its comment screening algorithm amid allegations of political bias. Concerns have been raised about the implications of such bias for the upcoming general elections. The criticism follows speculation that Daum has a left-leaning inclination.

Park Sung-joong, a representative of the ruling People Power Party and a member of the Science, ICT, Broadcasting, and Communications Committee, expressed concern over political bias in both news distribution and comment filtering on internet portals. He pointed to the selective deletion or hiding of comments such as 'daeggae' and 'daeggaeMoon,' which target liberal figures, while comments critical of conservative politicians, including President Yoon Suk-yeol, were reportedly left untouched.

The term daeggaeMoon is a derogatory expression used to belittle supporters of former President Moon Jae-in; it combines a vulgar word for the head with explicitly violent language implying physical harm. SafeBot, an AI-powered software application used by Daum since December 2020, is responsible for detecting and blocking comments containing offensive language or vulgar slang.

Rep. Park also questioned the validity of the data labeling process used to train SafeBot's AI model. Since the labeling is carried out by Kakao employees, he speculated that the deletion or hiding of comments like daeggaeMoon is no mere coincidence.
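The article does not describe Kakao's labeling workflow in detail. As a rough, hypothetical sketch of why this stage matters, the snippet below shows how human-assigned labels become the ground truth a supervised filter is trained to reproduce; any systematic leaning in the labels carries straight into the model. All comments, labels, and field names here are invented for illustration.

```python
# Hypothetical illustration: human-assigned labels become the ground truth
# that a supervised comment filter is trained to reproduce. The comments,
# labels, and field names are invented for this sketch.
labeled_comments = [
    {"text": "insulting comment aimed at politician A", "label": "hide"},
    {"text": "insulting comment aimed at politician B", "label": "show"},
    {"text": "ordinary opinion about a policy",         "label": "show"},
]

# Whatever pattern the labelers applied, consistent or not, is what the model
# learns: the training loop treats these labels as the definition of "correct"
# and has no independent notion of political neutrality.
texts  = [row["text"] for row in labeled_comments]
labels = [row["label"] for row in labeled_comments]
```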

At the heart of the issue lies the question of how far the screening technology employed by internet portals should be allowed to interfere with the publication of raw public opinion. How hate speech is defined, and what responsibility platforms bear for it, are also under scrutiny.


Internet portals turned to technological solutions to tackle the surge in malicious comments targeting disaster victims, celebrities, and athletes that emerged in the early and mid-2000s. Naver, the largest internet portal in Korea, implemented an automatic word replacement function in 2012 and has since advanced its software to use AI for comment management. Similarly, Kakao introduced SafeBot to automatically hide comments containing forbidden words.
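Neither company publishes its filtering code, so the sketch below only illustrates the two word-list techniques described above: automatically replacing banned words and hiding comments that contain any forbidden word. The word list and function names are hypothetical, and the sketch deliberately ignores the AI components both companies later added.

```python
# Hypothetical sketch of word-list moderation as described above; the
# forbidden-word list and function names are invented for illustration.
FORBIDDEN_WORDS = {"badword1", "badword2"}  # real systems maintain large curated lists

def replace_words(comment: str, mask: str = "***") -> str:
    """Automatic word replacement: swap any banned word for a mask."""
    return " ".join(mask if word.lower() in FORBIDDEN_WORDS else word
                    for word in comment.split())

def should_hide(comment: str) -> bool:
    """Forbidden-word rule: hide the comment if any banned word appears."""
    return any(word.lower() in FORBIDDEN_WORDS for word in comment.split())

print(replace_words("this is badword1 in a sentence"))  # this is *** in a sentence
print(should_hide("this is badword1 in a sentence"))    # True
```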

Daum denied the allegations of political bias, with its parent company Kakao stating that SafeBot does not consider the political context of specific words. Kakao clarified that daeggaeMoon was banned for its violent nature, not for its reference to any political group, while expressions like jwi-Baky or dak-Geun-hye are allowed because they are composed of neutral terms and are not classified as hate speech.

The controversy surrounding the objectivity of AI algorithms used by internet portals has highlighted the need for transparency. While some argue that the algorithms should be made public to resolve doubts, internet portal companies regard their composition as a trade secret. However, as AI continues to be woven into everyday IT services, the objectivity of these algorithms is likely to remain a subject of debate.

Naver, the largest internet portal and news distribution platform in Korea, also employs AI software called Cleanbot to filter hate speech by analyzing the entire context of comments rather than specific words. This approach differs from Daum’s SafeBot.
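To illustrate the contrast the article draws between word-list matching and whole-comment analysis, here is a toy sketch using scikit-learn. The tiny training set, labels, and model choice are invented for illustration; neither Cleanbot's nor SafeBot's actual model is public, and a real system would rely on a far larger labeled corpus and a genuinely context-aware model.

```python
# Toy contrast between a banned-word lookup and a classifier that scores the
# whole comment. Training data, labels, and model choice are invented; this is
# not how Cleanbot or SafeBot is actually implemented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

FORBIDDEN_WORDS = {"idiot"}  # word-list approach: the token alone decides

def word_list_flag(comment: str) -> bool:
    return any(w.lower().strip(".,!?") in FORBIDDEN_WORDS for w in comment.split())

# Whole-comment approach: a classifier assigns a score to the full text.
train_texts = [
    "you are an idiot and deserve to be hurt",   # abusive
    "only an idiot would write such nonsense",   # abusive
    "i strongly disagree with this policy",      # civil
    "thanks for the thorough reporting",         # civil
]
train_labels = [1, 1, 0, 0]  # 1 = hide, 0 = allow (hypothetical labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

comment = "calling someone an idiot is not okay"
print(word_list_flag(comment))             # True: the banned word alone triggers it
print(clf.predict_proba([comment])[0][1])  # probability based on the whole comment
```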

As the debate continues, there is a call for social consensus on the seriousness of algorithm bias, potentially leading to regulation through legislation. Some suggest the voluntary disclosure of training data and the collection of opinions from citizens and academia through public hearings. However, this process is time-consuming and costly.


The controversies surrounding comment-screening AI on internet portals are likely to persist, especially as generative AI like ChatGPT finds its way into daily IT services. At present, AI companies refrain from disclosing training data due to copyright claims.

It remains crucial to strike a balance between effectively filtering hateful content and preserving freedom of speech, while ensuring transparency and accountability in algorithmic decision-making on internet platforms.

*[Translated article: Daum Facing Criticism Over Political Bias in Comment Screening Algorithm – Original Korean article by Kwen Yu-Jin]*

Frequently Asked Questions (FAQs) Related to the Above News

What is Daum and why is it facing backlash?

Daum is the second largest internet portal in Korea. It is facing backlash due to allegations of political bias in its comment screening algorithm.

What are the concerns raised about Daum's algorithm?

Concerns have been raised about the alleged left-leaning inclination of Daum's algorithm, which could have implications for the upcoming general elections. Critics claim that comments targeting liberal figures are selectively deleted or hidden, while those critical of conservative politicians are left untouched.

What specific comments have sparked controversy?

Comments such as 'daeggae' and 'daeggaeMoon' have sparked controversy. 'DaeggaeMoon' is a derogatory term used to belittle supporters of former President Moon Jae-in. Critics argue that these comments were deleted or hidden, indicating a bias in the algorithm.

What is SafeBot and how does it work?

SafeBot is an AI-powered software application used by Daum since December 2020. It is responsible for detecting and blocking comments containing offensive language or vulgar slang. It utilizes AI to identify and filter out inappropriate content.

Who raised concerns about the data labeling process for SafeBot?

Representative Park Sung-joong, a member of the ruling People Power Party and of the Science, ICT, Broadcasting, and Communications Committee, expressed concerns about the data labeling process used to train SafeBot's AI model. He speculates that the deletion or hiding of certain comments is not a coincidence and questions the validity of the process, which is carried out by Kakao employees.

How have other internet portals in Korea addressed comment screening?

Other internet portals, such as Naver, have implemented technological solutions to tackle malicious comments. Naver uses AI software called Cleanbot, which analyzes the entire context of comments to filter out hate speech, rather than relying solely on specific words.

What is Daum's response to the allegations of bias?

Daum and its parent company, Kakao, have denied the allegations of political bias. They state that SafeBot does not consider the political context of specific words and that comments like daeggaeMoon are banned due to their violent nature, not their reference to any political group.

What is the ongoing debate surrounding internet portal algorithms?

The debate centers on the objectivity of the AI algorithms used by internet portals and the need for transparency. While some argue that the algorithms should be made public to resolve doubts, internet portal companies regard their composition as a trade secret.

Are there any proposed solutions to address algorithm bias?

There are suggestions for social consensus on the seriousness of algorithm bias, potentially leading to regulation through legislation. Some propose the voluntary disclosure of training data and the collection of opinions through public hearings. However, this process is time-consuming and costly.

What is the importance of balancing content filtering and freedom of speech?

It is crucial to strike a balance between effectively filtering hateful content and preserving freedom of speech. Transparency and accountability in algorithmic decision-making on internet platforms are necessary to address concerns about bias.
