Daum Facing Criticism Over Political Bias in Comment Screening Algorithm, South Korea


Daum, the second-largest internet portal in South Korea, is under scrutiny over alleged political bias in its comment screening algorithm, with critics warning of implications for the upcoming general elections. The criticism follows speculation that Daum leans left politically.

Park Sung-joong, a representative of the ruling People Power Party and a member of the Science, ICT, Broadcasting, and Communications Committee, expressed concern over political bias in both news distribution and comment filtering on internet portals. He pointed to the selective deletion or hiding of comments containing terms such as ‘daeggae’ and ‘daeggaeMoon,’ which target liberal figures, while comments critical of conservative politicians such as President Yoon Suk-yeol were left untouched.

The term daeggaeMoon is a derogatory expression used to belittle supporters of former President Moon Jae-in; it combines a crude word for the human head with explicitly violent language implying physical harm. SafeBot, an AI-powered application Daum has used since December 2020, detects and blocks comments containing offensive language or vulgar slang.

Rep. Park also questioned the validity of the data labeling process used to train SafeBot’s AI model. Because the labeling is done by Kakao employees, he speculates that the deletion or hiding of comments like daeggaeMoon is not a mere coincidence.

At the heart of the issue lies the question of how much screening technology employed by internet portals should interfere with the publication of unfiltered public opinions. Defining hate speech and determining the responsibility of platforms towards it are also under scrutiny.


In the early and mid-2000s, internet portals turned to technological solutions to tackle a surge in malicious comments targeting disaster victims, celebrities, and athletes. Naver, the largest internet portal in Korea, implemented an automatic word-replacement function in 2012 and has since advanced its software to use AI for comment management. Similarly, Kakao introduced SafeBot to automatically hide comments containing forbidden words.
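
The forbidden-word approach described above can be sketched in a few lines. This is a hypothetical illustration only: the word list, the case-insensitive substring match, and the `should_hide` function are invented for this sketch and do not reflect Kakao's actual SafeBot implementation, which is not public.

```python
# Hypothetical sketch of a forbidden-word comment filter, in the spirit of
# the word-list approach described in the article. The word list and the
# matching rule are illustrative assumptions, not SafeBot's real logic.

FORBIDDEN_WORDS = {"daeggae", "daeggaemoon"}  # illustrative entries only


def should_hide(comment: str) -> bool:
    """Hide the comment if it contains any forbidden word (case-insensitive)."""
    lowered = comment.lower()
    return any(word in lowered for word in FORBIDDEN_WORDS)


print(should_hide("an ordinary policy comment"))   # False
print(should_hide("yet another daeggaeMoon post"))  # True
```

A filter this simple is indifferent to context, which is exactly the property at the center of the controversy: it hides a word wherever it appears, regardless of who is being criticized or why.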

Daum denied the allegations of political bias, with Kakao, Daum’s parent company, stating that SafeBot does not consider the political context of specific words. They clarified that daeggaeMoon was banned for its violent nature, not its reference to any political group. Expressions such as jwi-Baky or dak-Geun-hye are allowed because they are composed of neutral terms and are not classified as hate speech.

The controversy surrounding the objectivity of AI algorithms used by internet portals has highlighted the need for transparency. While some argue for the algorithms to be made public to resolve doubts, internet portal companies deem algorithm composition as trade secrets. However, as AI continues to integrate into various daily IT services, the objectivity of these algorithms will likely remain a subject of debate.

Naver, the largest internet portal and news distribution platform in Korea, also employs AI software called Cleanbot to filter hate speech by analyzing the entire context of comments rather than specific words. This approach differs from Daum’s SafeBot.
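
The architectural difference between the two approaches can be illustrated with a toy contrast: exact word blocking versus scoring the whole comment. Everything here is an assumption for illustration; the cue weights, threshold, and both functions are invented, and neither Cleanbot's nor SafeBot's internals are publicly documented.

```python
# Toy contrast between word-based blocking and whole-comment scoring,
# illustrating the difference the article describes. All weights and the
# threshold are invented for this sketch; real systems use trained models.

BLOCKLIST = {"badword"}  # word-based filter: flat token lookup

# Context scorer: every token contributes a weight; mitigating context
# (e.g. quoting or reporting abuse) pulls the score back down.
CUE_WEIGHTS = {"badword": 2.0, "hate": 1.5, "reported": -1.0, "quote": -1.0}
THRESHOLD = 2.0


def word_filter(comment: str) -> bool:
    """Hide if any token matches the blocklist, regardless of context."""
    return any(tok in BLOCKLIST for tok in comment.lower().split())


def context_filter(comment: str) -> bool:
    """Hide only if the summed cue weights over the whole comment cross a threshold."""
    score = sum(CUE_WEIGHTS.get(tok, 0.0) for tok in comment.lower().split())
    return score >= THRESHOLD


abusive = "badword hate rant"
mitigating = "user reported a badword quote"
# The word filter hides both comments; the context scorer hides only the abusive one.
```

The design trade-off is visible even in this toy: word matching is transparent and auditable but blind to intent, while context scoring handles quotation and reporting better at the cost of being harder to explain, which feeds directly into the transparency debate below.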

As the debate continues, there is a call for social consensus on the seriousness of algorithm bias, potentially leading to regulation through legislation. Some suggest the voluntary disclosure of training data and the collection of opinions from citizens and academia through public hearings. However, this process is time-consuming and costly.


The controversies surrounding comment-screening AI on internet portals are likely to persist, especially as generative AI like ChatGPT finds its way into daily IT services. At present, AI companies refrain from disclosing training data due to copyright claims.

It remains crucial to strike a balance between effectively filtering hateful content and preserving freedom of speech, while ensuring transparency and accountability in algorithmic decision-making on internet platforms.

*[Translated article: Daum Facing Criticism Over Political Bias in Comment Screening Algorithm – Original Korean article by Kwen Yu-Jin]*

Frequently Asked Questions (FAQs) Related to the Above News

What is Daum and why is it facing backlash?

Daum is the second largest internet portal in Korea. It is facing backlash due to allegations of political bias in its comment screening algorithm.

What are the concerns raised about Daum's algorithm?

Concerns have been raised about the alleged left-leaning inclination of Daum's algorithm, which could have implications for the upcoming general elections. Critics claim that comments targeting liberal figures are selectively deleted or hidden, while those critical of conservative politicians are left untouched.

What specific comments have sparked controversy?

Comments such as 'daeggae' and 'daeggaeMoon' have sparked controversy. 'DaeggaeMoon' is a derogatory term used to belittle supporters of former President Moon Jae-in. Critics argue that these comments were deleted or hidden, indicating a bias in the algorithm.

What is SafeBot and how does it work?

SafeBot is an AI-powered software application used by Daum since December 2020. It is responsible for detecting and blocking comments containing offensive language or vulgar slang. It utilizes AI to identify and filter out inappropriate content.

Who raised concerns about the data labeling process for SafeBot?

Representative Park Sung-joong, a member of the ruling People Power Party and the Science, ICT, Broadcasting, and Communications Committee, expressed concerns about the data labeling process used to train SafeBot's AI model. He speculates that the deletion or hiding of certain comments is not a coincidence and questions the validity of the process carried out by Kakao employees.

How have other internet portals in Korea addressed comment screening?

Other internet portals, such as Naver, have implemented technological solutions to tackle malicious comments. Naver uses AI software called Cleanbot, which analyzes the entire context of comments to filter out hate speech, rather than relying solely on specific words.

What is Daum's response to the allegations of bias?

Daum and its parent company, Kakao, have denied the allegations of political bias. They state that SafeBot does not consider the political context of specific words and that comments like daeggaeMoon are banned due to their violent nature, not their reference to any political group.

What is the ongoing debate surrounding internet portal algorithms?

The debate centers around the objectivity of AI algorithms used by internet portals and the need for transparency. While some argue for the algorithms to be made public to resolve doubts, internet portal companies consider algorithm composition as trade secrets.

Are there any proposed solutions to address algorithm bias?

There are suggestions for social consensus on the seriousness of algorithm bias, potentially leading to regulation through legislation. Some propose the voluntary disclosure of training data and the collection of opinions through public hearings. However, this process is time-consuming and costly.

What is the importance of balancing content filtering and freedom of speech?

It is crucial to strike a balance between effectively filtering hateful content and preserving freedom of speech. Transparency and accountability in algorithmic decision-making on internet platforms are necessary to address concerns about bias.

