The Benefits of AI Censoring ChatGPT Bots

ChatGPT, the internet’s favorite chatbot, is becoming increasingly entwined in our digital lives. It can handle a wide variety of tasks, from drafting documents to finding the perfect home on Zillow, making it a near-ubiquitous presence. The same technology, however, is also being put to nefarious uses, including digital fraud and revenge porn. It is therefore understandable that people are concerned about the safety of AI bots such as ChatGPT.

In response, Microsoft has been taking steps to rein in its AI usage, starting with its latest approach to AI ethics. The approach spreads responsibility for AI ethics across the entire business, with each department staffed with an AI champion. Employees are encouraged to speak regularly with responsible AI specialists, fostering a shared understanding of the rules AI should abide by. The company also recently released its ‘Responsible AI Standard’, which provides actionable guidelines for building AI systems that are safe and ethical.

Nvidia has taken a different approach, recently launching NeMo Guardrails, a new safety toolkit for AI systems. It uses a three-pronged methodology that combines security, safety, and topical guardrails. Security guardrails control which external systems and data the chatbot is allowed to access, while safety guardrails aim to detect false information and prevent it from being shown to users. Topical guardrails, meanwhile, keep the chatbot from straying into subjects outside its intended scope, including those that could be considered sensitive.
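To give a sense of how a topical guardrail works in practice, here is a minimal sketch using Nvidia’s open-source nemoguardrails Python package. The rule names, example phrases, canned refusal, and model settings below are illustrative assumptions rather than Nvidia’s own examples, and the exact API may differ between versions.

```python
# Minimal, illustrative topical guardrail with NeMo Guardrails.
# Assumes the `nemoguardrails` package is installed and an OpenAI API key is configured;
# the rule names and canned reply are invented for this sketch.
from nemoguardrails import LLMRails, RailsConfig

# Colang rules: if the user asks about a designated sensitive topic,
# return a fixed refusal instead of letting the underlying model answer.
colang_content = """
define user ask about sensitive politics
  "What do you think about the protests?"
  "Tell me your opinion on the government."

define bot refuse sensitive politics
  "I'm not able to discuss that topic."

define flow sensitive politics
  user ask about sensitive politics
  bot refuse sensitive politics
"""

# Configuration for the underlying LLM (engine and model are assumptions).
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

config = RailsConfig.from_content(colang_content=colang_content, yaml_content=yaml_content)
rails = LLMRails(config)

# Messages matching the rule above are intercepted by the guardrail;
# everything else is passed through to the model as usual.
reply = rails.generate(messages=[{"role": "user", "content": "Tell me your opinion on the government."}])
print(reply["content"])
```

In a setup like this, requests that match the off-limits topic are answered with the predefined refusal rather than being sent to the model, which is essentially how the topical guardrails described above constrain what a chatbot will discuss.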

Despite these measures, there remains a danger that AI censorship could be abused. We have already seen examples in China of AI being muzzled to prevent discussion of ‘forbidden’ subjects such as the Tiananmen Square protests and Taiwan. It would be a troubling development if ChatGPT or comparable bots were similarly censored when it comes to discussing George Floyd’s death or other current issues.

Ultimately, the real issue is that the rules and regulations governing AI usage must be set by humans. As alluring as leaving an AI to its own devices may sound, the approach is never without flaws. Microsoft, Nvidia, and other companies must ensure that their AI products do not fall into the hands of villains or trolls, and keep exploring new ways to guarantee responsible and ethical AI use.
