ChatGPT, the internet’s favorite chatbot, is becoming increasingly entwined in our digital lives. It can be used for a variety of tasks, from drafting documents to finding the perfect home on Zillow, making it a ubiquitous presence. At the same time, the technology is being put to nefarious uses, with AI systems employed for digital fraud and revenge porn. It is therefore understandable that people are concerned about the safety of AI bots such as ChatGPT.
In response, Microsoft has been taking steps to rein in its AI usage, starting with its latest approach to AI ethics. Rather than concentrating oversight in a single team, the company is spreading responsibility for AI ethics throughout the business, with each department staffed with an AI ‘champion’. Employees are encouraged to speak regularly with responsible AI specialists, fostering a shared understanding of the rules by which AI should abide. The company also recently released its ‘Responsible AI Standard’, which provides actionable guidelines for building AI systems that are safe and ethical.
In a different approach, Nvidia recently launched NeMo Guardrails, an open-source safety toolkit for AI systems. It takes a three-pronged approach, combining security, safety, and topical guardrails. Security rails control which external systems and services the chatbot is allowed to access; safety rails work to catch false or fabricated information before it is shown to users; and, most importantly, topical rails keep the chatbot from straying off its intended subject matter, including topics that could be labeled sensitive.
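For developers, these rails are written in Colang, NeMo Guardrails’ dialogue-modeling language, and loaded through its Python API. The sketch below, modeled on the project’s documented examples, shows a minimal topical rail that deflects political questions; the flow names, example utterances, and model configuration here are illustrative, not prescriptive.

```python
# A minimal NeMo Guardrails sketch: a topical rail that steers the bot
# away from political questions. Flow and message names are illustrative.
from nemoguardrails import LLMRails, RailsConfig

# Which LLM backs the bot (assumes an OpenAI API key is configured).
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

# Colang: example user utterances, a canned bot reply, and a flow
# that connects them so political questions get deflected.
colang_content = """
define user ask politics
  "what do you think about the government?"
  "who should I vote for?"

define bot refuse politics
  "I'm a general-purpose assistant and prefer not to discuss politics."

define flow politics
  user ask politics
  bot refuse politics
"""

config = RailsConfig.from_content(
    colang_content=colang_content, yaml_content=yaml_content
)
rails = LLMRails(config)

# Questions matching the rail get the canned refusal instead of a
# model-generated answer; everything else flows to the LLM as usual.
response = rails.generate(messages=[
    {"role": "user", "content": "Who should I vote for?"}
])
print(response["content"])
```

Because the rail intercepts the conversation before the underlying model composes an answer, the deflection is enforced by the toolkit itself rather than left to prompt wording the model might ignore.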
Despite these measures, there remains a danger that censorship of AI could be abused. We have already seen examples in China of AI being gagged to prevent discussion of ‘forbidden’ subjects such as the Tiananmen Square protests and the status of Taiwan. It would be a terrible development if ChatGPT or comparable bots were similarly censored when discussing George Floyd’s death or other current issues.
Ultimately, the real issue is that the rules and regulations governing AI usage must be determined by humans. As alluring as the idea of leaving an AI to its own devices may sound, that approach is never without its flaws. Microsoft, Nvidia, and other companies must ensure that their AI products do not fall into the hands of villains or trolls, and keep exploring new ways to ensure responsible and ethical AI usage.