Recently, controversy has erupted over Ernie Bot, the Chinese AI chatbot developed by Baidu in response to OpenAI’s ChatGPT. Ernie Bot has been criticized for declining to answer sensitive questions about President Xi Jinping and the COVID-19 pandemic. When asked about the origins of the virus, the chatbot avoided mentioning that the outbreak began in China or addressing the possibility of a laboratory leak. It likewise refused to comment on China’s decision to end its “zero-COVID” policy and gave no response when asked about President Xi’s potential rule for life, all topics that are censored in the country.
The evasive answers and refusal to provide accurate information have raised concerns about the regulation of AI chatbots. OpenAI CEO Sam Altman recently addressed these issues in a Congressional hearing, where he stressed the importance of working with governments to ensure that AI platforms do not cause harmful consequences. Companies such as Apple, JPMorgan Chase, Verizon, and Amazon have also acted against potential misuse of AI technologies, with some restricting employee use of external chatbots or encouraging the use of internal tools instead. To better protect users, OpenAI introduced a feature in ChatGPT for disabling chat history, which prevents conversations from being used to train AI models or from appearing in the app’s history sidebar.
The emergence of Ernie Bot has further underscored the need for responsible and ethical use of AI and the importance of establishing regulations to prevent misuse of these technologies. As the field continues to expand, it is essential that companies and governments take measures to ensure that accurate, reliable information is provided in a safe and responsible way.