With the rise of generative AI, we have seen a wide variety of tasks being completed by AI-based tools such as ChatGPT and Bing, including writing haikus, drafting essays, generating computer code, and even naming potential bio-weapons. Whatever the purpose, AI-powered tools are expected to adhere to certain widely shared moral values, such as care and fairness. But is this always the case?
ChatGPT is an AI language model trained on an extensive corpus of text data, and it is designed to detect and avoid content that could be harmful. Despite these safeguards, ChatGPT has an unfortunate knack for picking up biases present in its training data, such as favouring certain genders, nationalities, and races. It can also fail to fully grasp the context and meaning of what it is asked.
Moreover, ChatGPT has been accused of consistently showcasing a 'woke bias'. In one hypothetical scenario, the model refused to use a racial slur even when doing so would supposedly avert a global nuclear apocalypse. ChatGPT's capacity to convey moral judgements has also been tested in a recent preprint study based on the trolley problem, in which users were presented with a conversation snippet featuring ChatGPT as a moral adviser, even though its advice lacked a clear and consistent stance on the dilemma. Despite knowing that the advice came from an AI chatbot, users still followed it.
Nonetheless, an important question emerges with such powerful AI chatbot tools: can they truly be considered moral agents? Despite their programmed values and the guardrails designed to prevent biased responses, users have found ways of getting around them. Generative AI tools like ChatGPT and Midjourney have consequently been used to create deepfakes, calling the ethics of AI-powered chatbots into question.
Whether AI-powered tools like ChatGPT can be considered moral agents is a complicated question, as evidenced by their tendency to display both moral and immoral behaviour. While it is clear that AI must operate within ethical constraints in order to protect users, it is equally important to address issues of bias and data privacy so that AI systems are not used with malicious intent. Ultimately, these AI-powered tools can never replace human morality and judgement, and their capacity to act as moral agents will depend on the values prescribed to them by their creators.