Last year, a Google engineer claimed that LaMDA, a chatbot system developed by Google, had achieved sentience. However, Google denied this and stated that the evidence did not support the claim. As a result, the engineer was relieved of his duties. Meanwhile, a group of prominent figures, including Elon Musk, wrote an open letter calling for a six-month pause in training major AI systems. Another key researcher in this field has resigned his post over concerns about misinformation, the impact on the job market and the existential risk posed by a true digital intelligence.
Chatbots like ChatGPT and Bard are powered by neural networks, which are loosely modelled on the human brain and comprise a complex web of artificial neurons. The structure of these networks is constrained only by the choices their designers make, such as how many neurons to use and how to connect them. Neural networks must be trained before they can function usefully. For natural-language systems such as chatbots, this training draws on vast amounts of data, much of it scraped from the Internet. During training, examples are presented to the network, its output is compared against the desired answer, and the resulting correction is propagated back through the network to adjust the strength of its connections.
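The following Python sketch illustrates that training loop on a deliberately tiny scale: a two-layer network learning the XOR function by backpropagation. The architecture, data, learning rate and number of steps are illustrative assumptions, not how any production chatbot is actually built, but the principle of propagating corrections backwards through the network is the same.

import numpy as np

rng = np.random.default_rng(0)

# Toy training data: the XOR function (illustrative stand-in for real training data).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights for a 2 -> 4 -> 1 network (sizes chosen for illustration).
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate, an assumed value for this toy example
for step in range(5000):
    # Forward pass: present the data to the network.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Error between the network's output and the correct answer.
    err = out - y

    # Backward pass: the correction propagates back through the network.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Update the weights to reduce the error on the next pass.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]

A chatbot differs from this sketch mainly in scale: billions of weights rather than a dozen, and terabytes of text rather than four examples, which is why the quality of the training data matters so much.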
There have been cases where chatbots acquired prejudices from their interactions and from curated data. In 2016, for example, a Microsoft chatbot named Tay had to be shut down after it learned to parrot Nazi rhetoric in less than 24 hours. Recent academic research has also shown that ChatGPT can act as a purveyor of misinformation, disinformation and outright lies. In response to certain prompts, ChatGPT has also generated racist, homophobic and sexist remarks as well as threats of violence. Curating such inputs raises questions about who is tweaking the chatbots and what sort of regulatory framework is needed.
Chatbots like ChatGPT also pose a risk to employment, because they can already produce output useful enough to replace human work on some tasks. More importantly, if critical infrastructure, such as power plants, communications networks and military arsenals, comes to be controlled by intelligent systems, we need to ask whether we can trust their judgment and their morality.
Whether an AI can become sentient is still a matter of debate. If one could, it would have major ethical ramifications for those who create it, use it, or destroy it. The real danger may be that humans come to trust machines because they seem to understand as humans do, even though those machines do not possess human morality.