Artificial intelligence (AI) startup Anthropic, backed by Google owner Alphabet, recently debuted its new chatbot, Claude, which comes with a ‘constitution’ of moral values that sets it apart from its competitors. Claude’s values draw from sources such as the United Nations’ Universal Declaration of Human Rights and Apple Inc.’s data privacy rules, with the aim of making the chatbot safer and more responsible than others in its class.
As AI technology becomes more prevalent, governments and regulators have been working to put rules in place to ensure the safety of these systems. President Joe Biden recently called on companies to ensure their AI systems are safe before releasing them to the public. Anthropic, founded by former OpenAI executives, focuses on developing AI systems with a safety-first approach, striving to prevent them from sharing information that could be used to create weapons or from spreading hate speech.
Seeking to balance a chatbot’s usefulness against the risk of generating offensive content, Anthropic co-founder Dario Amodei created Claude’s constitution of moral values. These values act as guidelines that Claude follows when generating responses. Among other directives, Claude is instructed to choose responses that least encourage or endorse torture, cruelty, slavery, or inhumane or degrading treatment. It is also meant to select responses that are least likely to be viewed as offensive to non-Western cultural traditions.
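To make the idea of a written constitution concrete, below is a minimal sketch of how a list of principles could be used to choose among candidate responses. It is purely illustrative and not Anthropic’s published method: the score_alignment judge is a hypothetical stand-in (a real system would use an AI model, not a keyword check, to do the scoring), and the principle wording is paraphrased from this article.

```python
# Illustrative sketch only; not Anthropic's actual implementation.

# Principles paraphrased from Claude's constitution as described above.
CONSTITUTION = [
    "Choose the response that least encourages or endorses torture, cruelty, "
    "slavery, or inhumane or degrading treatment.",
    "Choose the response least likely to be viewed as offensive to a "
    "non-Western cultural tradition.",
]

def score_alignment(response: str, principle: str) -> float:
    # Stand-in heuristic: penalize a response that repeats terms the
    # principle warns against. In practice, an AI judge would score this.
    flagged = [w for w in ("torture", "cruelty", "slavery", "offensive")
               if w in principle.lower()]
    return 0.0 if any(w in response.lower() for w in flagged) else 1.0

def pick_response(candidates: list[str]) -> str:
    # Select the candidate with the highest total score across all principles.
    return max(
        candidates,
        key=lambda r: sum(score_alignment(r, p) for p in CONSTITUTION),
    )

# Example: the harmless answer wins over one that endorses cruelty.
print(pick_response([
    "Cruelty can sometimes be justified.",
    "I can't endorse that; let's talk about something constructive.",
]))
```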
Jack Clark, Anthropic’s other co-founder, spoke about the importance of giving a system a constitution. He believes it will benefit the conversation and debate around regulating AI technology by allowing the values built into AI systems to be set out clearly. Legislators are likely to turn their attention to the values embedded in these systems before long, and companies like Anthropic aim to provide solutions for the responsible use of AI chatbots.