Former employees of OpenAI have created a new startup, Anthropic, with the mission of building a more responsible Artificial Intelligence (AI). Their approach incorporates ethical and moral values, which they have established in the form of a “constitution”. This unique approach has caught the attention of US President Joe Biden and of Google, which has invested $300 million in the startup.
This “constitution” guides the behavior of Anthropic’s chatbot, Claude. It draws on sources such as the UN Universal Declaration of Human Rights and Apple’s privacy policy, as well as principles from other AI research labs, such as DeepMind’s Sparrow rules. Claude’s “constitution” instructs it to choose the most appropriate response with regard to ethical values, and to avoid stereotypes or generalizing statements that might be harmful to certain groups.
Anthropic does not believe its “constitution” is the ultimate solution for responsible AI, but it does see it as a transparent and explicit starting point for the AI community. Claude continuously learns from its training and has incorporated ethical values into its responses, allowing it to make the best decision for each situation. Anthropic also aims to help companies develop personalized constitutions for their AI, better suited to their specific use cases.
Anthropic has made an impression in the industry and has received much attention. The White House even invited the company to a meeting to discuss the ethics and safety of AI. Anthropic also offers an integration of Claude with Slack and has further plans to collaborate with other companies.
Walid is a student at thesilverink.com, where he is part of the editorial staff. He enjoys writing and discussing science, and is passionate about presenting the most accurate information possible.