As technology advances, questions arise about who is responsible for safety and which values should count as “right” and “wrong”. Daniela Amodei, co-founder of Anthropic, an AI lab rivaling OpenAI, has made it her mission to prove that trust and safety is not a bug but a feature. The bet appears to be paying off: her company has already raised over a billion dollars in funding and reached a valuation of $4.1 billion.
In pursuit of a future where trust and safety is central to artificial intelligence, Amodei and her Anthropic colleagues are using a “triple H” framework (helpful, honest, and harmless) in their research. Through this method, they are building “constitutional AI”: models trained on a set of human-provided principles, in a bid to keep them aligned with human values. The team has already released Claude, a “more steerable” rival to OpenAI’s ChatGPT that is used by companies such as Notion and Quora.
Amodei’s career in tech began in 2013, when she joined the payments company Stripe, then an under-the-radar startup with just forty employees. She was drawn by the chance to combine business, engineering, and operations, and by the opportunity to deepen her technical skills as she transitioned away from politics. She moved to OpenAI in 2018.
In 2020, Amodei and six of her OpenAI colleagues left the company to start Anthropic, a move that was mired in controversy. It had been reported that her brother and fellow Anthropic co-founder, Dario Amodei, then OpenAI’s lead safety researcher, had concerns about the company’s fast-paced approach to releasing products. Daniela Amodei, for her part, has emphasized the importance of keeping safety at the “center and core” of every step of Anthropic’s research, from giving feedback on model outputs to building constitutional AI.
Anthropic itself has a unique interdisciplinary culture. Its more than one hundred employees hail from diverse backgrounds, from physics to computational biology, and even the founders came to the field by nontraditional routes, such as tech journalism. The company is determined to lead the AI field, and that determination is evident in how it tackles the challenge of building artificial intelligence safely. According to Amodei, prioritizing trust and safety from the beginning will keep Anthropic nimble and prepared for whatever issues arise, even as development moves quickly.