Anthropic, an AI startup, has released Claude 2.0, the latest major version of its large language model. The company says the new model shows improvements in coding, math, and reasoning, while producing fewer harmful answers than its predecessor. To make Claude 2.0 more widely available, Anthropic has opened a beta-test website, claude.ai, to general users in the U.S. and U.K., and is offering businesses access to the model through an API at the same price as the previous version.
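For readers curious what API access looks like in practice, here is a minimal sketch of assembling a request body for Anthropic's completions-style API as it worked around Claude 2's launch. The field names, the "claude-2" model identifier, and the Human/Assistant turn markers are assumptions drawn from Anthropic's public SDK documentation of that era, not from this article; no network call is made here.

```python
import json

# Claude's expected conversation turn markers (per Anthropic's SDK docs;
# an assumption for illustration, not described in this article).
HUMAN_PROMPT = "\n\nHuman:"
AI_PROMPT = "\n\nAssistant:"

def build_completion_payload(question: str, max_tokens: int = 300) -> dict:
    """Assemble the JSON body for a single-turn completion request."""
    return {
        "model": "claude-2",
        "max_tokens_to_sample": max_tokens,
        "prompt": f"{HUMAN_PROMPT} {question}{AI_PROMPT}",
    }

payload = build_completion_payload("Summarize this article in one sentence.")
print(json.dumps(payload, indent=2))
```

In a real integration this payload would be sent, with an API key, to Anthropic's hosted endpoint or via their official client library.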
According to Dario Amodei, Anthropic's CEO, Claude 2.0 represents evolutionary progress rather than a giant leap. In the company's tests, it outperformed its predecessor on several measures, including Python coding, middle-school math quizzes, and the bar exam. Claude 2.0 can also analyze prompts up to twice as long as the previous version could handle.
The launch of Claude 2.0 comes shortly after Anthropic disclosed $450 million in new funding led by Spark Capital, and the company says thousands of businesses are already using Claude's API. Anthropic is also working with larger customers such as Zoom, Notion, and Midjourney to build customized models.
While the release of Claude 2.0 may look like a reversal for Anthropic, which split from OpenAI in part over differences on commercialization, Amodei insists that commercializing its models was always part of the plan. Opening Claude 2.0 to a wider audience, including business users, was driven by the need for a broader testing ground in which to evaluate the model's potential risks.
Anthropic’s models are trained with a framework called Constitutional AI, in which the model critiques and revises its own outputs against a set of written principles, reducing the need for human oversight. Some human feedback and oversight was still incorporated into Claude 2.0's development, and Anthropic claims the new model is twice as effective as its predecessor at limiting harmful outputs.
Amodei acknowledges that no model can be perfect and that flaws and harmful outputs will always be possible. Rather than advocating a freeze on model releases, he proposes safety checks and regulation to manage the risks posed by AI models while Anthropic continues to release new ones.
Overall, the release of Claude 2.0 reflects Anthropic’s stated commitment to advancing AI capabilities while prioritizing safety and performance for users.