Anthropic, a San Francisco-based AI company backed by Alphabet Inc., recently released a powerful new language model named Claude, which is poised to challenge Microsoft-backed OpenAI’s ChatGPT. Built to generate human-like text, Claude can be used for tasks such as writing code and editing legal contracts.
OpenAI’s ChatGPT has drawn enormous attention in this corner of the field, but Anthropic has set out to differentiate its product by focusing on safety. At the core of the Claude model is a commitment to building AI systems that are less likely to generate dangerous content, such as instructions for making weapons or hacking computer systems.
To that end, Anthropic trained Claude with a set of guiding principles and designed it to explain its objections when it refuses a request. This makes it harder for users to rely on prompt engineering – talking their way around restrictions – to generate illegal or harmful content.
This approach has been well-received. Robin AI, a London-based startup, was given early access to test Claude’s capabilities, and its CEO, Richard Robinson, noted that the real challenge was loosening Claude’s restrictions enough for permissible uses.
At a time when dangerous AI-generated content is a major concern, Anthropic offers a promising alternative in Claude. With its ability to parse dense legal language and explain its objections, Claude is a credible competitor to OpenAI’s ChatGPT in the artificial intelligence space.
Anthropic was co-founded by siblings Dario and Daniela Amodei, both former OpenAI executives. The company’s mission is to create AI systems that are less likely to generate offensive or unlawful content.
Richard Robinson is the CEO of Robin AI, a London-based startup that uses AI to analyze legal contracts. Granted early access to Claude, the company found the model capable of understanding dense legal language; the main difficulty, Robinson said, was loosening its restrictions enough for genuinely acceptable uses.
Overall, the introduction of Claude is a notable step toward safer AI and toward preventing the creation of dangerous and unlawful content. With its ability to explain its objections and its focus on safety, Anthropic’s new model is a formidable entrant in the language model space.