Former OpenAI engineers have developed a new AI chatbot called Claude 2, which promises to be helpful, harmless, and honest. Released by San Francisco-based startup Anthropic, Claude 2 aims to deliver a safer and more accurate AI experience. Anthropic, valued at $4 billion, positions itself as a builder of AI models that avoid toxic and discriminatory responses and steer clear of illegal and unethical activities.
Claude 2 shows significant improvements over its predecessors. It offers stronger coding, math, and reasoning capabilities, scoring higher than the previous version on the multiple-choice section of a 2021 multistate bar practice exam. Anthropic has also implemented a self-critical training process that lets the bot revise potentially harmful responses and continually improve its future outputs.
Crucially, humans still evaluate the bot's responses before deployment. Anthropic follows a set of rules and principles known as Constitutional AI to ensure its models avoid discriminatory behavior and illegal activities.
Anthropic was founded in 2021 by former OpenAI employees Daniela Amodei and Dario Amodei, both of whom held senior positions at OpenAI before leaving to pursue their own research and development. Anthropic's Claude, launched in March 2023, distinguishes itself from OpenAI's ChatGPT by offering more up-to-date knowledge and the ability to write longer documents. Claude 2 also lets users upload files in various formats, a feature not available in the free version of ChatGPT.
In a test comparing Claude and ChatGPT, Claude adhered more closely to its principles. When asked for an opinion on the book Harry Potter and the Chamber of Secrets, Claude stuck to factual information and described the plot, while ChatGPT offered an analysis, leaving more room for error. Claude's responses felt friendly, and when approached with subjective or controversial subjects, the bot emphasized its purpose of being helpful, harmless, and honest.
Anthropic's clients, which include major companies such as Slack and Quora, have praised Claude for conveying a human touch and adopting desired tones and personalities. The Claude website's user-friendly interface, with its warm color scheme and bubblier response boxes, adds to the bot's positive reception.
Looking ahead, Anthropic plans to make Claude available in more countries in the coming months. By positioning itself as a provider of safer, more reliable AI, Anthropic aims to address the growing wariness surrounding the development of AI technology. With human oversight and the application of Constitutional AI principles, Claude 2 represents an important step toward responsible AI development.