The CEOs of OpenAI, Google DeepMind, and Anthropic are warning of the extinction risk posed by artificial intelligence; the Center for AI Safety's statement ranks that risk alongside nuclear war and pandemics.
Join Prime Minister Rishi Sunak and leaders from OpenAI, Google DeepMind, and Anthropic as they explore ways to make the most of artificial intelligence while managing the risks it brings. Learn about self-regulation, collaboration, and the UK's responsibility to protect its citizens.
Anthropic, a Google-backed artificial intelligence (AI) startup, recently released Claude, a chatbot built around a "constitution" of moral values that sets it apart from its competitors. Aimed at making chatbots safer and more ethical, Claude draws its values from sources such as the UN's Universal Declaration of Human Rights and Apple's data privacy rules. With the world becoming more aware of the potential dangers of AI, Anthropic aims to offer a safety-first approach that helps prevent misuse of AI systems.
Join US Vice President Kamala Harris and top administration officials as they meet with tech giants to discuss the responsible growth of AI and its implications for the future. Anthropic, Google, Microsoft, and OpenAI have been invited to the White House event to discuss potential risks. Explore the implications with the Biden administration and understand the impact of generative AI on the ecosystem.
Researchers have successfully applied GPT-4 to complex chemistry tasks, showing that the model can grasp principles and facts and plan experiments with precision. Dario Amodei, now CEO of Anthropic after leading research at OpenAI, is among those exploring the potential of large language models as powerful tools in the AI toolkit. GPT-4 can generate natural-sounding text and reason over image inputs, and its proponents see it as a step toward human-level machine intelligence and applications such as personalized medicine.
Explore the evolution of tech policy from Obama's optimism to Harris's vision at the Democratic National Convention. What's next for Democrats in tech?