Anthropic, Google, Microsoft, and OpenAI have joined forces to establish the Frontier Model Forum, a new industry body dedicated to the safe and responsible development of frontier AI models. Announced in late July 2023, the Forum will draw on the technical and operational expertise of its member companies to advance technical evaluations, benchmarks, and industry standards, and to develop a public library of solutions supporting best practices.
Frontier models, as the Forum defines them, are large-scale machine-learning models that exceed the capabilities of the most advanced existing models and can perform a wide variety of tasks. Membership is open to organizations that meet the Forum's criteria, and the founders welcome their participation in this effort.
Recognizing the need for appropriate guardrails to mitigate the risks of AI, both governments and industry have already taken action: the US and UK governments, the European Union, the OECD, the G7, and others have made significant contributions to shaping AI safety standards. Further work remains, however, particularly on safety standards and evaluations for frontier AI models.
The Frontier Model Forum aims to facilitate cross-organizational discussion and action on AI safety and responsibility. Over the coming year, it will concentrate on three areas in support of the safe and responsible development of frontier AI models: advancing technical evaluations and benchmarks, developing a public library of solutions, and fostering collaboration and best practices across the industry.
Leaders from the participating companies, including Kent Walker from Google & Alphabet, Brad Smith from Microsoft, Anna Makanju from OpenAI, and Dario Amodei from Anthropic, have all expressed enthusiasm for this collaborative initiative. They share a common belief in the necessity of responsible AI innovation that benefits humanity as a whole.
To carry out its mission effectively, the Frontier Model Forum plans to establish an Advisory Board with members from diverse backgrounds and perspectives to help guide its strategy and priorities. The founding companies will also define key institutional arrangements, including a charter, a governance structure, and funding mechanisms, and they intend to consult civil society and governments so that the Forum's design supports meaningful collaboration.
The Frontier Model Forum is eager to support existing initiatives by governments and multilateral organizations, such as the G7 Hiroshima process, the OECD’s work on AI risks, standards, and social impact, and the US-EU Trade and Technology Council. Additionally, the Forum aims to collaborate with other valuable multi-stakeholder efforts in the AI community, including the Partnership on AI and MLCommons.
As AI becomes increasingly prevalent, responsible development and deployment are paramount. The formation of the Frontier Model Forum is a significant step towards ensuring that AI technologies are used safely, securely, and ethically. By fostering collaboration, advancing technical evaluations, and establishing industry benchmarks, the Forum aims to address the challenges posed by frontier AI models and to spread their benefits as broadly as possible across society.
Through this united front, Anthropic, Google, Microsoft, and OpenAI are laying the foundations for a safer and more responsible AI landscape that can positively transform the world.