National AI Safety Institute Consortium Takes Shape
In accordance with President Joe Biden’s 2023 executive order on artificial intelligence, the federal government has taken significant steps toward the creation of an AI safety consortium. The Biden-Harris administration recently announced the official launch of the AI Safety Institute Consortium (AISIC), dedicated to ensuring the safety of AI systems.
The establishment of the consortium is an outcome of President Biden’s executive order, which directed the National Institute of Standards and Technology (NIST) to form the U.S. AI Safety Institute (USAISI). The USAISI is tasked with developing rigorous standards to test AI models and ensure their safety for public use. As part of its work, the institute has formed the AI Safety Institute Consortium, open to participants from any organization interested in AI safety. In November 2023, NIST began inviting organizations to join the consortium, setting the stage for its official creation.
The inaugural cohort of the AI Safety Institute Consortium comprises over 200 stakeholders representing various sectors. From the private sector, notable companies such as Apple, Meta, and Microsoft have become members. Higher education institutions including Carnegie Mellon University, Stanford Institute for Human-Centered AI, and Ohio State University have also joined the consortium. Additionally, public sector entities like the state of Kansas Office of Information Technology Services and the state of California Department of Technology are part of this initiative.
Secretary of Commerce Gina Raimondo highlighted the consortium’s purpose, stating, “President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do.”
Comprising organizations closely involved in understanding and advancing AI’s societal transformation, the consortium is the largest gathering of test and evaluation teams dedicated to AI safety to date. Bruce Reed, White House deputy chief of staff, commended the establishment of the AI Safety Institute Consortium, noting its critical role in seizing AI’s potential while managing its inherent risks.
Recently, Secretary Raimondo revealed the key members of the USAISI leadership team. Elizabeth Kelly has been appointed as the inaugural director of the institute, responsible for executive leadership, management, and coordination with other AI policy initiatives across the government. Elham Tabassi will serve as the chief technology officer, leading the institute’s technical programs, research and evaluation of AI models, and the development of guidance.
With the creation of the AI Safety Institute Consortium, the United States is taking significant strides in ensuring the safe development and deployment of AI systems. By bringing together stakeholders from diverse sectors, the consortium aims to establish robust safety standards to protect both the public and the innovation ecosystem. This collaborative effort will play a crucial role in harnessing the promise of AI while effectively managing its potential risks for the betterment of society.