NIST Invites Participation in Consortium to Enhance AI Safety
The National Institute of Standards and Technology (NIST) is seeking participants for a consortium aimed at advancing the evaluation of artificial intelligence (AI) systems. The consortium is a core element of the newly announced U.S. AI Safety Institute, which NIST leads and which was unveiled at the UK AI Safety Summit. It also responds to the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, under which NIST is tasked with developing a companion resource to its AI Risk Management Framework.
The consortium will serve as a forum for informed dialogue and for sharing insights and information. Participating organizations will then be able to enter into a cooperative research and development agreement with NIST to support the safe and trustworthy deployment of AI. The U.S. AI Safety Institute will work with other government agencies to assess the capabilities, limitations, risks, and impacts of AI, and will coordinate with them on building testbeds. Internationally, the institute will partner with peer organizations to exchange best practices, evaluate capabilities, and provide red-teaming guidance.
Organizations have until December 2, 2023, to submit a letter of interest in joining the consortium. The goal is to foster collaboration and develop innovative methods for evaluating AI systems that help ensure their safety and reliability.
NIST’s initiative is a crucial step in addressing the challenges posed by AI, and it fits within the broader push to make AI technologies safer and more secure. The safe development and use of AI matter not only to the United States but to the global AI community.
By establishing the consortium, NIST aims to bring together diverse stakeholders to exchange knowledge and expertise. This collaborative approach will contribute to a comprehensive framework covering critical issues such as auditing AI capabilities, authenticating human-created content, watermarking AI-generated content, and building test environments for AI systems.
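Most of these topic areas are matters of policy and process, but watermarking has a concrete statistical flavor worth a brief illustration. The sketch below is purely hypothetical and not a scheme specified or endorsed by NIST: it mimics the "green list" style of text watermarking, in which a generator is biased toward a pseudorandomly chosen subset of tokens and a detector later checks whether that subset is over-represented. All names and parameters here (is_green, green_rate, GREEN_FRACTION) are assumptions made for illustration.

```python
import hashlib

# Hypothetical detector sketch for a "green list" text watermark.
# Assumption: the generator favored token pairs whose hash falls in the
# green region; none of this reflects an actual NIST-specified scheme.

GREEN_FRACTION = 0.5  # assumed fraction of the vocabulary favored at generation time

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically assign each (previous token, token) pair to the green list."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def green_rate(tokens: list[str]) -> float:
    """Fraction of tokens that land on the green list given their predecessor."""
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

# Unwatermarked text should hover near the base rate (~0.5 here); watermarked
# text should score noticeably higher. A real detector would run a statistical
# test (e.g., a z-test) on this count rather than eyeball a single ratio.
sample = "the quick brown fox jumps over the lazy dog".split()
print(f"green rate: {green_rate(sample):.2f}")
```

In practice, detection of this kind works only on sufficiently long passages and degrades under paraphrasing, which is exactly the sort of robustness question a consortium-built evaluation framework would need to address.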
The participation of agencies and organizations from many sectors should produce a robust evaluation process that helps ensure AI technologies are safe, secure, and trustworthy. By evaluating AI capabilities and risks and coordinating on testbeds, the U.S. AI Safety Institute will foster a holistic approach to AI development.
Moreover, the U.S. AI Safety Institute will collaborate with organizations worldwide, sharing best practices and jointly evaluating AI capabilities. By providing red-teaming guidance, the institute will actively contribute to the responsible and ethical use of AI technologies across borders.
In conclusion, NIST’s call for a consortium to enhance AI safety marks a significant step toward comprehensive frameworks for evaluating AI systems. By fostering collaboration and information sharing, the initiative aims to tackle the challenges that AI technologies pose. As organizations worldwide join forces on innovative evaluation methods, the safe and trustworthy use of AI will be promoted, to the benefit of society as a whole.