US AI Safety Institute Consortium established to ensure safe AI innovation
The US AI Safety Institute Consortium (AISIC) has been established by the Department of Commerce’s National Institute of Standards and Technology. With over 200 stakeholders, including AI creators and users, academic institutions, government and industry researchers, and civil society organizations, the consortium aims to accelerate the development and deployment of safe and reliable artificial intelligence.
In collaboration with the US government, AISIC plans to help develop the measurements and standards needed to maintain America's competitiveness while promoting responsible AI development. The consortium will focus on areas such as red teaming, risk management, safety and security, and the watermarking of synthetic content.
One of the notable members of the consortium is Sonar, a clean code innovator. Sonar's AI coding assistant helps developers write code that is consistent, purposeful, adaptable, and responsible. Sonar's CEO emphasizes that AI-generated code, like code written by humans, can contain bugs and errors and raise readability, maintainability, and security concerns; thorough code reviews must therefore be conducted before such code is put into production. Sonar believes that AISIC plays a crucial role in building scalable models for the secure development and use of generative AI.
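To make the code-review point concrete, here is a minimal, hypothetical sketch of the kind of defect a review of AI-generated code might catch. The function names and scenario are illustrative only and are not drawn from any real tool or from Sonar's products.

```python
def average_unreviewed(values):
    """As an AI assistant might first draft it.

    Bug: raises ZeroDivisionError when `values` is empty,
    an edge case a code review should flag.
    """
    return sum(values) / len(values)


def average_reviewed(values):
    """After review: the empty-input edge case is handled explicitly."""
    if not values:
        return 0.0
    return sum(values) / len(values)
```

Static analysis and human review together catch exactly this class of issue: code that is plausible at a glance but fails on an unexercised input.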
With the establishment of AISIC, stakeholders from across sectors come together around a shared commitment to advancing AI while prioritizing safety, reliability, and responsibility. By uniting researchers, academics, industry leaders, civil society organizations, and government agencies, the consortium creates a platform for diverse expertise and perspectives to address the challenges posed by emerging AI technologies.
The establishment of AISIC is a significant step toward the responsible development, deployment, and use of AI technologies. It provides a framework for collaboration, standardization, and knowledge-sharing in AI safety, and by leveraging its members' expertise it aims to produce guidelines that let organizations adopt AI with confidence and trust. As AI advances at a rapid pace, working together through AISIC allows stakeholders to address its challenges and opportunities collectively and to build a safe, responsible ecosystem for artificial intelligence.