Cross-government cybersecurity best practices announced for safer AI development
The U.K.’s National Cyber Security Centre (NCSC), in collaboration with cyber agencies from numerous other governments and with AI vendors, has released the Guidelines for Secure AI System Development. These guidelines, which focus on the secure design, development, deployment, and operation of AI systems, aim to enhance cybersecurity practices in the development of artificial intelligence.
The guidelines cover various aspects of AI system development, including threat modeling, supply chain security, protection of models and the infrastructure that hosts them, and regular updates to AI models. By emphasizing the importance of cybersecurity throughout the AI lifecycle, the guidelines seek to ensure the safety, security, and trustworthiness of AI systems.
U.S. Secretary of Homeland Security Alejandro Mayorkas called the agreement a crucial milestone at the current stage of artificial intelligence development, noting that cybersecurity plays a pivotal role in building secure and reliable AI systems. He commended the guidelines as a useful resource for organizations involved in AI development.
AI technology offers unprecedented power in data mining and natural language understanding, but it also introduces new risks and attack vectors. Ron Reiter, co-founder and CTO of Sentra, stresses the importance of adhering to security best practices, as cutting corners during AI model development can expose organizations to serious consequences.
The newly released guidelines build on previous government efforts and initiatives aimed at enhancing the security of AI systems globally, including the CISA Roadmap for Artificial Intelligence, President Biden’s Executive Order from October, Singapore’s AI governance testing framework and software toolkit, AI Verify, and Europe’s Multilayer Framework for Good Cybersecurity Practices for AI. The document also links to additional AI security resources released over the past year.
Supply chain security is a critical focus within the guidelines, emphasizing the need to understand the origin and components of AI models, including training data and construction tools. Ensuring that libraries have safeguards against loading untrusted models is one of the suggested measures. The guidelines also highlight the importance of thorough data checks and sanitization, particularly when incorporating user feedback or continuous learning data into corporate models. Additionally, developers are encouraged to adopt a holistic approach to assess threats and anticipate unexpected user behaviors as part of risk management processes and tooling.
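As a rough illustration of that kind of supply-chain safeguard, the sketch below pins model artifacts to known SHA-256 digests before loading. The manifest file, artifact names, and paths are hypothetical assumptions, and production pipelines would more likely rely on signed artifacts or registry-level provenance checks.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical manifest mapping trusted artifact names to SHA-256 digests,
# e.g. {"classifier.safetensors": "3a7bd3..."} — not part of the guidelines themselves.
TRUSTED_MANIFEST = Path("trusted_models.json")

def verify_model_artifact(artifact_path: Path) -> bool:
    """Return True only if the artifact's SHA-256 digest matches its manifest entry."""
    manifest = json.loads(TRUSTED_MANIFEST.read_text())
    expected = manifest.get(artifact_path.name)
    if expected is None:
        # Unknown artifacts are rejected rather than trusted by default.
        return False
    digest = hashlib.sha256(artifact_path.read_bytes()).hexdigest()
    return digest == expected

model_file = Path("models/classifier.safetensors")
if not verify_model_artifact(model_file):
    raise RuntimeError(f"Refusing to load unverified model artifact: {model_file}")
# Only after verification would the model be loaded, ideally in a format
# (such as safetensors) that cannot execute arbitrary code on load.
```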
Effective access control mechanisms for all AI components, such as training and processing data pipelines, are emphasized in the guidelines. Maintaining a continuous risk-based approach is recommended, recognizing the potential for attackers to manipulate models, data, or prompts during training or after deployment, compromising the integrity and trustworthiness of the system’s output.
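To make the access-control point concrete, here is a minimal, deny-by-default sketch in Python. The roles, actions, and permission map are illustrative assumptions rather than anything prescribed by the guidelines.

```python
from dataclasses import dataclass

# Hypothetical role-to-permission map for AI pipeline components; real deployments
# would normally delegate this to an IAM or policy service rather than hard-code it.
PIPELINE_PERMISSIONS = {
    "data-engineer": {"read:training-data", "write:training-data"},
    "ml-engineer": {"read:training-data", "read:model-artifacts", "write:model-artifacts"},
    "analyst": {"read:model-artifacts"},
}

@dataclass
class Principal:
    name: str
    role: str

def authorize(principal: Principal, action: str) -> None:
    """Deny by default: raise unless the principal's role explicitly grants the action."""
    granted = PIPELINE_PERMISSIONS.get(principal.role, set())
    if action not in granted:
        raise PermissionError(f"{principal.name} ({principal.role}) may not perform '{action}'")

# An ML engineer publishing a model artifact is allowed...
authorize(Principal("alice", "ml-engineer"), "write:model-artifacts")
# ...while an analyst attempting to modify training data is rejected.
try:
    authorize(Principal("bob", "analyst"), "write:training-data")
except PermissionError as err:
    print(err)
```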
Despite its concise 20-page format, the document serves as a valuable reminder of fundamental principles for safe development of generative AI models and techniques. As enterprise technology managers navigate the evolving landscape of AI development, the guidelines provide a foundation for constructing customized security playbooks and educating developers new to AI tools and methodologies.
By adhering to the guidelines outlined in this document, organizations can take significant strides toward ensuring the cybersecurity of AI systems, mitigating potential risks, and building AI technologies that can be trusted. As artificial intelligence continues to shape various industries, safeguarding its development becomes paramount to secure and reliable digital transformation.