International Agreement on Securing AI: Guidelines for Safe Development

Few innovations throughout history have progressed as rapidly as generative artificial intelligence (AI). The technology has advanced to the point where a growing schism has opened in the field over whether AI's developing capabilities, should they ever reach full human-level cognition, can be constrained. But lost in the debate around both AI and artificial general intelligence (AGI) is the simple fact that, underneath all the hype and apocalyptic hysteria, the innovation remains no more than a piece of software.

And just like with other software tools, enterprises looking to integrate it into their workflows, and companies looking to develop and ship the latest, greatest version, need to be aware of best practices as they relate to anti-fraud and cyber protection.

This, as the U.S., U.K. and over a dozen other nations on Sunday (Nov. 26) released a detailed international agreement on how to keep AI safe from rogue actors and hackers, pushing for companies developing AI products and systems to ensure they are secure by design.

Co-sealed by 23 domestic and international cybersecurity organizations, this publication marks a significant step in addressing the intersection of artificial intelligence (AI), cybersecurity, and critical infrastructure, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) said in a statement.

Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria and Singapore are among the other signatories to the non-binding "Guidelines for Secure AI System Development" agreement.

Outside of Beijing, very few national governments have put in place regulations or laws dedicated to addressing AI and the risks around it.

The guidelines agreed to by the U.S. and other nations do not seek to impact areas such as copyright protection around AI system training data, or even how that data is collected, and they also avoid tackling issues like which uses of AI are appropriate.

Rather, the agreement seeks to treat AI the same as any other software tool, and create a shared set of values, tactics and practices to help creators and distributors use this powerful technology responsibly as it evolves.

The guidelines are broken down into four key areas within the AI system development life cycle: secure design, secure development, secure deployment, and secure operation and maintenance.

They establish a framework for monitoring AI systems and keeping them safe from hackers, along with other best practices around data protection and vetting external vendors, so that companies designing and using AI can develop and deploy it in a way that keeps customers and the wider public safe from misuse.
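The guidelines themselves are technology-neutral and prescribe no particular tooling, but a rough sketch can make the monitoring and data-protection ideas concrete. The snippet below is purely illustrative: the function names, redaction patterns and placeholder model call are assumptions made for the example, not anything drawn from the agreement.

```python
# Illustrative sketch only. All names (call_model, redact, monitored_completion)
# are hypothetical placeholders, not taken from the guidelines.
import hashlib
import logging
import re

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{16,}"),  # API-key-like strings
    re.compile(r"\b\d{13,19}\b"),        # card-number-like digit runs
]

def redact(text: str) -> str:
    """Mask obvious secrets before the prompt is written to audit logs."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def call_model(prompt: str) -> str:
    """Stand-in for a hosted model or an external API call."""
    return f"(model response to {len(prompt)} chars of input)"

def monitored_completion(prompt: str, user_id: str) -> str:
    """Wrap the model call with the kind of audit trail the guidelines encourage."""
    prompt_hash = hashlib.sha256(prompt.encode()).hexdigest()[:12]
    audit_log.info("user=%s prompt_hash=%s prompt=%r", user_id, prompt_hash, redact(prompt))
    response = call_model(prompt)
    audit_log.info("user=%s prompt_hash=%s response_len=%d", user_id, prompt_hash, len(response))
    return response

if __name__ == "__main__":
    print(monitored_completion("Summarize card 4111111111111111 activity", user_id="analyst-7"))
```

The point of the sketch is simply that every model interaction leaves a redacted, auditable trail, which is the kind of operational practice the secure operation and maintenance phase of the guidelines points toward.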

"The Guidelines apply to all types of AI systems, not just frontier models. We provide suggestions and mitigations that will help data scientists, developers, managers, decision-makers, and risk owners make informed decisions about the secure design, model development, system development, deployment, and operation of their machine learning AI systems," CISA wrote.

The multinational agreement is aimed primarily at providers of AI systems, whether based on models hosted by an organization or making use of external application programming interfaces (APIs). It comes after the White House introduced an executive order on AI last month.
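For organizations in the second camp, those consuming models over external APIs, "secure by design" starts with basic hygiene around credentials, input limits and failure handling. The sketch below is a generic illustration using Python's requests library; the endpoint URL, payload shape and response format are hypothetical and do not describe any particular provider or anything specified in the agreement.

```python
# Minimal sketch of consuming an external model API with basic hygiene.
# The endpoint, payload shape and response format are hypothetical.
import os
import requests

API_URL = "https://api.example-model-provider.com/v1/generate"  # hypothetical endpoint
MAX_PROMPT_CHARS = 4000

def generate(prompt: str) -> str:
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt exceeds configured size limit")

    api_key = os.environ["MODEL_API_KEY"]  # keep credentials out of source control
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt},
        timeout=10,  # fail fast rather than hanging on a degraded provider
    )
    resp.raise_for_status()
    # Treat the model's output as untrusted input to downstream systems.
    return str(resp.json().get("text", ""))
```

The design choice worth noting is that the model's response is handled like any other untrusted external input rather than trusted by default.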

Western observers believe that for AI regulation to be effectively implemented in the U.S., there must be an ongoing process of interaction between governments, the private sector and other relevant organizations. The agreement, by treating AI systems as software infrastructure, takes a first step toward compartmentalizing and addressing the specific vulnerabilities and potential attack vectors that could open the technology up to abuse when deployed within an enterprise setting.

PYMNTS has previously covered how a healthy, competitive market is one where the doors are open to innovation and development, not shut to progress.

Shaunt Sarkissian, CEO and founder of AI-ID, told PYMNTS that it is important to compartmentalize an AI system's functions to restrict its scope and purpose, as well as to develop specific rules and regulations for different use cases.
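What compartmentalization could look like in code is, again, only a sketch with hypothetical action names, but the idea maps naturally onto an explicit allowlist: the AI component can only trigger actions the operator has enumerated, so its scope and purpose are restricted by construction.

```python
# Hypothetical illustration of scope restriction via an allowlist of actions.
from typing import Callable, Dict

def lookup_order_status(order_id: str) -> str:
    return f"Order {order_id}: shipped"

def summarize_document(doc_id: str) -> str:
    return f"Summary of document {doc_id}"

# Only these actions are reachable; anything else the model requests is refused.
ALLOWED_ACTIONS: Dict[str, Callable[[str], str]] = {
    "lookup_order_status": lookup_order_status,
    "summarize_document": summarize_document,
}

def dispatch(action: str, argument: str) -> str:
    handler = ALLOWED_ACTIONS.get(action)
    if handler is None:
        return f"Refused: '{action}' is outside this system's permitted scope"
    return handler(argument)

print(dispatch("lookup_order_status", "A-1042"))
print(dispatch("delete_all_records", ""))  # falls outside the allowlist
```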

He added that the evolving dynamics between the government and AI innovators underscore the importance of government agencies setting high-level standards and criteria for AI companies hoping to work with them.

As AI continues to advance at an unprecedented pace, prioritizing security and responsible development becomes increasingly crucial. The international agreement signed by the U.S. and other nations marks a milestone in addressing the intersection of AI, cybersecurity and critical infrastructure. By establishing guidelines that span the entire AI system development lifecycle, from secure design and development through deployment and operation, the signatories aim to ensure AI remains a powerful technology used responsibly and ethically. The guidelines give companies a way to proactively safeguard AI systems from hackers, protect data and vet external vendors, and because they apply to all types of AI systems, they support informed decision-making by data scientists, developers, managers and risk owners alike. As AI becomes more deeply integrated across industries, that focus on security and responsible deployment is what will keep customers and the wider public safe from misuse.
