Americans Show Surprising Support for Mandatory Safety Audits for AI Models, New Poll Finds

In the quest for ensuring a safe future with the rapid advancement of artificial intelligence (AI), many have turned to lawmakers, coders, scientists, philosophers, and activists for guidance. However, it seems that one group may have been overlooked as a potential source of inspiration: accountants.

A recent poll conducted by the Artificial Intelligence Policy Institute found that a seemingly wonky policy idea, mandatory safety audits of AI models before their release, enjoys surprising popularity among American adults. While audits lack the drama of responses such as bans or nationalization, the idea has gained traction as a means of independently assessing the risks posed by new AI systems.

The concept of audits for AI models is not entirely new, but it has not received much attention in policy discourse until now. Ryan Carrier, a chartered financial analyst and advocate for AI audits, described it as under-represented and under-understood. However, when respondents were asked about various AI policy responses in head-to-head preference questions, the idea of AI safety audits came out on top two-thirds of the time, making it second only to the broader concept of preventing dangerous and catastrophic outcomes.

The popularity of government-mandated audits of digital technology is already evident, as demonstrated by the EU’s Digital Services Act. The act requires large online platforms like Amazon, YouTube, and Wikipedia to undergo annual independent audits to ensure compliance with its provisions. Additionally, last month, senators Josh Hawley and Richard Blumenthal unveiled an AI policy framework that calls for an independent oversight body to license and audit risky AI models.


The inclusion of audits among these policy responses was driven by their popularity in expert surveys. Daniel Colson, founder of the AI Policy Institute, noted that audits were in the sweet spot of being feasible and a priority for the safety community.

So, how would AI audits actually work? The idea is to adopt a system similar to financial audits, in which publicly traded companies must submit to audits by independently certified accountants who are held responsible for their conclusions. Ideally, AI audits would involve pre-deployment assessments of a model's plans and post-deployment assessments of its functioning in the real world.

However, AI audits present unique challenges because even the designers of large language models do not fully comprehend their inner workings. As a result, auditors would need access to the model’s training data and would have to rely on observations of its inputs and outputs. Despite the complexity, the idea of independent oversight through standardized processes, as opposed to relying solely on powerful agencies, appeals to many at a time when trust in government bodies is low and anxiety about AI is high.

As discussions around AI regulation continue, the upcoming executive order on AI from the Biden administration remains a topic of interest. Some experts have expressed concern about potential regulations interfering with federal procurement and have called for a more focused approach. They suggest that the government should primarily focus on responsibly integrating AI into its operations, which would shape markets through its size and scope.

Furthermore, the Equal Employment Opportunity Commission (EEOC) is positioning itself as a governing body for AI in the workplace. This move gives it an opportunity to shape policy at the frontier of AI governance: the EEOC could update existing hiring guidelines and establish new ones to prevent AI from violating anti-discrimination laws.


Although challenges and complexities exist in implementing AI audits, the surprising popularity of this policy idea among Americans highlights the growing recognition that independent assessments are crucial for ensuring the safe and responsible development of AI technologies. As discussions around AI regulation continue, incorporating audits as part of the policy response may gain even more traction.

Frequently Asked Questions (FAQs)

What are AI safety audits?

AI safety audits are independent assessments conducted on artificial intelligence (AI) models to evaluate their potential risks and ensure compliance with safety measures.

Why have safety audits for AI models gained popularity?

Safety audits have gained popularity as a means of independently assessing the risks associated with new AI systems. They provide a framework for evaluating AI models' safety and identifying any potential dangers before their deployment.

How do AI audits compare to financial audits?

AI audits would adopt a system modeled on financial audits, in which independently certified auditors assess the subject and are held responsible for their conclusions. For AI, these auditors would evaluate a model's plans before deployment and assess its functioning in the real world afterward.

What are the challenges of conducting AI audits?

AI audits present unique challenges because even the designers of large language models may not fully comprehend their inner workings. Auditors would need access to the models' training data and rely on observations of their inputs and outputs to assess their safety.

What are some examples of existing government-mandated audits?

The EU's Digital Services Act requires large online platforms to undergo annual independent audits. Senators Josh Hawley and Richard Blumenthal also proposed an AI policy framework that calls for an independent oversight body to license and audit risky AI models.

How do AI audits address concerns about government oversight?

AI audits provide independent oversight through standardized processes, reducing reliance on powerful agencies and potentially increasing trust in AI development. This approach appeals to many who have low trust in government bodies and anxiety about AI.

How do AI audits fit into the broader AI regulation discussion?

As discussions around AI regulation continue, the popularity of AI audits may result in their inclusion as part of the policy response. Their independent assessment approach can contribute to the safe and responsible development of AI technologies.

Are there any government actions related to AI regulation?

The upcoming executive order on AI from the Biden administration remains a topic of interest. The Equal Employment Opportunity Commission (EEOC) is also positioning itself as a governing body for AI in the workplace, which could shape policy and prevent AI from violating anti-discrimination laws.

How could AI audits impact the integration of AI into government operations?

Some experts suggest that the government should primarily focus on responsibly integrating AI into its operations, allowing its size and scope to shape markets. AI audits can contribute to this integration by ensuring the safety and compliance of AI models used by the government.

