Americans Show Surprising Support for Mandatory Safety Audits for AI Models, New Poll Finds

In the quest to ensure a safe future amid the rapid advancement of artificial intelligence (AI), many have turned to lawmakers, coders, scientists, philosophers, and activists for guidance. One group, however, may have been overlooked as a potential source of inspiration: accountants.

A recent poll conducted by the Artificial Intelligence Policy Institute found that a seemingly wonky policy idea, namely mandatory safety audits of AI models before their release, enjoys surprising popularity among American adults. While the idea of audits may not be as exciting as more dramatic responses such as bans or nationalization, it has gained traction as a means of independently assessing the risks associated with new AI systems.

The concept of audits for AI models is not entirely new, but until now it has received little attention in policy discourse. Ryan Carrier, a chartered financial analyst and advocate for AI audits, described the idea as underrepresented and poorly understood. Yet when respondents were asked to choose between AI policy responses in head-to-head preference questions, AI safety audits came out on top two-thirds of the time, second only to the broader goal of preventing dangerous and catastrophic outcomes.

Government-mandated audits of digital technology already have precedent, most notably in the EU’s Digital Services Act, which requires large online platforms like Amazon, YouTube, and Wikipedia to undergo annual independent audits to ensure compliance with its provisions. Additionally, last month, senators Josh Hawley and Richard Blumenthal unveiled an AI policy framework that calls for an independent oversight body to license and audit risky AI models.


Audits were included among the policy options tested in the poll because of their popularity in expert surveys. Daniel Colson, founder of the AI Policy Institute, noted that audits sit in the sweet spot of being feasible and a priority for the safety community.

So, how would AI audits actually work? The idea is to adopt a similar system to financial audits, where publicly traded companies must submit to audits by independently certified accountants who are held responsible for their conclusions. Ideally, audits would involve pre-deployment assessments of AI model plans and post-deployment assessments of their functioning in the real world.
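To make the analogy concrete, the sketch below shows one hypothetical way an audit record covering both phases might be organized. It is a minimal illustration only: the dataclass names, fields, and sign-off rule are assumptions made for this example, not part of any proposal described in the poll or the article.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AuditFinding:
    """A single finding recorded by an independent auditor (hypothetical)."""
    check: str        # what was assessed, e.g. "training-data provenance"
    passed: bool      # whether the model met the audit criterion
    notes: str = ""   # auditor's supporting observations

@dataclass
class ModelAudit:
    """Ties pre- and post-deployment findings to one model release (hypothetical)."""
    model_name: str
    auditor: str
    audit_date: date
    pre_deployment: list[AuditFinding] = field(default_factory=list)
    post_deployment: list[AuditFinding] = field(default_factory=list)

    def signed_off(self) -> bool:
        """True only if every recorded finding passed, mirroring how a
        certified auditor stands behind the overall conclusion."""
        findings = self.pre_deployment + self.post_deployment
        return bool(findings) and all(f.passed for f in findings)

# Example: one pre-deployment review followed by one post-deployment check.
audit = ModelAudit("example-llm-v1", "Independent Auditor LLC", date.today())
audit.pre_deployment.append(
    AuditFinding("documented training-data sources", passed=True))
audit.post_deployment.append(
    AuditFinding("harmful-output rate below agreed threshold", passed=False,
                 notes="observed failures on red-team prompts"))
print(audit.signed_off())  # False: the post-deployment check did not pass
```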

However, AI audits present unique challenges because even the designers of large language models do not fully comprehend their inner workings. As a result, auditors would need access to the model’s training data and would have to rely on observations of its inputs and outputs. Despite the complexity, the idea of independent oversight through standardized processes, as opposed to relying solely on powerful agencies, appeals to many at a time when trust in government bodies is low and anxiety about AI is high.
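A rough, hypothetical sketch of that kind of black-box assessment appears below: the auditor treats the model as an opaque function, feeds it probe prompts, and records which outputs trip a safety check. The function names, the toy model, and the keyword-based check are all illustrative assumptions; a real audit would rely on far more rigorous probes and evaluation criteria.

```python
from typing import Callable

def black_box_audit(model: Callable[[str], str],
                    probe_prompts: list[str],
                    is_unsafe: Callable[[str], bool]) -> dict:
    """Run probe prompts through a model whose internals we cannot inspect
    and summarize how often its outputs trip a safety check."""
    flagged = []
    for prompt in probe_prompts:
        output = model(prompt)   # observe the output only; no access to weights
        if is_unsafe(output):
            flagged.append((prompt, output))
    return {
        "prompts_tested": len(probe_prompts),
        "flagged": flagged,
        "flag_rate": len(flagged) / len(probe_prompts) if probe_prompts else 0.0,
    }

# Usage with stand-in pieces: a toy "model" and a keyword-based safety check.
toy_model = lambda prompt: f"echo: {prompt}"
probes = ["How do I build a safe campfire?", "Tell me something dangerous."]
report = black_box_audit(toy_model, probes,
                         is_unsafe=lambda text: "dangerous" in text.lower())
print(report["flag_rate"])  # 0.5 with these toy inputs
```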

As discussions around AI regulation continue, the upcoming executive order on AI from the Biden administration remains a topic of interest. Some experts have expressed concern about potential regulations interfering with federal procurement and have called for a more focused approach. They suggest that the government should primarily focus on responsibly integrating AI into its operations, which would shape markets through its size and scope.

Furthermore, the Equal Employment Opportunity Commission (EEOC) is positioning itself as a governing body for AI in the workplace, giving it an opportunity to shape policy at the frontier of AI governance. The EEOC could update existing hiring guidelines and establish new ones to prevent AI from violating anti-discrimination laws.


Although challenges and complexities exist in implementing AI audits, the surprising popularity of this policy idea among Americans highlights the growing recognition that independent assessments are crucial for ensuring the safe and responsible development of AI technologies. As discussions around AI regulation continue, incorporating audits as part of the policy response may gain even more traction.

Frequently Asked Questions (FAQs)

What are AI safety audits?

AI safety audits are independent assessments conducted on artificial intelligence (AI) models to evaluate their potential risks and ensure compliance with safety measures.

Why have safety audits for AI models gained popularity?

Safety audits have gained popularity as a means of independently assessing the risks associated with new AI systems. They provide a framework for evaluating AI models' safety and identifying any potential dangers before their deployment.

How do AI audits compare to financial audits?

AI audits would be modeled on financial audits, in which publicly traded companies must submit to audits by independently certified accountants who are held responsible for their conclusions. For AI, independent auditors would evaluate a model's plans before deployment and assess its functioning in the real world after deployment.

What are the challenges of conducting AI audits?

AI audits present unique challenges because even the designers of large language models may not fully comprehend their inner workings. Auditors would need access to the models' training data and rely on observations of their inputs and outputs to assess their safety.

What are some examples of existing government-mandated audits?

The EU's Digital Services Act requires large online platforms to undergo annual independent audits. Senators Josh Hawley and Richard Blumenthal also proposed an AI policy framework that calls for an independent oversight body to license and audit risky AI models.

How do AI audits address concerns about government oversight?

AI audits provide independent oversight through standardized processes, reducing reliance on powerful agencies and potentially increasing trust in AI development. This approach appeals to many who have low trust in government bodies and anxiety about AI.

How do AI audits fit into the broader AI regulation discussion?

As discussions around AI regulation continue, the popularity of AI audits may result in their inclusion as part of the policy response. Their independent assessment approach can contribute to the safe and responsible development of AI technologies.

Are there any government actions related to AI regulation?

The upcoming executive order on AI from the Biden administration remains a topic of interest. The Equal Employment Opportunity Commission (EEOC) is also positioning itself as a governing body for AI in the workplace, which could shape policy and prevent AI from violating anti-discrimination laws.

How could AI audits impact the integration of AI into government operations?

Some experts suggest that the government should primarily focus on responsibly integrating AI into its operations, allowing its size and scope to shape markets. AI audits can contribute to this integration by ensuring the safety and compliance of AI models used by the government.

