Biden Signs Order to Govern AI, Ensure Safety and Trustworthiness


President Joe Biden has taken a significant step toward governing artificial intelligence (AI) by signing an executive order that aims to ensure the safety and trustworthiness of the technology. In his view, AI is all around us and has the potential to bring immense benefits, but it also carries risks that need to be addressed.

The order, which may need further support from Congress, seeks to guide the development of AI in a way that allows companies to profit while safeguarding public safety. To achieve this, the order invokes the Defense Production Act to require leading AI developers to share safety test results and other relevant information with the government.

Additionally, the National Institute of Standards and Technology will create standards to ensure that AI tools are safe and secure before they are released to the public. The Commerce Department will issue guidance on labeling and watermarking AI-generated content, in order to differentiate between authentic interactions and those generated by software.

The executive order also covers a wide range of other areas, including privacy, civil rights, consumer protections, scientific research, and worker rights. The aim is to address these issues in relation to AI and ensure that its deployment is responsible and beneficial.

Biden’s chief of staff, Jeff Zients, highlighted the urgency with which the president approached the issue, emphasizing the need to move as fast as the technology itself. This sense of urgency stems from the recognition that the government was late in addressing the risks associated with social media, leading to mental health issues among US youth. Biden does not want to repeat the same mistake with AI.


The order builds on voluntary commitments made by technology companies and forms part of a broader strategy that involves congressional legislation and international diplomacy. The Biden administration is leveraging the levers it can control to shape private sector behavior and set an example for the federal government’s own use of AI.

Various governments worldwide have also taken steps to establish protections and regulations around AI. The European Union is nearing the final stages of passing a comprehensive law to rein in AI harms, while China has implemented its own rules. The United Kingdom is seeking to position itself as an AI safety hub, and the Group of Seven nations recently agreed on AI safety principles and a voluntary code of conduct for developers.

In the United States, the focus on AI is particularly significant due to the country’s position as a hub for leading AI developers. Tech giants such as Google, Meta, and Microsoft, as well as startups like OpenAI, are based on the country’s West Coast. The White House has already secured commitments from these companies to implement safety mechanisms in their AI models.

However, the Biden administration also faced pressure from Democratic allies, including labor and civil rights groups, to ensure that its policies address real-world harms caused by AI. One of the challenges the administration grappled with was how to regulate law enforcement’s use of AI tools, such as facial recognition technology, which has been linked to racial biases and mistaken arrests.

While the EU’s forthcoming AI law bans real-time facial recognition in public, Biden’s executive order falls short of such explicit restrictions. Instead, federal agencies will review how AI is being used in the criminal justice system, leaving room for improvement in protecting individuals from potential harms.


Overall, the executive order represents a significant initiative in governing AI and ensuring its responsible and safe development. It sets the stage for further action by Congress and demonstrates the Biden administration’s commitment to leading on AI regulation and standards. By addressing privacy, civil rights, and other concerns related to AI, the order aims to strike a balance between harnessing the technology’s benefits and mitigating potential risks.

The guidance within the order will be implemented over the course of 90 to 365 days. It is a comprehensive and proactive approach to governing AI, reflecting the need to keep pace with the rapidly evolving technology and its implications.

Frequently Asked Questions (FAQs)

What is the purpose of President Joe Biden's executive order on artificial intelligence (AI)?

The purpose of President Joe Biden's executive order is to ensure the safety and trustworthiness of AI technology while allowing companies to profit. It aims to guide the development of AI in a responsible and beneficial way.

What are the key requirements outlined in the executive order?

The executive order requires leading AI developers to share safety test results and relevant information with the government. It also mandates the creation of standards by the National Institute of Standards and Technology to ensure the safety and security of AI tools. The Commerce Department will issue guidance on labeling and watermarking AI-generated content to differentiate between authentic and software-generated interactions.

What other areas does the executive order cover?

The executive order also addresses privacy, civil rights, consumer protections, scientific research, and worker rights in relation to AI. It aims to tackle these issues and ensure responsible deployment of AI technology.

Why did President Biden emphasize the urgency of addressing AI risks?

President Biden recognizes the need to act urgently, as the government was late in addressing the risks associated with social media, leading to negative impacts on mental health. He aims to avoid repeating the same mistake with AI and is keen on addressing the risks associated with its deployment.

How does the executive order fit into the broader strategy of the Biden administration?

The executive order builds upon voluntary commitments made by technology companies and forms part of a broader strategy that involves congressional legislation and international diplomacy. The administration is leveraging its influence to shape private sector behavior and set an example for the government's own use of AI.

What steps have other governments taken to regulate AI?

Various governments worldwide have taken steps to establish protections and regulations around AI. The European Union is in the final stages of passing a comprehensive AI law. China has implemented its own rules, and the United Kingdom aims to position itself as an AI safety hub. The Group of Seven nations has agreed on AI safety principles and a voluntary code of conduct for developers.

How does the executive order address concerns raised by labor and civil rights groups?

The executive order addresses concerns raised by labor and civil rights groups by reviewing how AI is used in the criminal justice system. It aims to ensure that AI tools, such as facial recognition technology, do not perpetuate racial biases or lead to mistaken arrests.

Does the executive order explicitly restrict the use of facial recognition technology?

No, the executive order falls short of explicit restrictions on the use of facial recognition technology. Instead, federal agencies will review its use in the criminal justice system, leaving room for improvements in protecting individuals from potential harms.

What is the timeframe for implementing the guidance outlined in the executive order?

The guidance within the executive order will be implemented over the course of 90 to 365 days. This timeframe reflects the need to keep pace with the rapidly evolving technology and its implications while allowing for a comprehensive and proactive approach to governing AI.

