President Joe Biden’s executive order on artificial intelligence (AI) has set off a debate over the government’s role in regulating the technology. While some worry that the government will overstep its bounds, others are concerned that it won’t do enough. The order requires multiple departments to collect public comments, develop regulations, and prepare reports on AI, with the Department of Homeland Security, the Department of Commerce, and the National Institute of Standards and Technology given significant responsibilities. The order also calls for the development of regulations for responsible AI use in fields such as defense, veterans affairs, and healthcare.
To address potential bias and other harms caused by AI systems, the order directs the Federal Trade Commission (FTC), the Consumer Financial Protection Bureau (CFPB), and the Federal Housing Finance Agency (FHFA) to create regulations. The FTC is also tasked with examining how to enforce fair competition among AI companies. The order gives the agencies three to nine months to produce reports and seek public comments, and it identifies funding opportunities for AI in different fields.
The order has drawn the attention of various special interest groups. The U.S. Chamber of Commerce sees the executive order as an opportunity for the United States to set a global standard for AI safety and fund new projects. However, the chamber is concerned about the potential burden of multiple new regulations and the large number of public comments required. It worries that agencies like the FTC, CFPB, and FHFA, which have been criticized for exceeding their authority, could use the order as a justification to further expand their power.
Digital rights groups, by contrast, fear the order may result in too little oversight of AI. They argue that agencies should take full advantage of the order to implement changes that benefit everyday people, but worry that agencies might do only the bare minimum, rendering the executive order ineffective and wasting valuable time while vulnerable individuals continue to face discrimination from biased AI systems.
The National Institute of Standards and Technology (NIST) is expected to play a crucial role in creating safety standards for AI. Vice President Kamala Harris recently announced that NIST would establish an AI Safety Institute to develop rigorous standards for testing the safety of AI models. However, a study by the National Academies of Sciences, Engineering, and Medicine highlighted serious deficiencies in NIST’s facilities and funding.
The implementation of the executive order will require additional funding and an expansion of NIST’s team of AI experts. There are concerns that the agency does not currently have the necessary resources to meet the order’s requirements, including the establishment of the AI Safety Institute.
In conclusion, President Biden’s executive order on AI has sparked a debate over the government’s role in regulating the technology. While some welcome the order as an opportunity to set safety standards and fund new projects, others worry about agency overreach or, conversely, a lack of meaningful oversight. The order assigns specific responsibilities to different departments, calls for regulations to address bias and other harms caused by AI systems, and sets timeframes for producing reports and collecting public comments. NIST’s role in creating safety standards is emphasized, but the agency will need additional funding and resources to fulfill it.