Governments worldwide face the challenge of regulating the dual potential of artificial intelligence (AI), a technology with the power to both help and harm people. AI has permeated every sector of the economy and is evolving so rapidly that even experts struggle to keep up. Balancing the need for regulation against the desire to foster innovation has proved a significant obstacle.
Reacting too slowly could mean missing the opportunity to prevent hazards and dangerous misuse of the technology. Regulating too swiftly, on the other hand, risks imposing flawed or harmful rules that stifle innovation, as the European Union discovered. The EU introduced its AI Act in 2021, but the proposal quickly became outdated as new generative AI tools emerged. Although the act was rewritten to cover some of the new technology, the result remains somewhat awkward.
The White House recently announced its own attempt to govern AI through a comprehensive executive order. This order imposes new regulations on companies and directs federal agencies to establish safeguards around the use of AI. Pressure has been mounting on governments, including the Biden administration, since the introduction of ChatGPT and other generative AI applications, which brought AI’s potential and risks into the public spotlight. Activist groups have pushed for government action to prevent the creation of cyber weapons and misleading deepfakes.
Silicon Valley has also become a battleground for different AI perspectives. Some researchers and experts advocate for a slowdown in AI development, while others emphasize the need for full-throttle acceleration.
President Joe Biden’s executive order represents a middle ground, allowing AI development to proceed while implementing modest regulations. It signals that the federal government intends to keep a close eye on the AI industry in the years ahead. The order, spanning more than 100 pages, offers something for nearly every stakeholder.
AI safety advocates concerned about the technology’s risks will appreciate the order’s requirements for companies building powerful AI systems. The largest AI system developers will be obligated to notify the government and share safety testing results before releasing their models to the public. These reporting requirements will apply to models surpassing a certain computing power threshold, likely including next-generation models from OpenAI, Google, and other prominent AI companies.
To enforce these requirements, the order invokes the Defense Production Act, which grants the president broad authority to compel U.S. companies to support national security efforts. This gives the rules more teeth than the industry’s earlier voluntary commitments.
Additionally, the order compels cloud service providers like Microsoft, Google, and Amazon to disclose information about their foreign customers to the government. It also directs the National Institute of Standards and Technology to devise standardized tests for assessing the performance and safety of AI models.
In response to concerns raised by the AI ethics community, the executive order instructs federal agencies to take measures to prevent AI algorithms from exacerbating discrimination in areas such as housing, federal benefits programs, and the criminal justice system. Furthermore, it requires the Commerce Department to develop guidance for watermarking AI-generated content, which aids in combating the spread of AI-generated misinformation.
AI companies targeted by these rules generally responded positively. Executives expressed relief that the order does not require them to obtain licenses to train large AI models, a proposal that had drawn criticism within the industry. Nor does it mandate removing current products from the market or disclosing proprietary information, such as model size and training methods.
The order also refrains from curbing the use of copyrighted data in training AI models, a practice that artists and other creative workers have opposed. Tech companies benefit as well from the order’s initiatives to relax immigration restrictions and streamline visa processes for AI-specialized workers as part of a national AI talent surge.
While some stakeholders may not be entirely satisfied, the executive order strikes a balance between pragmatism and caution. Without comprehensive AI regulations enacted by Congress, this order provides a clear framework for the foreseeable future.
Other attempts at regulating AI are expected, especially in the European Union, where the AI Act could become law next year. Britain is also hosting a global summit that may result in new efforts to rein in AI development. The White House’s executive order underscores the administration’s commitment to act swiftly. The key question, as always, is whether AI itself will outpace regulatory efforts.