Biden Administration to Implement New AI Regulations on Tech Companies
The Biden Administration is set to introduce new artificial intelligence (AI) regulations affecting tech companies, particularly those involved in AI development and cloud services. The regulations will require companies to report certain AI activities to the government, with the goal of greater transparency and oversight.
One of the new rules is expected to rely on the Defense Production Act, a federal law that empowers the government to prioritize contracts and orders necessary for national defense. Tech companies will be required to notify the government whenever they train an AI model using a substantial amount of computing power. The rule could be implemented as early as next week, according to Wired.
By enforcing this requirement, the government will gain visibility into sensitive programs at companies such as OpenAI LP, Google LLC, and Amazon Web Services Inc., giving it a deeper understanding of their AI capabilities. Companies will also be obligated to conduct safety testing on their new AI systems.
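For illustration only, the sketch below shows how a lab might estimate whether a training run crosses a compute-based reporting trigger of this kind. The 10^26 floating-point-operations figure reflects the reporting threshold set in the October executive order, and the 6 × parameters × tokens formula is a common rule of thumb for estimating training compute; the function names and example model sizes are assumptions, not anything specified by the new rule.

```python
# Hypothetical sketch: estimating whether a training run crosses a
# compute-based reporting trigger. The 1e26 FLOP threshold reflects the
# October executive order; the 6 * parameters * tokens formula is a common
# rule of thumb, not anything prescribed by the new rule.

REPORTING_THRESHOLD_FLOPS = 1e26  # assumed threshold from the executive order


def estimated_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Rough estimate of total training compute (forward and backward passes)."""
    return 6.0 * num_parameters * num_training_tokens


def must_notify_government(num_parameters: float, num_training_tokens: float) -> bool:
    """Return True if the estimated compute meets or exceeds the reporting threshold."""
    return estimated_training_flops(num_parameters, num_training_tokens) >= REPORTING_THRESHOLD_FLOPS


# Example: a hypothetical 500-billion-parameter model trained on 15 trillion tokens
# yields roughly 4.5e25 FLOPs, which falls below the assumed 1e26 threshold.
print(must_notify_government(5e11, 1.5e13))  # False
```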
These proposed regulations build on an executive order signed in October, which outlined guidelines for AI development covering industry practices, security standards, consumer protections, and federal oversight. The government intends to approach AI safety from multiple angles, and this new rule forms one part of its broader strategy.
Nevertheless, the decision to use the Defense Production Act to regulate AI safety has raised eyebrows. While AI does have military applications, it is typically governed by conventional regulations rather than a Korean War-era law created to boost the supply of materials and services needed for national defense.
Another expected change concerns foreign access to U.S. data centers for training AI models, a further aspect of AI safety. Commerce Secretary Gina Raimondo announced that cloud companies must determine whether foreign entities, including those from China, have been using U.S. data centers for this purpose. Raimondo emphasized the need to prevent unwanted actors from leveraging U.S. technological resources for their AI initiatives.
To address this concern, the administration recently proposed a know-your-customer (KYC) regulation, similar to the requirements for opening a bank account. Under the proposal, cloud computing companies would be required to verify the identity of foreign customers who sign up for or use U.S. cloud computing services. The regulation would also set minimum standards for identifying foreign users and mandate annual certification of compliance by cloud computing firms.
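As a purely illustrative sketch, a provider's sign-up flow under such a rule might gate the provisioning of compute on an identity check and flag foreign customers of large-scale training capacity for the reporting process. The data fields, checks, and function names below are hypothetical; the proposal defines a policy, not an API.

```python
# Hypothetical sketch of a KYC-style gate for cloud sign-ups. Field names,
# checks, and thresholds are illustrative only; the proposed regulation
# defines a policy, not an API.

from dataclasses import dataclass


@dataclass
class CustomerRecord:
    name: str
    country_of_residence: str  # self-reported, to be verified against documents
    identity_verified: bool    # outcome of the provider's identity check
    intended_use: str          # e.g. "large AI model training"


def flag_for_reporting(customer: CustomerRecord) -> None:
    """Stand-in for a compliance hook that records the customer for reporting."""
    print(f"Flagged for KYC reporting: {customer.name} ({customer.country_of_residence})")


def may_provision_compute(customer: CustomerRecord) -> bool:
    """Allow provisioning only for identity-verified customers; flag foreign
    customers who intend to train large AI models for the reporting workflow."""
    if not customer.identity_verified:
        return False
    if customer.country_of_residence != "US" and "training" in customer.intended_use.lower():
        flag_for_reporting(customer)
    return True


# Example usage with a made-up customer record
customer = CustomerRecord(
    name="Example AI Lab",
    country_of_residence="SG",
    identity_verified=True,
    intended_use="large AI model training",
)
print(may_provision_compute(customer))  # flags the customer, then prints True
```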
These developments reflect the Biden Administration’s commitment to strengthening AI regulation and safeguarding national security interests. By implementing these new rules, the government seeks to bolster transparency, protect critical technologies, and promote the responsible development of AI.
In conclusion, the Biden Administration’s plan to implement new AI regulations on tech companies signals a significant shift in the approach to AI governance. The upcoming rules will require companies to report their AI activities to the government and undergo safety testing, promoting responsible AI development. Additionally, measures will be taken to prevent unwanted access to U.S. data centers for training AI models. These initiatives underline the administration’s dedication to strengthening national security while fostering innovation in the AI sector.