Biden Administration Takes Steps to Establish Artificial Intelligence Safeguards
The Biden administration has issued an executive order establishing safeguards for artificial intelligence (AI). The order focuses on setting standards for safety and security and on protecting personal information. President Biden emphasized the importance of responsible innovation and expressed his commitment to promoting it.
Under the new order, developers of AI systems will be required to share their safety test results with the federal government before making those systems available to the public. This measure aims to ensure that proper safeguards are in place and that AI tools are safe and effective before their release. Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, supports this approach, highlighting how AI already affects many aspects of daily life, such as job applications, loan approvals, and rental agreements.
The executive order also addresses AI-enabled fraud, including scams using voice cloning technology to deceive individuals and steal money. To combat these fraudulent activities, the order directs the Commerce Department to develop guidance for labels and watermarks specifically for AI-generated content.
Various stakeholders have expressed support for AI regulation. Veritone, a provider of AI software and services to law enforcement and the Department of Justice, emphasized the importance of transparency, trust, security, and compliance in the responsible use of AI.
Administration officials stated that this executive order builds upon voluntary commitments made by numerous tech companies, reflecting a collaborative effort to promote responsible and ethical AI practices.
As AI technology continues to rapidly evolve, the Biden administration is taking proactive measures to ensure that the development, deployment, and use of AI systems prioritize safety, security, and the protection of individual rights.