President Joe Biden has issued an executive order aimed at establishing strict safety standards for artificial intelligence (AI) and promoting ethical AI use within government agencies. The order introduces six new standards focused on safety, security, trust, and openness in AI development.
One of the order's mandates requires companies developing foundation models that pose significant risks to national security, economic security, or public health to share safety test results with federal officials. The measure is intended to safeguard citizens, government entities, and companies from potential AI risks.
While the order emphasizes privacy-preserving techniques in AI development, its lack of specific implementation details has raised concerns in the industry, particularly about the implications for top-tier model development. Developers say it is difficult to anticipate future risks from assumptions alone, and they worry that the order's vague directives will hamper the open-source community.
However, the order proposes managing these guidelines through AI chiefs and governance boards within regulatory agencies. This implies that companies working with those agencies will need to adhere to government-approved regulatory frameworks emphasizing data compliance, privacy, and unbiased algorithms.
The Biden administration has already disclosed more than 700 use cases on the ai.gov website demonstrating how the government uses AI internally. Even so, some in the AI community have expressed concerns about the executive order's potential impact on open-source AI.
A letter sent to the Biden administration by AI researchers, academics, and founders emphasizes the importance of open-source software in ensuring safety and preventing monopolies. The letter criticizes the order's broad definitions of certain AI model types and raises concerns that smaller companies will struggle to meet requirements designed for larger firms.
Some, such as Jeff Amico of Gensyn, believe the order is detrimental to innovation in the United States. Others, like Matthew Putman, CEO and co-founder of Nanotronics, stress the need for regulatory frameworks that prioritize consumer safety and ethical AI development. Putman also argues that AI's catastrophic potential has been exaggerated, saying the technology is more likely to bring positive impacts than destructive outcomes.
Overall, President Biden's executive order signals a commitment to stringent AI safety standards and ethical AI use within government agencies. While concerns remain about implementation details and the potential impact on open-source AI, regulators are urged to prioritize consumer safety and ethics while fostering innovation in the AI industry.