The White House has moved to address the risks posed by artificial intelligence (AI) by unveiling a comprehensive set of guidelines in an executive order. The move signals the U.S. government’s commitment to building responsible and trustworthy AI. However, the order leaves a persistent gap unresolved: the absence of comprehensive federal data privacy legislation.
While technology is typically evaluated on performance, cost, and quality, considerations such as equity, fairness, and transparency are often overlooked. Researchers and practitioners of responsible AI have long advocated bringing these criteria into the evaluation process.
The National Institute of Standards and Technology (NIST) has issued a comprehensive AI Risk Management Framework that serves as the foundation for the executive order. The order also assigns the Department of Commerce a key role in implementing its directives.
One important aspect highlighted in the order is the need for stronger auditing of AI systems to ensure genuine accountability. Claims about AI ethics practices often outpace actual initiatives, and the executive order could help close this gap by specifying avenues for enforcement.
Additionally, the order recognizes the potential harm that AI systems can pose to civil and human rights, as well as individual well-being. It emphasizes the need to address existing inequities, discriminatory practices, and online and physical harms caused by irresponsible AI deployment.
However, a major challenge in AI regulation is the absence of comprehensive federal data protection and privacy legislation. While the executive order calls on Congress to adopt privacy legislation, it does not offer a legislative framework. Without robust data privacy laws, AI systems can put individuals at risk by revealing sensitive or confidential information.
Algorithmic transparency is also a critical consideration, but it is not a cure-all. While the European Union’s General Data Protection Regulation mandates transparency in automated decisions, knowing how an AI system works does not necessarily explain why it made a specific decision.
The executive order is a significant step toward addressing AI risks, but the absence of comprehensive data privacy legislation limits its reach. Without stronger laws in place, efforts to protect individuals and uphold privacy in the face of AI advancements will remain constrained.