As the EU’s AI Act heads toward finalization in April 2024, advocacy groups remain divided over whether it will effectively uphold human rights. Despite lengthy negotiations, concerns persist that the legislation does not adequately safeguard fundamental rights in the field of artificial intelligence.
For the past three years, a coalition of digital, human rights, and social justice organizations has called for AI regulation that puts the protection of human rights first. They have advocated a truly human-centric approach: one that ensures individuals are treated with dignity and that lawmakers draw clear red lines against unacceptable uses of AI.
While EU institutions prepare to celebrate the impending adoption of the AI Act, critics argue that the final text falls short of these collective demands for robust human rights protections. Their chief concerns center on the law’s impact on privacy, equality, non-discrimination, the presumption of innocence, and other fundamental rights and freedoms in the context of AI.
Advocates stress the need for stronger safeguards against abuses of AI systems and point to missed opportunities to embed human rights principles more firmly in the legislation. As stakeholders scrutinize the final provisions of the AI Act, debate continues over how best to address these human rights concerns in the rapidly evolving landscape of artificial intelligence.