A coalition of 150 non-governmental organizations (NGOs) including Human Rights Watch, Amnesty International, Transparency International, and Algorithm Watch has issued a statement to the European Union (EU) urging stronger protection of human rights in relation to the regulation of artificial intelligence (AI). These NGOs are calling on the EU not only to maintain but enhance human rights safeguards when adopting the proposed AI Act.
The AI Act, proposed by the EU, is the first regulation of its kind. Opinions on it diverge sharply: some argue it restricts Europe's technological sovereignty, while others contend it does not go far enough in curbing potentially dangerous AI deployments.
The NGOs’ collective statement highlights concerns that without robust regulation, companies and governments will continue to use AI systems that exacerbate issues like mass surveillance, structural discrimination, the concentrated power of large technology companies, unaccountable decision-making, and environmental damage.
Rather than just making a superficial statement about the risks posed by AI, the NGOs have outlined specific sections of the Act that they believe should be maintained or strengthened. For example, they stress the need for a framework of accountability, transparency, accessibility, and redress, which entails AI deployers publishing impact assessments on fundamental rights, registering their use in a publicly accessible database, and ensuring individuals affected by AI-made decisions have the right to be informed.
The NGOs also take a firm stance against AI-based public surveillance, calling for an outright ban on both real-time and retrospective ("post") remote biometric identification in publicly accessible spaces, by all actors and without exception. They also urge the EU to prohibit predictive and profiling AI systems in law enforcement and migration contexts, as well as emotion recognition systems.
Moreover, the statement warns against yielding to lobbying by big tech companies seeking to circumvent regulation for financial gain. The NGOs emphasize the importance of an objective process for determining which AI systems are classified as high-risk.
Under the proposed AI Act, AI systems would be divided into four tiers according to the level of risk they pose to health, safety, or fundamental rights. Applications such as government-run social scoring systems would be classified as an unacceptable risk, while systems such as spam filters or video games would be considered minimal risk. High-risk systems, such as those used in medical devices or autonomous vehicles, would be permitted only under strict rules governing testing, documentation of data collection, and accountability frameworks.
While the original proposal did not address general-purpose or generative AI, a section covering it was added following the success of ChatGPT last year.
In recent months, business leaders have been lobbying the EU in an attempt to water down the proposed legislation, focusing particularly on the classification of high-risk AI, which would impose higher compliance costs. Some, such as OpenAI's Sam Altman, have even resorted to personal lobbying, including issuing veiled threats.
Over 160 executives from major companies worldwide, including Meta, Renault, and Heineken, have also sent a letter to the EU expressing concerns that the draft legislation could jeopardize Europe’s competitiveness and technological sovereignty.
The European Parliament has already adopted its negotiating position on the AI Act, and trilogue negotiations among the Parliament, the Commission, and the Council are underway. These negotiations will produce the final text to be adopted.
As this legislation is set to establish a global precedent, Brussels is undoubtedly abuzz with advocates from all interested parties, each keen to shape the outcome in their favor while ensuring the law remains adaptable to evolving technology.