Reimagining AI Regulation: The Emergence of the Strasbourg Effect
Powerful quarters now recognize the need to protect citizens from the potential harms of artificial intelligence (AI): known issues such as discrimination, privacy violations, and copyright infringement, as well as harms yet to be discovered. While some nations have left AI regulation to individual sectors, recent events have sparked a desire for broader regulation spanning society as a whole.
Countries such as the United States, Japan, and the United Kingdom believe that adaptive sectoral regulation, supplemented by potential international agreements, is sufficient to address AI risks. Other nations aim to go further. China has already implemented strict control measures governing AI, including internet filtering and a social credit scoring system. Meanwhile, the European Economic Area, one of the world's largest consumer markets, plans to adopt the European Regulation on AI, known as the 'AI Act', which is currently being negotiated within the European Union.
Simply transplanting the EU's AI Act into another jurisdiction is not feasible, however, because the Act is embedded in an interconnected web of laws administered by European institutions. This is where the 'Brussels Effect' comes into play: the adoption and adaptation of EU law by other nations. One notable example is the General Data Protection Regulation (GDPR), which set a global standard for data protection. Yet the EU's approach is not the only model for AI regulation.
A more nuanced analysis reveals a potential alternative: the 'Strasbourg Effect'. Unlike EU laws, the Conventions established by the Council of Europe, a human rights organization based in Strasbourg, do not directly enter into national law. Nations beyond the Council's 47 members can, however, sign on to Conventions through international agreements. For instance, Convention 108+ has 55 parties, including Canada and countries across Latin America and Africa. Because it is being negotiated with countries such as the United States, the United Kingdom, and Japan, the Council of Europe's Convention on AI is likely to take a more flexible approach, emphasizing co-regulation with industry and greater attention to human rights implications.
Once the EU's AI Act and the Council's AI Convention are finalized, liberal democracies such as Australia, the United Kingdom, Brazil, Japan, and the United States are expected to adopt and adapt these instruments to their own jurisdictions. When this rush towards AI regulation begins, a Strasbourg Effect is the more likely outcome, with nations copying and implementing the Convention rather than strictly following the EU's example.
The regulation of AI is becoming a global coordination exercise that will extend over many years. These regulations must be comprehensive and carefully crafted to ensure that the power of AI is directed towards benefiting humanity. As the international community navigates this complex landscape, the Strasbourg Effect presents an opportunity for countries to come together under a more flexible, human rights-centered approach to AI regulation. The Convention on AI will address the emerging challenges brought about by technological advancements and lay the foundation for a safer and more responsible future.