The European Union (EU) is introducing new rules for artificial intelligence (AI) to address potential risks to privacy, voting rights, and copyrighted material. The legislation, known as the AI Act, bans practices deemed unacceptably invasive or discriminatory, such as biometric identification in public spaces. It also prohibits predictive policing systems that could be used to illegally profile citizens.
Furthermore, the law introduces a categorization system that classifies AI systems by the risk they pose, ranging from minimal to unacceptable. High-risk systems include those that could influence voters during election campaigns or affect human health, safety, and the environment. Tech companies will also be required to comply with transparency rules, such as disclosing when AI is used and taking measures to prevent the creation of illegal content.
This law could affect how companies such as Google, Meta, Microsoft, and OpenAI develop new AI tools and products, which have advanced rapidly and begun to permeate everyday life.
EU member countries will now begin negotiations on the AI Act, with a finalized law expected early next year. The law could also influence how the United States and other countries shape their own regulatory systems for AI.
OpenAI CEO Sam Altman, testifying at a Senate hearing on artificial intelligence, agreed that government regulation is needed to mitigate the risks of AI, a view echoed by many other technology and AI experts.
Meanwhile, Senators Josh Hawley and Richard Blumenthal have introduced a bill stating that Section 230, the law that shields internet companies from liability for content posted by their users, does not cover AI-generated content.
While the US will take some time to create its own regulatory system, Europe has already proposed a concrete response to the risks AI may pose. The law may encourage other countries to follow suit and adopt rules of their own.