A group of technology companies and open-source organizations, including GitHub, Hugging Face, and Creative Commons, has issued an open letter calling for a review of the European Union's AI Act. The letter argues that certain provisions of the Act could hinder the development of open-source artificial intelligence (AI) models. The signatories contend that regulating open-source projects as if they were commercial products or deployed AI systems would be detrimental to the open-source AI community, and that such regulation would be incompatible with open-source development practices and the needs of individual developers and non-profit research organizations.
The group has put forward five key suggestions to ensure that the AI Act supports open-source models effectively: clearly defining AI components; clarifying that collaborative development of open-source models does not subject developers to the Act's requirements; supporting coordination between the EU's proposed AI Office and the open-source ecosystem; allowing exceptions for researchers to conduct limited real-world testing; and setting proportional requirements for foundation models.
Open-source software is software whose source code is publicly available, allowing anyone to freely access, modify, and enhance it. In the field of AI, open-source software plays a crucial role in training and deploying models.
The European Parliament passed the AI Act in June, with a large majority of votes in favor. However, the Act will only become law once the EU Council, which represents the 27 member states, agrees on a common version of the text, which is anticipated later this year. Subsequent negotiations between the Parliament, the Council, and the European Commission will refine the final details.
The open letter recognizes the Act’s global significance in regulating AI and addressing associated risks while promoting innovation. The signatories believe that the regulation can further these goals by fostering transparency and collaboration among stakeholders. The letter highlights the importance of establishing clear liability, recourse for harms, and sufficient standards and oversight to mitigate risks associated with AI.
In summary, the open letter urges EU policymakers to review the AI Act to ensure it supports open-source AI models effectively. The signatories believe the current provisions are at odds with open-source development practices and the needs of individual developers and non-profit research organizations. They propose concrete suggestions to address these concerns and emphasize the Act's opportunity to promote transparency, collaboration, and risk mitigation within the AI sector.
Frequently Asked Questions (FAQs) Related to the Above News
Why are tech firms calling for a review of the European Union's AI Act?
Tech firms are calling for a review of the European Union's AI Act because they believe that certain provisions within the Act could hinder the development of open-source AI models. They argue that regulating open-source projects as if they were commercial products or deployed AI systems would be detrimental to the open-source AI community.
What are the key suggestions put forward by the tech firms?
The key suggestions put forward by the tech firms include clearly defining AI components; clarifying that collaborative development of open-source models does not subject developers to the Act's requirements; supporting coordination between the proposed AI Office and the open-source ecosystem; allowing exceptions for researchers to conduct limited real-world testing; and setting proportional requirements for foundation models.
What is open source software?
Open-source software is software whose source code is publicly available, allowing anyone to freely access, modify, and enhance it. In the field of AI, open-source software plays a crucial role in training and deploying models.
When is the European AI Act expected to become law?
The European Parliament passed the AI Act in June, but it will only become law once the EU Council, representing the 27 member states, agrees on a common version of the text, which is anticipated later this year. Subsequent negotiations between the Parliament, the Council, and the European Commission will refine the final details.
What goals do the signatories believe the AI Act can achieve?
The signatories believe that the AI Act can regulate AI and address associated risks while promoting innovation. They emphasize the importance of fostering transparency and collaboration among stakeholders, establishing clear liability and recourse for harms, as well as implementing sufficient standards and oversight to mitigate risks associated with AI.
How does the open letter propose to address the concerns regarding the Act's impact on open-source AI models?
The open letter proposes clearly defining AI components; clarifying that collaborative development of open-source models does not subject developers to the Act's requirements; supporting coordination between the proposed AI Office and the open-source ecosystem; allowing exceptions for limited real-world testing by researchers; and setting proportional requirements for foundation models. These suggestions aim to ensure that the Act supports open-source AI models effectively.
What will happen once the EU Council agrees on a common version of the AI Act?
Once the EU Council agrees on a common version of the AI Act, negotiations between the Council, the European Parliament, and the European Commission will settle the final wording. This process will ultimately lead to the Act being finalized and implemented as law across the European Union.