Chatbots like ChatGPT and Google Bard fail to meet EU AI law standards: Study

A recent study by Stanford University researchers has found that none of the major large language model (LLM) providers fully comply with the European Union (EU) Artificial Intelligence (AI) Act. The Act, recently adopted by the European Parliament, is the first regulation of its kind to govern AI at a regional level, and it serves as a blueprint for AI regulation worldwide. The research shows, however, that even the highest-scoring providers have considerable work to do to achieve compliance. The researchers evaluated 10 major model providers against 12 AI Act requirements, rating each requirement on a scale of 0 to 4. No provider scored over 75% of the maximum, leaving significant room for improvement.
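To make the scoring concrete, the arithmetic behind that 75% figure can be sketched as follows (the scores below are illustrative, not the researchers' actual data): each provider is rated 0–4 on each of the 12 requirements, for a maximum of 48 points.

```python
# Illustrative sketch of the study's scoring arithmetic.
# Each provider receives a 0-4 rating on each of 12 AI Act requirements.
MAX_PER_REQUIREMENT = 4
NUM_REQUIREMENTS = 12
MAX_TOTAL = MAX_PER_REQUIREMENT * NUM_REQUIREMENTS  # 48 points

def compliance_percent(scores):
    """Convert per-requirement ratings (each 0-4) to a percentage of the maximum."""
    assert len(scores) == NUM_REQUIREMENTS, "expected one rating per requirement"
    return 100 * sum(scores) / MAX_TOTAL

# A hypothetical provider rated 3 on every requirement reaches exactly 75%,
# the threshold that no provider in the study exceeded.
print(compliance_percent([3] * NUM_REQUIREMENTS))  # 75.0
```

So a top score of 36/48 or lower is consistent with the study's finding that every provider fell at or below 75%.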

The study also highlighted several crucial areas of non-compliance, including a lack of transparency about the copyright status of training data, energy consumption, emissions, and risk-mitigation methodology. The researchers also found a clear disparity between open and closed model releases: open releases led to more robust disclosure of resources but posed greater challenges for monitoring or controlling deployment.

The researchers proposed several recommendations for improving AI regulation, chief among them ensuring that the AI Act holds large foundation model providers accountable for transparency. They also highlighted the need for technical resources and talent to enforce the Act, reflecting the complexity of the AI ecosystem. The main challenge, the study suggests, lies in how quickly model providers can adapt their business practices to meet regulatory requirements.


In recent months, transparency in major model releases has declined, with OpenAI making no disclosures about data or compute in its GPT-4 report, citing the competitive landscape and safety implications. The researchers' work offers insight into the future of AI regulation and argues that, if enacted and enforced, the AI Act will have a positive impact on the ecosystem, paving the way for greater transparency and accountability.

Frequently Asked Questions (FAQs) Related to the Above News

What is the European Union (EU) Artificial Intelligence (AI) Act?

The EU AI Act is the first regulation of its kind to govern AI at a regional level, and it serves as a blueprint for AI regulation worldwide.

Do the large language models (LLMs) used by AI tools comply with the EU AI Act?

No. A recent study by Stanford University researchers found that none of the major LLM providers fully comply with the EU AI Act.

How did researchers evaluate major model providers?

The researchers evaluated 10 major model providers against the 12 AI Act requirements, rating each requirement on a scale of 0 to 4.

What are some areas of non-compliance with the EU AI Act?

Areas of non-compliance highlighted by the study include a lack of transparency about the copyright status of training data, energy consumption, emissions, and risk-mitigation methodology.

What are the recommendations for improving AI regulation proposed by the researchers?

The researchers recommended that the AI Act hold large foundation model providers accountable for transparency, and that regulators secure the technical resources and talent needed to enforce the Act.

Why is there a need for AI regulation?

AI regulation is needed to ensure transparency and accountability and to mitigate the potential risks associated with AI. If enacted and enforced, the AI Act will have a positive impact on the ecosystem, paving the way for greater transparency and accountability.


Aniket Patel