Chatbots like ChatGPT and Google Bard fail to meet EU AI law standards: Study

A recent study by researchers from Stanford University has found that none of the major large language models (LLMs) behind today's AI tools comply with the European Union (EU) Artificial Intelligence (AI) Act. The Act, recently adopted by the European Parliament, is the first comprehensive regulation of AI at a regional level and serves as a blueprint for AI regulation worldwide. The research shows, however, that even the highest-scoring providers have substantial work to do to attain compliance. The researchers graded 10 major model providers against 12 AI Act requirements, scoring each requirement on a scale of 0 to 4. No provider earned more than 75% of the available points, suggesting significant room for improvement across the board.

The study also highlighted crucial areas of non-compliance, including a lack of transparency around the status of copyrighted training data, energy consumption, emissions output, and risk-mitigation methodology. The researchers also found a clear disparity between open and closed model releases: open releases led to more robust disclosure of resources but posed greater challenges in monitoring or controlling deployment.

The researchers proposed several recommendations for improving AI regulation, including ensuring that the AI Act holds larger foundation model providers accountable for transparency. They also highlighted the need for technical resources and talent to enforce the Act, reflecting the complexity of the AI ecosystem. The main challenge, the study suggests, lies in how quickly model providers can adapt their business practices to meet regulatory requirements.


Transparency in major model releases has declined in recent months, with OpenAI making no disclosures about data or compute in its GPT-4 report, citing the competitive landscape and safety implications. The researchers' work offers insight into the future of AI regulation, arguing that if enacted and enforced, the AI Act will have a positive impact on the ecosystem, paving the way for greater transparency and accountability.

Frequently Asked Questions (FAQs) Related to the Above News

What is the European Union (EU) Artificial Intelligence (AI) Act?

The EU AI Act is the first comprehensive regulation of AI at a regional level and serves as a blueprint for AI regulation worldwide.

Do the large language models (LLMs) used by AI tools comply with the EU AI Act?

No, a recent study by researchers from Stanford University has revealed that none of the large language models (LLMs) used by AI tools comply with the EU AI Act.

How did researchers evaluate major model providers?

The researchers graded 10 major model providers against 12 AI Act requirements, scoring each requirement on a scale of 0 to 4.

What are some areas of non-compliance with the EU AI Act?

Areas of non-compliance highlighted by the study include a lack of transparency around the status of copyrighted training data, energy consumption, emissions output, and risk-mitigation methodology.

What are the recommendations for improving AI regulation proposed by the researchers?

The researchers recommended ensuring that the AI Act holds larger foundation model providers accountable for transparency, and highlighted the need for technical resources and talent to enforce the Act.

Why is there a need for AI regulation?

AI regulation is needed to ensure transparency and accountability and to mitigate the potential risks associated with AI. The AI Act, if enacted and enforced, is expected to have a positive impact on the ecosystem, paving the way for greater transparency and accountability.


Aniket Patel
Aniket is a skilled writer at ChatGPT Global News, contributing to the ChatGPT News category. With a passion for exploring the diverse applications of ChatGPT, Aniket brings informative and engaging content to our readers. His articles cover a wide range of topics, showcasing the versatility and impact of ChatGPT in various domains.
