Chatbots like ChatGPT and Google Bard fail to meet EU AI law standards: Study


A recent study by researchers from Stanford University has revealed that none of the large language models (LLMs) behind today's AI tools comply with the European Union (EU) Artificial Intelligence (AI) Act. The Act, recently adopted by the European Parliament, is the first of its kind to regulate AI at a national and regional level and also serves as a blueprint for AI regulation worldwide. The research shows, however, that even the providers of the highest-scoring models have substantial work to do to attain compliance. The researchers evaluated 10 major model providers, scoring each of the 12 AI Act requirements on a scale of 0 to 4. None of the providers scored over 75 percent overall, suggesting significant room for improvement.
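The arithmetic behind that 75 percent figure is straightforward: 12 requirements, each scored 0 to 4, give a maximum of 48 points. The short Python sketch below is purely illustrative (it is not the study's code; the function name and example scores are hypothetical) and shows how an overall compliance percentage follows from the per-requirement rubric scores.

```python
# Illustrative sketch only -- not the Stanford study's actual code.
# 12 AI Act requirements, each scored on a 0-4 rubric (48 points maximum).

REQUIREMENTS = 12
MAX_PER_REQUIREMENT = 4

def compliance_percentage(scores: list[int]) -> float:
    """Total rubric points as a percentage of the 48-point maximum."""
    assert len(scores) == REQUIREMENTS
    assert all(0 <= s <= MAX_PER_REQUIREMENT for s in scores)
    return 100 * sum(scores) / (REQUIREMENTS * MAX_PER_REQUIREMENT)

# Hypothetical provider scoring 3 out of 4 on every requirement:
print(f"{compliance_percentage([3] * REQUIREMENTS):.0f}%")  # prints "75%"
```

A provider scoring 3 on every requirement lands exactly at the 75 percent ceiling the study reports; none of the 10 providers evaluated did better than that.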

The study also highlighted some crucial areas of non-compliance, including a lack of transparency in disclosing the copyright status of training data, energy consumption, emissions output, and the methodology used to mitigate potential risks. The researchers also found an apparent disparity between open and closed model releases: open releases lead to more robust disclosure of resources but pose greater challenges in monitoring or controlling deployment.

The researchers proposed several recommendations for improving AI regulation, including ensuring that the AI Act holds larger foundation model providers accountable for transparency. They also highlighted the need for technical resources and talent to enforce the Act, reflecting the complexity of the AI ecosystem. The main challenge, the study suggests, lies in how quickly model providers can adapt their business practices to meet regulatory requirements.


In recent months, transparency in major model releases has declined, with OpenAI making no disclosures about data or compute in its report on GPT-4, citing the competitive landscape and safety implications. The researchers' work offers insight into the future of AI regulation, arguing that, if enacted and enforced, the AI Act will have a positive impact on the ecosystem, paving the way for greater transparency and accountability.

Frequently Asked Questions (FAQs)

What is the European Union (EU) Artificial Intelligence (AI) Act?

The EU AI Act is the first of its kind to regulate AI on a national and regional level and also serves as a blueprint for worldwide AI regulations.

Do the large language models (LLMs) used by AI tools comply with the EU AI Act?

No, a recent study by researchers from Stanford University has revealed that none of the large language models (LLMs) used by AI tools comply with the EU AI Act.

How did researchers evaluate major model providers?

The researchers evaluated 10 major model providers on a scale of 0 to 4, based on their degree of compliance with the 12 AI Act requirements.

What are some areas of non-compliance with the EU AI Act?

Some areas of non-compliance highlighted by the study include a lack of transparency in disclosing the status of copyrighted training data, energy consumption, emissions output, and methodology to mitigate potential risks.

What are the recommendations for improving AI regulation proposed by the researchers?

The researchers proposed several recommendations, including ensuring that the AI Act holds larger foundation model providers accountable for transparency, and providing the technical resources and talent needed to enforce the Act.

Why is there a need for AI regulation?

AI regulation is needed to ensure transparency and accountability and to mitigate the potential risks associated with AI. If enacted and enforced, the AI Act is expected to have a positive impact on the ecosystem, paving the way for greater transparency and accountability.


Aniket Patel
Aniket is a skilled writer at ChatGPT Global News, contributing to the ChatGPT News category. With a passion for exploring the diverse applications of ChatGPT, Aniket brings informative and engaging content to our readers. His articles cover a wide range of topics, showcasing the versatility and impact of ChatGPT in various domains.
