Senate Majority Leader Chuck Schumer's new initiative, the SAFE Innovation Framework, aims to regulate the emerging AI industry. The plan seeks to balance economic competitiveness and safety by managing potential risks. Some praise the initiative, while others express concerns about stifling innovation.
Discover Chatbot Arena, a crowdsourced experiment from the University of California, Berkeley, for evaluating chatbots. Join the 40,000 participants voting on anonymized, head-to-head matchups between AI models!
A new Stanford University study finds that none of the major large language models complies with the EU AI Act; even the providers of the highest-scoring models fall short in crucial areas and have work to do to reach compliance. The study offers recommendations for AI regulation, including accountability for transparency requirements and the technical resources regulators need to enforce the Act. If introduced and enforced, the AI Act could have a positive impact, paving the way for greater transparency and accountability.
Learn how OpenAI and other tech giants have been lobbying for amendments to the EU's AI Act, seeking changes that would reduce the regulatory burden on companies like OpenAI and could undermine the Act's aim of balancing innovation and safety.
The BEUC, the European Consumer Organisation, warns of potential consumer harm from generative AI and calls for immediate regulatory investigations, citing risks of bias, disinformation, and fraud. The EU's AI Act takes a risk-based approach, categorizing AI systems by the level of risk they pose.
Explore how Democratic tech policy has evolved, from Obama's optimism to Harris's vision at the Democratic National Convention. What's next for Democrats on tech?