Mark Zuckerberg and Sam Altman express support for the EU's AI Act, highlighting its risk-based approach and measures such as watermarking for traceability. Expert perspectives on AI's impact and the need for responsible regulation are also discussed.
OpenAI lobbied the EU to weaken its AI regulations, even as the company publicly promotes cooperation among global powers to govern AI. In the final AI Act, general-purpose AI systems were not classified as high-risk.
A new Stanford University study found that no major large language model provider fully complies with the draft EU AI Act. Even providers of high-scoring models have work to do, with non-compliance in crucial areas. The study recommends measures for AI regulation, including holding providers accountable for transparency and giving regulators the technical resources to enforce the Act. If introduced and enforced, the AI Act could have positive impacts, paving the way for greater transparency and accountability.
OpenAI is lobbying European officials to ease the proposed AI Act's regulations on high-risk systems. The Act aims to balance regulation and innovation, but OpenAI argues that general-purpose AI systems like GPT-4 should be exempt from its strictest requirements to avoid hindering innovation. While the lobbying has had some success, it remains unclear what the final version of the Act will contain and whether it will strike a balance between regulating AI for safety and promoting innovation.
Learn how OpenAI and other tech giants have lobbied for amendments to the EU's AI Act that would reduce their regulatory burden, potentially undermining the Act's aim of balancing innovation and safety.
Explore the evolution of tech policy from Obama's optimism to Harris's vision at the Democratic National Convention. What's next for Democrats in tech?