The European Parliament has approved a negotiating mandate for the AI Act, which sets out a rulebook for AI aligned with the EU's values of transparency, safety, and privacy. The act bans remote biometric surveillance and predictive policing and expands the classification of high-risk AI systems. MEPs also imposed stringent obligations on developers and added consumer rights. However, AI systems that recognize emotions will not be banned. The next step is the trilogue, where negotiations between EU Member States and the Parliament will take place. The overwhelming support in Parliament underscores the urgent need for AI regulation.
Nigeria's National Information Technology Development Agency (NITDA) is developing a code of practice for AI tools, including ChatGPT, to ensure responsible and ethical deployment. The policies will address transparency, data privacy, bias, accountability, and fake news. The move follows Italy's temporary ban on ChatGPT and the EU's draft legislation to regulate AI. Collaboration between policymakers and AI developers is key to preventing the spread of misinformation and addressing ethical issues. The NITDA's initiative highlights the need for effective regulations that balance innovation with public safety.
The European Union is set to regulate AI technologies, including biometric surveillance and emotion recognition. The proposed restrictions aim to promote a human-centric approach to AI while safeguarding fundamental rights and democracy. The AI Act sets the tone for AI development and governance worldwide.
Salesforce's new AI feature, Einstein Guided Selling, aims to bring transparency and accountability to AI decision-making, helping businesses make better-informed decisions.
Discover the limitations and benefits of medical AI chatbots like Bard and ChatGPT. Their responses should be double-checked for accuracy, underscoring the need for transparency in the healthcare industry.
Explore the evolution of tech policy from Obama's optimism to Harris's vision at the Democratic National Convention. What's next for Democrats on tech?