Tech giants OpenAI, Microsoft, and Google are limiting AI chatbot access in Hong Kong amid crackdowns on the pro-democracy movement, possibly because of China's national security law.
Google's search app for Android will soon introduce a drag-and-drop feature: drop a link, text, or image into the app, and it will automatically bring up search results based on that content. The feature arrives amid two perceived dangers. The first is that the integration of generative AI could reduce the visibility of verified sites in search results. The second is Google Bard, the company's generative AI chatbot, whose output could plagiarise art and writing found online. The article explores potential ways to regulate AI and hold it accountable amid such concerns.
The UK government aims to lead on AI safety by hosting a global summit this fall, where key tech companies and researchers will discuss mitigating the risks of frontier AI systems. The country hopes to strengthen its AI relationship with the US by aligning regulations and positioning itself as an industry partner. Critics argue the new summit could interfere with existing international agreements on AI regulation.
The EU is urging big tech companies to label all AI-generated content to curb the spread of fake news. This voluntary code of practice complements the EU's forthcoming AI Act, which aims to reduce risks associated with synthetic media. Google, Microsoft, TikTok, and others are on board, but the challenge will be detecting and labeling such content in real time. Compliance may not become mandatory until 2026, but European Commission Vice-President Věra Jourová expects companies to honor her request in the meantime. Google has expressed confidence in its ability to identify and label AI-generated content.