Governments worldwide are grappling with the challenges of regulating artificial intelligence (AI) as OpenAI’s ChatGPT continues to raise concerns. The rapid advancement of AI technology, exemplified by ChatGPT, is making it increasingly difficult for governing bodies to establish comprehensive laws governing its use. However, various countries and international organizations are taking steps to address these concerns and develop regulations. Here is an overview of the measures being implemented around the world:
Australia: Australia is introducing new codes that will compel search engines to prevent the sharing of AI-generated child sexual abuse material and the production of deepfake versions of such material.
Britain: At the first global AI Safety Summit held at Bletchley Park, over 25 countries, including the US, China, and the EU, signed the Bletchley Declaration, emphasizing the need for collaboration and a common oversight approach. In support of this, Britain announced an increase in funding for the AI Research Resource to ensure advanced AI models are developed safely. Additionally, Britain plans to establish the world’s first AI safety institute to assess the risks associated with various AI models.
China: China expressed its willingness to enhance collaboration on AI safety and contribute to the development of an international governance framework. It has already published proposed security requirements and temporary measures to regulate the offering of AI services.
European Union: European lawmakers have reached an agreement on the designation of high-risk AI systems, a pivotal aspect of new AI rules. This progress brings the EU closer to finalizing the landmark AI Act, which is expected to be unveiled in December. Furthermore, European Commission President Ursula von der Leyen has called for the establishment of a global panel to evaluate the risks and benefits associated with AI.
France: France’s privacy watchdog has initiated an investigation into ChatGPT following complaints.
G7: The Group of Seven countries has agreed on an 11-point code of conduct for companies developing advanced AI systems, with the aim of promoting safe and trustworthy AI globally.
Italy: Italy’s data protection authority plans to review AI platforms and recruit AI experts. ChatGPT was temporarily banned in the country earlier this year over privacy concerns but was later made available again.
Japan: Japan intends to implement regulations closer to the US approach, rather than the stricter ones proposed by the EU, by the end of 2023. The country’s privacy watchdog has also cautioned OpenAI against collecting sensitive data without individuals’ consent.
Poland: Poland’s Personal Data Protection Office is investigating OpenAI over a complaint alleging that ChatGPT violates EU data protection laws.
Spain: Spain’s data protection agency has launched a preliminary investigation into potential data breaches involving ChatGPT.
United Nations: The UN Secretary-General has announced the creation of a 39-member advisory body, consisting of tech company executives, government officials, and academics, to address issues related to the international governance of AI. The UN Security Council held its first formal discussion on AI in July, recognizing its potential impact on global peace and security.
United States: The US plans to establish an AI safety institute to assess the risks associated with frontier AI models. President Joe Biden has also issued an executive order requiring developers of AI systems that pose risks to national security or public welfare to share the results of safety tests with the government. Congress has held hearings on AI and hosted an AI forum with industry leaders, where participants discussed the need for an AI "referee" in the US.
As governments worldwide grapple with the complex task of regulating AI, these initiatives highlight the global efforts to ensure the safe and responsible use of this powerful technology.