Governments and tech companies around the world are increasingly turning their attention towards regulating artificial intelligence (AI) and acknowledging the threats it poses. Recent developments mark a shift from the race to develop AI to a race to regulate it, with various countries and organizations taking proactive steps.
The United States took a surprising step when President Joe Biden issued an executive order calling on AI majors to prioritize transparency and careful development. Soon after, the AI Safety Summit convened by Rishi Sunak and attended by 28 countries including China featured notable figures like Elon Musk, Demis Hassabis, and Sam Altman, and resulted in a joint communiqué (the Bletchley Declaration) on regulating frontier AI. The European Union, China, and India are also eager to join the race with regulatory initiatives of their own. Meanwhile OpenAI, a leading AI organization, has announced a team dedicated to 'superalignment', emphasizing the need for scientific and technical breakthroughs to steer and control AI systems more effectively.
While this proactive approach shows promise, concerns have been raised about its preoccupation with future risks, particularly those of frontier AI, when greater attention arguably belongs on the challenges AI poses right now. Large language models (LLMs) routinely produce misleading or false information, AI-powered driverless cars have caused fatal accidents, and biases pervade GenAI models trained on skewed datasets. Copyright and plagiarism disputes have already led to lawsuits, and the energy consumed in training massive LLMs raises concerns about CO2 emissions and ecological degradation.
Renowned AI scientist and author Gary Marcus shares the view that the focus on long-term AI risk overshadows the immediate threats AI presents. Criminals, including terrorists, could exploit the capabilities of LLMs today, raising urgent questions about countermeasures. Critics argue that the narrow focus on 'science fiction' scenarios diverts attention from pressing present-day issues that major AI firms may prefer to keep off the policy agenda. It becomes crucial, then, to weigh the likelihood of hypothetical doomsday scenarios against the certainty of the multiple immediate threats AI already poses.
The key lies in regulating the human use of AI rather than AI itself. Malevolent state actors deploying deepfakes and false content to undermine democracy, or desperate dictators turning to AI-based lethal autonomous weapons in times of conflict, paint a far more realistic and urgent picture. Meanwhile, uncontrolled competition to build ever-larger LLMs exacerbates global warming, and the proliferation of fake news can become a catalyst for social discord.
Considering the harm that humans can cause using AI, it is imperative to establish regulations that mandate ethical and responsible usage. Balancing the need for innovation against the preservation of societal well-being becomes paramount in this race to regulate AI, and regulation must accommodate diverse viewpoints without stalling the regulatory process itself.
In conclusion, the global race is shifting from developing AI to regulating it. Governments and tech companies are acknowledging the risks AI poses and taking proactive steps to manage them, but the emphasis on future worries must not overshadow the urgent challenges of the present. By regulating the human use of AI and addressing immediate concerns such as misinformation, bias, accidents, and environmental impact, we can navigate the race to regulate AI effectively and responsibly.