Drafting an AI Act to Cover Military Use: National Security Agencies to Be Consulted on Risk Management
Artificial intelligence (AI) technology is advancing rapidly and finding its way into various sectors, including the military. To address the potential risks and safeguard national security, the Legislative Research Bureau has recommended that authorities draft an AI act that includes provisions for military use. The recommendation comes in response to recent developments in which AI has been used in military operations by countries such as Israel and the United States.
The Legislative Research Bureau suggests that rules governing the military use of AI could either be incorporated into the existing National Defense Act or included in the drafted basic law on AI. One example it cites is the Israeli military's deployment of AI-guided combat drone swarms in the Gaza Strip, in a conflict that has been described as the world's first AI war. The US Department of Defense also plans to build an extensive network of AI-powered technology, including drones and autonomous systems, to counter threats from China.
The bureau acknowledges that advanced AI algorithms can outperform human operators at certain tasks and could help reduce military casualties. Critics, however, argue that over-reliance on AI in the military could lead to collateral damage among civilians. Belgium has already taken a proactive stance by limiting or banning the use of lethal autonomous weapons.
To strengthen risk management and ensure a balanced approach, the bureau recommends consulting national security and military agencies while drafting the AI act. Authorities should also refer to the Government Procurement Act and clearly define, within the law, the specifications for authorizing military use of AI.
To speed up the process, the bureau suggests that, if reconciling the views of various government agencies prolongs passage of the drafted act, authorization for military use of AI could instead be added to the existing National Defense Act, with its scope of application and limitations defined there.
To manage military use of AI, the bureau proposes a hierarchical categorization system: uses that could cause mass casualties would be absolutely prohibited, uses that might cause limited casualties would be relatively prohibited, and uses that could harm only uncrewed vehicles, equipment, or facilities, without risking human lives, would be permitted as exceptions.
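As a rough illustration only, the three tiers amount to a simple decision rule. The sketch below is hypothetical: the tier names, assessment flags, and function are not part of the bureau's proposal, and the actual criteria would be set out in the legislation itself.

```python
from enum import Enum


class RiskTier(Enum):
    """Tiers mirroring the bureau's proposed hierarchical categorization."""
    ABSOLUTELY_PROHIBITED = "absolutely prohibited"  # could cause mass casualties
    RELATIVELY_PROHIBITED = "relatively prohibited"  # could cause limited casualties
    PERMITTED_EXCEPTION = "permitted exception"      # only uncrewed assets at risk


def classify_use_case(may_cause_mass_casualties: bool,
                      may_cause_any_casualties: bool) -> RiskTier:
    """Map a proposed military AI use case to a risk tier.

    The boolean inputs are hypothetical assessment flags used here purely
    for illustration; real legislation would define the criteria in detail.
    """
    if may_cause_mass_casualties:
        return RiskTier.ABSOLUTELY_PROHIBITED
    if may_cause_any_casualties:
        return RiskTier.RELATIVELY_PROHIBITED
    # Remaining cases: only uncrewed vehicles, equipment, or facilities at risk.
    return RiskTier.PERMITTED_EXCEPTION


# Example: an autonomous system that could damage equipment but not people.
print(classify_use_case(may_cause_mass_casualties=False,
                        may_cause_any_casualties=False))
# RiskTier.PERMITTED_EXCEPTION
```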
The bureau emphasizes that proper AI management in the military has the potential to save lives. By implementing effective regulations and risk management strategies, countries can harness the benefits of AI technology while minimizing the potential harms associated with its military application.
In conclusion, as AI technology continues to evolve and play a significant role in military operations, it is crucial to establish clear guidelines and regulations. By including national security and military agencies in the drafting process of an AI act and considering the risks and benefits associated with its military use, countries can ensure the responsible and effective implementation of AI technology in the defense sector.