Killer Robots Are Coming to the Battlefield
The proliferation of autonomous weapons systems (AWS), often referred to as ‘killer robots’, is a growing concern on the modern battlefield. As technology rapidly advances, militaries around the world are integrating these autonomous systems into their arsenals. While AWS offer benefits such as augmented decision-making, cost-effectiveness, and the potential for reduced collateral damage, they also pose significant risks to international security and stability. How to develop and deploy AWS ethically has become a key question for governments worldwide.
Recent discussions at the United Nations General Assembly highlighted the need for human control over AWS decisions. Delegates agreed that algorithms should not have full authority over decisions to kill or harm humans, regardless of the weapons involved. That acknowledgment led the General Assembly to adopt Resolution 78/241, with 152 votes in favor and only four against. The resolution emphasized that the UN Charter, international human rights law, and international humanitarian law must apply to AWS.
Within these global discussions, countries hold contrasting views on the use of AWS. Australia, along with Canada, Japan, South Korea, the UK, and the US, endorses lethal AWS but emphasizes the importance of meaningful human intervention and the protection of civilian populations. By contrast, 29 states support a ban on lethal AWS, while China takes an ambiguous stance, supporting a ban on use but not on development. The key point of contention is whether AWS can uphold the principles of distinction, necessity, and proportionality set out in international humanitarian law.
Australia, as a prominent player in military AI development, is well placed to influence discussions on AWS capabilities. The AUKUS partners, Australia among them, need to establish a dedicated framework for ethical AI in a defense context. Alignment of their AI policies is crucial for the responsible use of military AI, ensuring reliability, accountability, explainability, and human control. By participating in international dialogues and expert meetings, Australia can help shape new rules and norms governing AWS.
To address ethical concerns, Australia should establish a defense policy specifically for AI-enabled capabilities, separate from civilian AI considerations. Clarity and coordination among the AUKUS partners' AI principles are vital to meeting shared strategic objectives. By continuously revising and strengthening governance frameworks as AWS technologies evolve, Australia can ensure that its AI-enabled military capabilities remain responsible and consistent with its international obligations.
While debate continues over the compatibility of AWS with international humanitarian law, an outright ban on AWS is increasingly unlikely. Many militaries are already employing AWS, including in the Libyan civil war and the Russia-Ukraine war. It is therefore essential to establish regulations and guidelines governing the ethical development and deployment of AWS.
Australia must actively engage in multilateral institutions and work with civil society to establish these rules for a world in which asymmetric AWS capabilities could dominate the threat landscape. Doing so would help mitigate the risk of authoritarian states misusing AWS in warfare. As the development and deployment of AWS continue to advance, ethical considerations and a balanced approach must remain priorities to ensure the responsible use of this technology on the modern battlefield.