OpenAI, the company behind ChatGPT, has recently made a significant change to its policy on military use of its AI tools. The company previously prohibited the use of its technology for weapons development, military, and warfare, citing concerns that it could contribute to the escalation of conflicts. It has now reversed that policy and has already begun collaborating with the Department of Defense.
The change to OpenAI’s usage policy took place last week and removed language prohibiting the use of its models for activities with a high risk of physical harm. An OpenAI spokesperson clarified that while the policy still strictly prohibits using the company’s tools to harm people or develop weapons, there are national security use cases that align with its mission. The spokesperson cited OpenAI’s ongoing partnership with the Defense Advanced Research Projects Agency (DARPA) to develop new cybersecurity tools as an example.
The decision to allow military use cases has raised concerns among experts who fear that AI could contribute to the development of autonomous weapons, sometimes called ‘slaughterbots,’ capable of killing without human intervention. These experts argue that such systems could exacerbate existing conflicts worldwide.
Notably, in 2023, roughly 60 countries, including the United States and China, signed a ‘call to action’ to limit the military use of AI. Human rights experts pointed out, however, that the agreement was not legally binding and failed to address crucial concerns such as lethal AI drones and the potential for AI to escalate conflicts.
There are already instances of AI being used for military purposes. Ukraine, for example, has used facial recognition and AI-assisted targeting systems in its war with Russia. And in 2020, Libyan government forces deployed an autonomous Turkish-made Kargu-2 drone that attacked retreating rebel soldiers, in what has been described as possibly the first autonomous drone attack in history.
OpenAI’s Vice President of Global Affairs, Anna Makanju, said the removal of the blanket prohibition on military use was intended to enable discussion of military use cases that align with the company’s goals, and to make clear that some military applications of AI can have beneficial outcomes.
Military use of AI by major tech companies has stirred controversy before. In 2018, thousands of Google employees protested the company’s involvement in Project Maven, a Pentagon contract that used AI to analyze drone surveillance footage; Google declined to renew the contract after the protests. Microsoft employees similarly protested a contract to supply soldiers with augmented reality headsets.
Prominent technology figures, including Elon Musk, have called for a ban on autonomous weapons, warning of the dangers they pose. They argue that fully autonomous weaponry could bring about a third revolution in warfare, after gunpowder and nuclear weapons, and that once the Pandora’s box of autonomous weapons is opened, it may be impossible to close again.
OpenAI’s recent policy change reflects the ongoing debate over military uses of AI. As the field continues to advance, it will be crucial to weigh the potential consequences carefully and to establish international guidelines governing the use of AI in military contexts.