OpenAI Takes New Stance on Military Use of Technology, Raising Concerns Over AI-Powered Weapons
OpenAI, the renowned artificial intelligence research laboratory, has significantly changed its policy on military applications of its technology, sparking concerns over the possible development of AI-powered weapons. Without any formal announcement, the company quietly revised its usage policies on January 10, lifting a broad ban on the use of its technology for military and warfare purposes.
Under the new policy, OpenAI still prohibits using its technology to develop weapons, harm others, or destroy property. The company says it aims for a set of universal principles that are easy to remember and apply, especially now that its tools are widely used by everyday users around the world, who can also build their own GPTs through the newly launched GPT Store.
However, AI experts have voiced concerns about the vagueness of OpenAI's policy rewrite, particularly as AI is already being deployed in conflicts such as the ongoing war in the Gaza Strip, where the Israeli military has said it uses AI to identify specific bombing targets within Palestinian territory.
Sarah Myers West, Managing Director of the AI Now Institute and a former AI policy analyst at the Federal Trade Commission, questioned how OpenAI plans to enforce a policy written in such unclear language, noting that it remains uncertain how the company intends to ensure compliance with its guidelines.
While OpenAI has not disclosed any concrete plans, the policy change could open the door to future contracts with the military. Notably, according to Palmer Luckey, founder of the defense tech startup Anduril, the growing popularity of ChatGPT has already piqued politicians' interest in AI-powered weapons.
OpenAI's decision to relax restrictions on military use of its technology has raised concerns about the potential development of AI-powered weapons, and the vague language of the revised policy has prompted experts to question the company's enforcement strategy. How this shift will shape OpenAI's involvement with the military, and the broader use of AI in warfare, remains to be seen.