Defense Officials Warn AI Models Vulnerable to Exploitation, Not Yet Ready for Full Deployment
Artificial intelligence (AI) models are more susceptible to exploitation than previously believed and are not yet suitable for full deployment in the military, according to defense officials. At a symposium hosted by the National Defense Industrial Association, Alvaro Velasquez of the Defense Advanced Research Projects Agency (DARPA) highlighted how easily large language models (LLMs) can be attacked. He disclosed that DARPA-funded research successfully bypassed safety measures in LLMs, causing them to provide instructions for creating dangerous items such as bombs. The findings raise concerns about how well current safeguards hold up against adversarial attacks on AI systems.
Generative AI tools, which can emulate human-like text creation, have grown significantly in popularity in recent years. One such tool is ChatGPT, a model capable of solving problems and generating content in response to prompts. The Department of Defense (DOD) began experimenting with generative AI even before ChatGPT's release, according to Deputy Secretary of Defense Kathleen Hicks. Hicks, a three-time recipient of the prestigious Wash100 Award, noted that some DOD units have developed their own AI models, but these are still being tested under human supervision.
Deputy Secretary Hicks expressed reservations about the maturity of commercially available AI systems driven by large language models, stating that they do not yet meet the department's ethical AI principles, which are crucial for reliable operational use. Even so, the DOD has identified more than 180 scenarios where generative AI technologies could add value under appropriate oversight, including accelerating software development, expediting battle damage assessments, and producing verifiable summaries from both open source and classified data sets.
To address the growing use of advanced AI technologies by global adversaries, the DOD released a new AI strategy that emphasizes developing new capabilities while safeguarding them against theft and exploitation by foreign entities.
Deputy Secretary Hicks clarified that the DOD does not intend to use AI to promote conflict or to seek technological supremacy over other nations. Instead, the department aims to deter aggression and protect the interests of the United States, its allies and partners.
To delve deeper into the Department of Defense’s exploration of AI, the Potomac Officers Club’s 5th Annual Artificial Intelligence Summit offers a platform for experts from the public and private sectors to discuss various topics related to AI in the federal government. The summit provides an opportunity to gain valuable insights into the advancements being made in this field within the defense industry.
In conclusion, defense officials have cautioned against premature full-scale deployment of AI models because of their vulnerability to exploitation. While generative AI tools have gained popularity, commercially available systems are not yet mature enough for reliable operational use. The DOD is committed to developing technologies that protect the nation's advantages without compromising its ethical principles, and through ongoing research, oversight and responsible use, the department aims to leverage AI for the benefit of national security.