The Pentagon's AI Plans Stray from the ChatGPT Model

The US Department of Defense (DoD) is pursuing approaches to artificial intelligence (AI) different from the highly publicised OpenAI ChatGPT, because military leaders need to be able to trust their tools. ChatGPT and other large language models are good at mimicking human writing, but they rely on information scraped from millions of websites, much of which is untrue. DoD will instead build large language and other generative AI models on its own encrypted data, trained with departmental data and compute. The department will convene experts "to get after, you know, just what the use cases are; just what the state of the art is in the industry, and academia," said Maynard Holliday, DoD's deputy chief technology officer for critical technologies. Incentivising trust-building exercises between operators and creators is key, including "quick succession" exercises aimed at soldier feedback and iteration. A breakthrough AI application for CENTCOM in the years ahead will likely look less like a flashy (and buggy) text generator and more like a knowledge graph.
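Unlike free-text generation, a knowledge graph stores facts as explicit, queryable relationships, so every answer can be traced to a stored statement. A minimal sketch of the idea, using subject-predicate-object triples (all entity and relation names below are hypothetical examples, not anything from the article or DoD data):

```python
# Minimal illustrative knowledge graph built from subject-predicate-object
# triples. All entities and relations are hypothetical placeholders.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # adjacency map: subject -> list of (predicate, object) pairs
        self.edges = defaultdict(list)

    def add_fact(self, subject, predicate, obj):
        """Record one triple, e.g. ('UnitA', 'located_in', 'RegionX')."""
        self.edges[subject].append((predicate, obj))

    def query(self, subject, predicate):
        """Return every object linked to `subject` by `predicate`."""
        return [o for p, o in self.edges[subject] if p == predicate]

# Hypothetical usage: each retrieved answer comes from an explicitly
# stored fact, rather than from text scraped off the open web.
kg = KnowledgeGraph()
kg.add_fact("UnitA", "located_in", "RegionX")
kg.add_fact("UnitA", "operates", "PlatformY")
print(kg.query("UnitA", "located_in"))  # -> ['RegionX']
```

The design choice this illustrates is auditability: a graph query either matches a stored fact or returns nothing, whereas a language model may fabricate a fluent but false answer.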