AI Transformational Model accelerates battle staff decision-making ‘seven-fold’ in Air Force experiment
In the service’s inaugural DASH experiment, coders from both industry and the Shadow Operations Center – Nellis (ShOC-N) spent two weeks building “agentic AI” tools that staff officers then tried out in high-pressure conflict scenarios.


Air Force personnel in civilian clothes test new AI command and control tools in a wargame held in Las Vegas, Nev. (Air Force photo)
WASHINGTON — New AI tools sped up a command staff’s decision-making process seven-fold in a recent wargame in Las Vegas, according to the Air Force.
The experiment, called “Decision Advantage Sprint for Human-Machine Teaming” (DASH), is the first of three DASH wargames planned for this year. They’re exploring different aspects of an Air Force-developed AI called the Transformational Model for Decision Advantage. It’s designed to be upgraded with new features, modules, and “microservices” by both military and private-sector teams, and many of the tools used in the first DASH were custom-coded during the exercise itself.
Officials said in a Monday press release that, according to their “initial analysis” of the wargame, the digital aids doubled the number of operational “dilemmas” humans were able to address and generated three times as many “valid solutions” to those problems, ranging from striking an enemy target to refueling friendly planes. And while errors and outright “hallucinations” have been a major concern with AI, the Air Force said this experiment showed an increase in the quantity of decisions without a decrease in quality.
“Machine answer errors were on par with human error, not bad for a week’s worth of coding,” said Air Force Col. Christopher Cannon, head of the Advanced Battle Management System (ABMS) Cross Functional Team, which originally created the Transformational Model. “We demonstrated that machines absolutely helped, software services helped, but we also demonstrated that we can in fact build a software microservice that allows this to happen. … We’re not buying software to display more data: Coders are building software that actively helps operators transform that data into measurably better battle management.”
An Air Force official told Breaking Defense that “the Transformational Model (TM) is intended to transform how the Department of the Air Force, and Joint and Coalition forces in general, modernize C2 [command and control]. The TM seeks to define the military decisions associated with achieving transformations on the battlefield, à la transforming a tank into a non-tank (i.e., pile of rubble), transforming a disadvantageous situation to a dominant position, etc. … Our goal is to deliver C2 and intelligence capabilities that enable our warfighters to achieve decision advantage.”
The Transformational Model began development in early 2022 and showed measurable impacts on military effectiveness in earlier experiments, the Air Force has said in previous news releases.
The latest experiment was held over two weeks in Las Vegas, Nev., home to Nellis Air Force Base and the Shadow Operations Center – Nellis (ShOC-N), a training unit devoted to high-tech, high-stress experimentation. A mixed staff of US and Canadian military personnel first ran through a combat scenario in the conventional manner, without using artificial intelligence. Then they ran through another, similar but not identical wargame where they got to use the AI. That included the Transformational Model and the new tools built on top of it by both Air Force coders from the ShOC-N and multiple industry teams, who were able to participate because the scenarios were kept at the unclassified level.
These ongoing Air Force experiments — and the service’s Advanced Battle Management System initiative as a whole — are on the forward edge of a much larger effort by the US military to apply AI, not to build robotic weapons, but to make its human commanders more effective.
AI, CJADC2, And The Transformational Model
While armed drones and AI-driven dogfights capture the headlines, the Pentagon is also investing heavily in AI for HQs, where commanders and their staffs struggle to sort the masses of data required to run a 21st-century war. It’s a less glamorous but arguably even more essential application of artificial intelligence for the US military. Commanders need to get up-to-date intelligence, make informed decisions, and transmit timely orders to their frontline troops. The side that can run through this cycle faster can effectively lap its slower-deciding enemy.
The US military’s primary effort to harness AI to aid commanders is the nascent but rapidly evolving all-service command system, Combined Joint All-Domain Command and Control (CJADC2). Rather than replace human commanders and their staffs with AI, the Pentagon’s objective is to augment them with virtual assistants that can rapidly and accurately process the masses of information a modern military operation requires, freeing up the humans to strategize.
The Air Force’s contribution to CJADC2 is led by Cannon’s ABMS team. The idea behind their Transformational Model, in turn, is to “decompose” the chaos and complexity of military operations into distinct “actionable entities” and then propose specific actions to take towards them. Those AI-identified entities can be military units — friendly, enemy, neutral, or unidentified — or even abstractions like relationships and events, the Air Force official told Breaking Defense, and the AI-generated options can be anything from striking a hostile force to resupplying a friendly one. The Transformational Model currently focuses on operations in air, space, and the electromagnetic spectrum (e.g. communications, radar, and jamming), but it could be generalized to cover other “domains” for the land and sea services as well, Air Force officials have said.
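The Air Force has not published the Transformational Model’s internals, but the “decompose into entities, then propose actions” pattern described above can be sketched in broad strokes. In this hypothetical Python illustration, every class, field, and rule is an assumption made for clarity, not the actual system:

```python
from dataclasses import dataclass

# Hypothetical sketch only: names, fields, and rules below are illustrative
# assumptions about the entity/action pattern, not the real Transformational Model.

@dataclass
class ActionableEntity:
    """A unit, relationship, or event the model has identified."""
    entity_id: str
    kind: str          # e.g. "unit", "relationship", "event"
    affiliation: str   # "friendly", "enemy", "neutral", "unknown"
    domain: str        # "air", "space", or "electromagnetic"

@dataclass
class ProposedAction:
    """A candidate action offered to a human decision-maker, never executed autonomously."""
    verb: str          # e.g. "strike", "refuel", "jam"
    target: ActionableEntity
    rationale: str

def propose_actions(entity: ActionableEntity) -> list[ProposedAction]:
    """Toy rule set: enemy entities yield strike/jam options, friendlies yield support."""
    if entity.affiliation == "enemy":
        return [
            ProposedAction("strike", entity, "hostile target identified"),
            ProposedAction("jam", entity, "degrade hostile emissions"),
        ]
    if entity.affiliation == "friendly":
        return [ProposedAction("refuel", entity, "sustain friendly operations")]
    return []  # neutral/unknown entities are left for human review

# A commander's staff would see options like these, not orders:
contact = ActionableEntity("e-101", "unit", "enemy", "air")
options = propose_actions(contact)
```

The key design point the pattern captures is that the model’s output is a menu of candidate actions with rationales, leaving the decision itself to a human.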
The AI employed is also more autonomous than publicly available chatbots, which wait for a user to input a specific prompt before they output an answer. Instead, the Transformational Model uses AI “agents,” software that independently takes the initiative to perform specific tasks previously handled by humans.
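The chatbot-versus-agent distinction can be made concrete with a minimal sketch. This is an assumed contrast for illustration, not the actual software: a chatbot function runs only when prompted, while an agent loop drains an event stream on its own initiative and drafts reports for humans to review:

```python
import queue

# Illustrative contrast (assumed design, not the actual system): a chatbot
# waits for a prompt; an agent watches incoming events and acts unprompted.

def chatbot(prompt: str) -> str:
    """Reactive: produces output only when a human asks a question."""
    return f"answer to: {prompt}"

def agent_loop(events: "queue.Queue[str]", reports: list[str]) -> None:
    """Proactive: drains incoming events and drafts reports without being asked."""
    while not events.empty():
        event = events.get()
        # The agent decides on its own which events warrant a draft report.
        if "contact" in event:
            reports.append(f"draft report: investigate '{event}'")

incoming: "queue.Queue[str]" = queue.Queue()
for e in ["radar contact bearing 270", "routine telemetry", "new contact, unidentified"]:
    incoming.put(e)

drafts: list[str] = []
agent_loop(incoming, drafts)  # runs with no human prompt; humans review the drafts
```

The difference is where the initiative lives: the human pulls answers from a chatbot, while the agent pushes draft work products to the human.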
But the Air Force isn’t out to build SkyNet, either. The goal is an AI staff officer that generates reports, recommendations, and first drafts of plans, not an AI commander that issues orders to use lethal force. As one of the ABMS team’s officers, Col. Jonathan Zall, put it, the AI is “freeing up human operators to focus on making informed, high-level decisions that require moral judgement.”