‘No human hands’: NGA circulates AI-generated intel, director says
NGA puts a “template” on the products that “literally acknowledges … what you are looking at has not been touched by human hands,” said Director Frank Whitworth. “It’s important [for] combat commanders and the Secretary and the President that they have that knowledge.”


National Geospatial-Intelligence Agency Director Vice Adm. Frank Whitworth spoke at the DoDIIS Worldwide Conference, Dec. 13, 2022, at the Henry B. Gonzalez Convention Center in San Antonio, Texas.
WASHINGTON — The National Geospatial-Intelligence Agency is using artificial intelligence so routinely that it’s now created a standardized disclosure to go on AI-generated intel products, according to the agency’s director.
“We actually have now adopted a living, breathing template, a real piece of art, that goes around every [AI-generated intelligence] product, and it says these words, ‘machine-generated GEOINT,’” Vice Adm. Frank Whitworth, a career naval intelligence officer, told the third annual Ash Carter Exchange and AI+ Expo hosted by the nonpartisan Special Competitive Studies Project on Tuesday. (GEOINT refers to geospatial intelligence.)
“No human hands actually participate in that particular template and that particular dissemination,” he said. “That’s new, that’s new and different.”
Whitworth suggested that NGA was the first agency among the 18 official members of the US Intelligence Community to apply such a warning label on a routine basis — and that such AI-generated intelligence products are now being circulated at the highest levels of the US government.
“I think it’s significant that you now have an entity within the IC that is putting out a template that literally acknowledges, for the purposes of our readership, what you are looking at has not been touched by human hands, okay, that this is 100 percent machine-generated,” he said. “It’s important [for] those combat commanders and the secretary [of defense] and the president that they have that knowledge, so that they can assess and possibly ask additional questions, and that they know also the risk continuum that we’re all operating under.”
While an NGA spokesperson told Breaking Defense they were not able to make public an image of the template or the full text of the disclosures, Whitworth did offer some details. Significantly, NGA isn’t using a single generic stamp on all AI-generated products, but rather a system that tells the reader the type and level of AI involvement. (In everyday terms, it sounds less like an FDA warning label and more like the detailed nutritional information on the side of a package.)
“It actually has a matrix as to whether it was the dissemination that was machine generated or it was the exploitation of the image itself,” Whitworth said.
This detailed disclosure is itself machine-generated based on the specifics of the product, the NGA spokesperson told Breaking Defense. “They’re auto-generated based on included information,” the official explained. “This unique product annotation allows consumers to quickly understand the role machines play in intelligence products.”
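NGA has not released the template itself, so any rendering of it is guesswork. But the behavior Whitworth and the spokesperson describe, a matrix recording which stages of a product were machine-generated, auto-rendered into a banner the reader sees, could be sketched roughly as follows. Every field name, the banner format, and the model name here are hypothetical illustrations, not the agency's actual schema:

```python
# Hypothetical sketch of a "machine-generated GEOINT" disclosure matrix.
# NGA has not published its template; the fields and banner format below
# are illustrative assumptions based on the article's description.
from dataclasses import dataclass

@dataclass
class GeointDisclosure:
    exploitation_machine_generated: bool   # was the imagery analysis done by a model?
    dissemination_machine_generated: bool  # was the product assembled and released by a machine?
    model_name: str                        # which model produced the output (hypothetical field)

    def banner(self) -> str:
        """Render a human-readable disclosure line for the product header."""
        stages = []
        if self.exploitation_machine_generated:
            stages.append("exploitation")
        if self.dissemination_machine_generated:
            stages.append("dissemination")
        if not stages:
            return "HUMAN-GENERATED GEOINT"
        return f"MACHINE-GENERATED GEOINT ({', '.join(stages)}: {self.model_name})"

# Example: a product where both the image exploitation and the release
# were fully automated, i.e. "not touched by human hands."
print(GeointDisclosure(True, True, "example-model").banner())
```

The point of such a structure is the one Whitworth makes: the reader sees not just that a machine was involved, but at which stage, so that commanders can weigh the risk and ask follow-up questions.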
As striking as this development is, it is an evolutionary moment rather than a revolutionary one, with almost a decade of backstory. As the Global War on Terror ground on and surveillance technology grew ubiquitous, military intelligence began to drown in data: drone videos, satellite photos, recordings of intercepted communications, and other sources, far more than its human analysts could watch or listen to. And NGA, with its vast archives of geo-located data covering most of the surface of the planet, has more data than any other agency, Whitworth said.
“That’s a lot of data, more data than any other agency, certainly in the IC and possibly within DoD or even larger,” he said. “So we need some help.”
So the Pentagon pushed to harness then-novel machine-learning technologies to do a first pass and help the humans prioritize. In 2017, then-Deputy Secretary Bob Work created the Algorithmic Warfare Cross-Functional Team, whose Project Maven AI saw operational use by early the following year. Maven, in turn, was so successful that it gave rise to two different AI toolkits, both in high demand across the military: one for secure sharing of all kinds of military data, called Maven Smart System, which is run by the Pentagon’s Chief Digital and AI Office; and one specifically for AI analysis of imagery and video, which is run by NGA and known as NGA Maven.
Keeping up with demand for Maven has been a major challenge for NGA, Whitworth said last year. Meanwhile, the agency continues to add new capabilities to Maven and explore other AI tools, some provided by commercial vendors and others uniquely military.
It’s still a step further to allow AI to generate a final product that gets shared outside the agency.
“We’re not fearful about whether it replaces people’s jobs,” Whitworth said. “We’re willing to take the help that AI/ML provides.”
But the AI itself needs human help, he emphasized, not only to double-check its final output but to help train it for what to look for in the first place.
“Humans are going to be so important as coaches and mentors to these models,” Whitworth said. “I sign letters of appreciation for people, in some cases, who have served more than 40 years, who have, I’m just going to say, wisdom. They have a certain intuitive approach to what they do. … Who better than those people, with all that experience, to continually refine these models?”
What’s crucial, the admiral argued, is “humility” about the limitations of both humans and machines.
“I’ve been really thinking about the word humility in two ways,” Whitworth said. “One is the humility as humans to accept that it’s getting so big, that the data is getting so enormous, that we do need to accept some help when it comes to AI. And then there’s also a little bit of AI humility, where when you have a situation like targeting, like warning, like the safety of navigation, you may actually need the model to be humble enough to accept that a human needs to double check.”