Artificial intelligence (AI) is shaping the future of battle. Warfighters’ tools will include capabilities that enable quick, agile decisions and maneuvers with increased accuracy. Airmen, Guardians, Marines, Sailors and Soldiers will be equipped with communications networks and modeling and simulation platforms that rapidly aggregate and analyze data, allowing for real-time decisions. Autonomous systems will operate independently of external control over progressively longer distances and durations.
The United States Department of War (DOW) is working with Allies, Partners and industry to develop and scale AI technologies to counter pacing threats — including China and Russia — and ensure global security. As AI technology rapidly evolves, militaries race to deploy new capabilities while grappling with the potential risks these tools bring to the battlefield. While talk of using AI in weapons can conjure movie-like scenarios in which mutinous robotic soldiers turn against humankind, defense experts say the greater risk lies in not using AI-enabled technologies to strengthen defense and deterrence.
U.S. Secretary of War Pete Hegseth has pushed for innovation, lethality and readiness in the DOW’s approach to AI. The goal is to bolster combat effectiveness and operational efficiency.
AI is the simulation of human intelligence processes by machines, typically computer systems, which collect and classify information so the machine can perform specific tasks, such as writing a report or transcribing data. Machine learning is a branch of AI that enables computers to learn without being explicitly programmed for each task. These systems train themselves on data to find patterns or make predictions, growing more accurate over time as they collect and integrate more data. Examples include predictive text on a mobile phone or an online shopping site that makes suggestions based on a user’s past purchases.
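To make the pattern-finding idea concrete, the hypothetical sketch below builds a toy predictive-text model of the kind described above: it counts which word tends to follow each word in its training data, and its suggestions improve as it sees more text. It is an illustration only, not a system used by the DOW or any vendor named in this article.

```python
# Illustrative only: a toy "predictive text" learner that improves as it
# ingests more data, echoing the machine-learning concept described above.
from collections import Counter, defaultdict


class NextWordPredictor:
    """Learns which word most often follows each word in the training text."""

    def __init__(self):
        self.follows = defaultdict(Counter)  # word -> counts of following words

    def train(self, text: str) -> None:
        words = text.lower().split()
        for current, nxt in zip(words, words[1:]):
            self.follows[current][nxt] += 1  # record the observed pattern

    def predict(self, word: str):
        counts = self.follows.get(word.lower())
        if not counts:
            return None  # no training data for this word yet
        return counts.most_common(1)[0][0]  # most frequent follower


predictor = NextWordPredictor()
predictor.train("the ship sailed north the ship sailed south and the ship returned")
print(predictor.predict("ship"))  # -> "sailed"; more data would refine the suggestion
```

The same principle, at vastly larger scale and with far more sophisticated models, underlies the military applications described below.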

In addition to enabling quicker, more accurate decisions on the battlefield, AI tools can assist the military in other ways. Training platforms, autonomous vehicles, logistics tools, and intelligence and cyber defense capabilities all incorporate AI technology. Predictive AI tools applied to satellite imagery can help pinpoint locations where security threats may emerge, including the deployment of nuclear weapons and other weapons of mass destruction.
“Analysis is really great, but it’s mainly retroactive, a forensic capability of looking back in time,” William Marshall, chief executive officer of Planet Labs, a global satellite imagery company, told Defense One magazine in June 2024. Planet’s satellite imagery helped U.S. intelligence officials identify Russia’s buildup of troops, aircraft and weapons on Ukraine’s Crimean Peninsula before the 2022 invasion, Defense One reported. “In principle, generative AI models … can leverage satellite data to predict what is likely to happen: ‘You’re likely to have a drought here that might lead to civil unrest,’” Marshall said.
AI is being used to share data across the joint force and with Allies and Partners. The DOW launched Project Maven in 2017 to develop AI capabilities by leveraging technology to automatically identify targets on the battlefield. The department’s 2018 National Defense Strategy identified AI as one of the key technologies that will ensure the U.S. “will be able to fight and win the wars of the future.” Australia, the United Kingdom and the U.S. have incorporated AI algorithms on multiple systems to enhance data sharing and processing among their anti-submarine, reconnaissance, precision targeting and other capabilities. U.S. Strategic Command (USSTRATCOM) will incorporate AI into its nuclear command, control and communications enterprise.
“Advanced AI and robust data analytics capabilities provide decision advantage and improve our deterrence posture,” said USSTRATCOM Commander Gen. Anthony Cotton. “IT [information technology] and AI superiority allows for more effective integration of conventional and nuclear capabilities, strengthening deterrence.”
AI, however, will remain under human oversight, serving only to assist with data collection and analysis: gathering and integrating information more quickly, developing solutions faster and giving USSTRATCOM leaders more decision space.

“Everything we do has a human in the loop,” Vice Adm. Brad Cooper, deputy commander of U.S. Central Command, told military leaders, government officials, industry representatives and scholars at a March 2024 conference in Tampa, Florida, presented by the Global and National Security Institute at the University of South Florida. “At the end of the day, decisions are made by humans. It’s the decision-making process that is more vibrantly enabled through AI. We’re able to move at speeds that were previously unimaginable.”
Washington is working to ensure the U.S. stays ahead of competitors in developing AI capabilities. In 2017, China released a strategy detailing its plan to take the global lead in AI by 2030. Less than two months later, Russian President Vladimir Putin announced Russia’s intent to pursue AI technologies.
“I think the global competition has got to be first and foremost in our minds at all times,” U.S. Sen. John Hickenlooper said during a Center for Strategic and International Studies (CSIS) panel in November 2024. He noted that China is investing a “staggering” amount of resources into building the country’s AI capabilities. “But we have a system in this country of entrepreneurship and innovation tied together that, with a collaborative history between that … our institutions of higher learning, our private sector, our military, they’re all … accelerating the advances we make.”
In July 2025, the White House released “Winning the AI Race: America’s AI Action Plan,” in keeping with President Donald Trump’s January executive order to remove barriers to U.S. leadership in AI, including fostering economic competitiveness and bolstering national security. The plan identifies over 90 policy actions across three pillars: Accelerating Innovation, Building American AI Infrastructure, and Leading in International Diplomacy and Security. “Artificial intelligence is a revolutionary technology with the potential to transform the global economy and alter the balance of power in the world,” said David Sacks, the U.S. administration’s AI policy chief. “To remain the leading economic and military power, the United States must win the AI race. … To win the AI race, the U.S. must lead in innovation, infrastructure and global partnerships.”
The plan’s key policies include: partnering with industry to export U.S. technology to Allies and Partners; promoting rapid expansion of data centers and semiconductor manufacturing facilities; and streamlining regulations to speed AI development and deployment.
“Winning the AI race is nonnegotiable,” U.S. Secretary of State Marco Rubio said. “America must continue to be the dominant force in artificial intelligence to promote prosperity and protect our economic and national security.”
The energy challenge
Ensuring timely deployment and dependable availability of AI tools for defense and other sectors requires a massive buildup of energy infrastructure. The data centers needed to store vast amounts of digital information occupy large swaths of land and sharply increase electricity demand. In addition, the servers that run AI software require large volumes of water for cooling. The environmental impact is already measurable: Google and Microsoft reported in 2024 that their emissions had risen by double digits over the past several years, driven by new data center energy use to support AI workloads.
Using an AI tool for a basic text query can require 10 times more energy than a Google search, according to industry experts.
“One query to ChatGPT uses approximately as much electricity as could light one light bulb for about 20 minutes,” Jesse Dodge, a senior research analyst at the Allen Institute for AI in Seattle, told National Public Radio. “So, you can imagine with millions of people using something like that every day, that adds up to a really large amount of electricity.”
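A rough back-of-the-envelope calculation illustrates the scale. Assuming roughly 0.3 watt-hours for a conventional web search, about ten times that for a generative AI query, and a typical 10-watt LED bulb (all assumed, order-of-magnitude figures consistent with the estimates quoted above, not measured values), the sketch below works out the comparison:

```python
# Back-of-the-envelope estimate; the figures below are assumed, order-of-magnitude
# values based on the ratios quoted above, not measurements.
SEARCH_WH = 0.3                 # assumed energy per conventional web search (watt-hours)
AI_QUERY_WH = SEARCH_WH * 10    # ~10x a search, per the industry estimate cited above
LED_BULB_WATTS = 10             # a typical LED light bulb

minutes_of_light = AI_QUERY_WH / LED_BULB_WATTS * 60
print(f"One AI query ~ {AI_QUERY_WH:.1f} Wh, enough to light an LED bulb for "
      f"about {minutes_of_light:.0f} minutes")

daily_queries = 10_000_000      # hypothetical daily query volume
daily_mwh = daily_queries * AI_QUERY_WH / 1_000_000
print(f"{daily_queries:,} queries per day ~ {daily_mwh:.0f} MWh of electricity daily")
```

Even under these rough assumptions, millions of daily queries translate into tens of megawatt-hours of additional demand per day, which is why energy regulators and grid planners are paying close attention.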
Neil Chatterjee, former chairman of the Federal Energy Regulatory Commission, said energy demands for AI tools will need to be met with a mix of traditional and renewable energy sources to keep pace.
“We’ve got to get our demand projections accurate and figure out how much power we will need,” Chatterjee said. “I don’t think we have fully started to wrap our brains around what it will take to meet this coming surge in demand while maintaining not just affordability of power, but reliability as well. We’ve got to make sure that we’ve got a sufficient amount of power not just to support data centers for AI, to win the AI race, but to also make sure that residential consumers in the middle of a really hot summer or a really cold winter don’t have power curtailed.”
Demand for AI capabilities, as well as the energy infrastructure to enable them, is best met through partnerships between government, industry and Allies, said Chris Lehane, vice president of global policy for OpenAI, developer of ChatGPT. He noted that the U.S. has succeeded in developing massive infrastructure projects in the past, including the nationwide interstate system and early development of the internet.
“Today the U.S. is winning but that lead is not guaranteed,” he said during the CSIS panel. Keeping that lead will require progress in multiple areas, including construction permits, investment incentives and reinvigorating nuclear power. “This is really a time where we need to start to think big given what’s at stake. Again, there’s two nations in the world that can build this stuff at scale and it’s the U.S. and China.”
Establishing guardrails
Some leaders worry that emerging AI capabilities will increase risks to safety and stability. In a July 2023 report, United Nations University called for an AI governance framework that can adapt alongside the technology to mitigate risks related to bias, privacy and security. AI could be used to target groups or individuals based on faulty data gathered by commercial tools and accessed by governments.
“Data is the fundamental issue here. It causes a huge amount of concerns and vulnerabilities, and it’s not being accounted for when we talk about things like traceability broadly,” Meredith Whittaker, president of the Signal Foundation, a nonprofit organization that advocates for private, secure communication, told Defense One magazine. Whittaker is one of three authors of an October 2024 paper published by the AI Now Institute at New York University calling attention to potential risks of AI. Because AI relies on patterns gleaned from public and personal data to perform its functions, innocent civilians could erroneously be placed on a target list, according to the paper. The authors urged developers to insulate military AI systems and personal data from commercial models, and Whittaker called on governments to broaden privacy laws and strengthen protections to prevent bad actors from gaining access to commercial datasets that include civilians’ personal information.
In its AI security memorandum, the U.S. government directed DOW officials and the intelligence community to examine how existing policies and procedures affecting privacy and civil liberties can be revised to “enable the effective and responsible use of AI.” The memorandum also calls for other federal agencies to take active steps to uphold human rights, civil rights, civil liberties, privacy and safety.
Adm. Samuel Paparo, Commander of U.S. Indo-Pacific Command, has referred to AI as a force multiplier. It can enhance decision-making, optimize joint force operations and enable autonomous tactical systems, provided a human always oversees its use, he said at the Land Forces Pacific symposium in Hawaii in May 2025.