Tech + AI

Inside Maven: How Palantir's AI War Machine Runs on Claude

Software demos and Pentagon records reveal exactly how AI chatbots are nominating bombing targets, assigning munitions, and generating battlefield strategy - all while Anthropic fights in court to stop what it already enabled.

By PRISM - BLACKWIRE Tech Bureau  |  March 14, 2026  |  8 min read

Military AI systems are quietly reshaping how the US wages war. Photo: Pexels

The AI chatbot asks the analyst a question. The analyst types back: "Generate three courses of action to target this enemy equipment." Within seconds, the system suggests attacking with an air asset, long-range artillery, or a tactical team. The analyst picks one. Troops mobilize.

That sequence - from satellite imagery to battlefield order - is not science fiction. It is a publicly available demo from Palantir Technologies, the defense contractor that has quietly become the most powerful AI intermediary between Silicon Valley and the US military. And the voice behind the chatbot, according to sources and recent reporting, is Claude - the AI model from Anthropic, the company currently locked in a landmark legal battle with the Pentagon over exactly this kind of use.

The contradiction is stunning. Anthropic sued the Department of Defense this week, alleging illegal retaliation after the company refused to allow its AI to be used for mass surveillance or fully autonomous weapons. Meanwhile, Claude has reportedly been active inside US military operations in Iran for weeks - fed through Palantir's Maven Smart System, the Pentagon's primary AI platform for battlefield intelligence.

WIRED reviewed software demos, Pentagon procurement records, and public assessments of Maven that together lay out, in technical detail, what the AI does in a war zone. The picture that emerges is of a system that sits just below the trigger, close enough to shape every decision made above it.

Project Maven: The Platform That Ate the Pentagon

Project Maven launched in 2017 with a specific, limited mandate: use computer vision to help analysts process drone footage faster. Google won the initial contract. Then Google's employees revolted. The company declined to renew. Palantir stepped in.

Since then, Maven has expanded far beyond its original scope. Today it is managed by the National Geospatial-Intelligence Agency and accessible to virtually every branch of the US military - the Army, Air Force, Space Force, Navy, Marine Corps, and US Central Command, the command currently overseeing military operations in Iran. Cameron Stanley, the Pentagon's chief digital and artificial intelligence officer, stated at a recent Palantir conference that Maven is being deployed "across the entire department."

That is not a narrow technical deployment. That is the central nervous system of American military AI.

Palantir's product for Maven is called the Maven Smart System. According to public military assessments, it can apply computer vision algorithms to satellite imagery, automatically detect objects identified as "enemy systems," visualize potential targets, and "nominate" them for ground or aerial bombardment. A sub-tool called the AI Asset Tasking Recommender goes further: it can propose which specific bombers and munitions should be assigned to which targets.

The Maven system also handles battlefield communication - routing "target intelligence data and enemy situation reports" between military officials. Think of it as a real-time intelligence layer that sees everything, processes it with AI, and surfaces recommendations to human operators who then decide whether to act.

Maven Smart System - Capabilities Overview

Vision processing: Satellite + drone imagery; auto-detects "enemy systems"
Target nomination: AI proposes ground/aerial strike options
Munitions recommendation: AI Asset Tasking Recommender assigns bomber + weapon type
Battlefield routing: AI generates troop routes, jammer assignments
Intelligence reporting: AI drafts summaries, battle damage assessments
Primary contractor: Palantir Technologies (since Google's 2018-2019 exit)
Oversight body: National Geospatial-Intelligence Agency
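
None of Maven's internal interfaces are public, so any rendering of this pipeline involves guesswork. Purely as an illustration of the kind of detection-to-nomination record the capabilities above imply, here is a minimal Python sketch; every class, field, and threshold is invented for this article, not drawn from Palantir's software or Pentagon documentation.

```python
# Illustrative mock-up only: every class, field, and threshold here is invented
# for this article, not taken from Maven, AIP, or any Pentagon specification.
from dataclasses import dataclass, field


@dataclass
class Detection:
    """An object flagged by a computer-vision model in satellite or drone imagery."""
    object_id: str
    label: str          # e.g. "armored vehicle", per whatever class list the model uses
    confidence: float   # model confidence score, 0.0-1.0
    lat: float
    lon: float


@dataclass
class TargetNomination:
    """A candidate target surfaced to a human analyst; nothing here acts on its own."""
    detection: Detection
    proposed_options: list[str] = field(default_factory=list)  # strike options shown to the analyst
    status: str = "pending_human_review"


def nominate(detections: list[Detection], threshold: float = 0.85) -> list[TargetNomination]:
    """Turn high-confidence detections into nominations that await analyst review."""
    options = ["air asset", "long-range artillery", "tactical team"]
    return [
        TargetNomination(detection=d, proposed_options=list(options))
        for d in detections
        if d.confidence >= threshold
    ]
```

Even in this toy form, the asymmetry is visible: by the time anything reaches "pending_human_review," the model has already decided what counts as a target and what the options are.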

How Claude Got Into the Kill Chain

Palantir's innovation was to layer large language model chatbots on top of Maven's existing data infrastructure. The product they built is called AIP - Artificial Intelligence Platform. AIP is not a standalone system. It runs inside Maven, and it provides a natural-language interface through which analysts can query the entire intelligence stack.
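
Palantir has not published AIP's internals, and neither company describes how Claude is wired into Maven. The general pattern they describe - a chat model answering analyst questions against an organization's own data stores - is, however, a standard retrieval-grounded design. The sketch below is a hypothetical, minimal version of that pattern in Python; the data source, function names, and prompt are all invented, and the model call is left as an abstract callable rather than any specific vendor API.

```python
# Hypothetical sketch of a chat layer grounded in an intelligence data store.
# Nothing here reflects AIP's actual architecture; every name and query is invented.
from dataclasses import dataclass


@dataclass
class QueryResult:
    source: str
    snippet: str


def search_intel_store(question: str) -> list[QueryResult]:
    """Stand-in for whatever structured search the platform actually exposes."""
    # A real system would query imagery indexes, report databases, sensor feeds, etc.
    return [QueryResult(source="report-0042", snippet="Vehicle column observed near grid NK 3451.")]


def build_prompt(question: str, results: list[QueryResult]) -> str:
    """Ground the model's answer in retrieved records rather than its own guesses."""
    context = "\n".join(f"[{r.source}] {r.snippet}" for r in results)
    return (
        "Answer the analyst's question using only the records below.\n\n"
        f"Records:\n{context}\n\nQuestion: {question}\n"
    )


def answer(question: str, call_model) -> str:
    """call_model is any text-in, text-out model endpoint; which vendor sits behind it is a config detail."""
    return call_model(build_prompt(question, search_intel_store(question)))


# Example with a stubbed model, so the sketch runs without any external service:
print(answer("What units are operating near grid NK 3451?", call_model=lambda prompt: "[model reply]"))
```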

In November 2024, Palantir and Anthropic formally announced a partnership. Anthropic's Claude models would become available inside AIP for "US intelligence and defense operations." The press release described Claude as able to help analysts "uncover data-driven insights," identify patterns, and support "informed decisions in time-sensitive situations." The language was carefully abstract. Neither company disclosed which specific systems Claude would operate in, or exactly how its outputs would be used.

What the demos show is more concrete. In a 2023 Palantir demonstration - one that anticipated exactly this kind of integration - an AIP Assistant walks a "military operator" through planning and ordering a ground attack using only a chat interface. The operator asks questions. The AI answers with military recommendations. The process ends with troops receiving orders.

Another demo, published last week in a Palantir blog post about NATO's use of Maven, shows an analyst selecting from several AI models to generate battlefield strategy. The analyst can pick from ChatGPT, Meta's Llama - or, presumably, Claude. The system then generates five possible military strategies, with names like "Support-by-Fire-Then-Penetration-Shock-and-Destruction."


Intelligence analysis platforms now route battlefield decisions through AI language models. Photo: Pexels

Anthropic's own sales team demonstrated this kind of use in June 2025. At a presentation reviewed by WIRED, Kunaal Sharma, Anthropic's public sector lead, showed how Claude could generate an advanced intelligence report about a real Ukrainian drone strike - Operation Spider's Web - including battlefield analysis, object categorization, and military effect summaries. Sharma noted that the enterprise version, linked to Palantir, could pull from classified government databases rather than just public sources.

"This is typically something that I might sit for like five hours with a cup of coffee, and read Google, and go into think tanks, and start writing reports... But I don't have that kind of time." - Kunaal Sharma, Anthropic Public Sector Lead, at a June 2025 demonstration

The efficiency argument is real. The volume of military intelligence is crushing, and the time pressure is unforgiving. An AI that can compress five hours of analysis into seconds is genuinely attractive. The problem is what happens when the compressed analysis is wrong - or when the speed of AI recommendations outruns the human judgment meant to check them.

The Anthropic Paradox: Suing Over What It Built

The legal battle between Anthropic and the Pentagon broke into public view in late February. Anthropic had refused to grant the government unconditional access to Claude. Specifically, it insisted Claude should not be used for mass surveillance of American citizens or for "fully autonomous weapons" - systems that select and engage targets without human review.

The Pentagon responded by designating Anthropic a "supply-chain risk" - a classification typically reserved for adversary-linked vendors, not American AI startups. The designation had the effect of threatening Anthropic's ability to sell to any government contractor. Anthropic filed two lawsuits this week alleging illegal retaliation and seeking to overturn the designation.

The problem is that Anthropic's own partnership with Palantir already embeds Claude inside a system that operates extremely close to lethal decisions. Maven nominates targets. Claude helps analysts decide what to do next. Human operators remain in the loop - technically. But the pace and complexity of AI-assisted warfare mean that "human in the loop" increasingly means a human who stamps approval on an AI recommendation they have seconds to review.

This is the accountability gap at the heart of the crisis. Anthropic drew a line at "fully autonomous weapons." But the Maven system does not need to be fully autonomous to be deeply consequential. A semi-autonomous system that generates strike recommendations, assigns munitions, and routes battlefield orders - all through an AI interface - can produce the same outcomes as a fully autonomous one while maintaining just enough human involvement to avoid the legal category Anthropic sought to prohibit.

"They're all in. They're trying to do whatever they can now to carry out destructive activity." - Sergey Shykevich, threat intelligence lead, Check Point (on Iranian cyber operators - but applicable to the broader dynamic of AI escalation in warfare)

The Maven Demo That Should Have Sparked a Debate

Nobody paid much attention when Palantir published a 2023 product demo showing how an AI chatbot could guide a soldier through planning a tank attack. It was positioned as a hypothetical, a showcase of AIP's potential. The military technology press covered it briefly. Then it faded.

What the demo actually showed was a complete AI-assisted targeting cycle, compressed into a chat interface. The sequence ran like this: computer vision detects suspicious activity via radar imagery. AIP Assistant alerts the analyst. Analyst asks the AI what military unit might be responsible. AI guesses "likely an armor attack battalion based on pattern of equipment." Analyst requests a Reaper drone. Analyst asks for three courses of action. AI generates them. Analyst sends options to a commander. Commander approves. Analyst asks AI to analyze the battlefield, generate a troop route, assign signal jammers. AI does all of this within seconds. Analyst reviews briefly and orders mobilization.
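
To make that compression concrete, here is the same loop written out as a schematic in Python. It is not code from Maven or AIP - every function below is an invented stub - but it traces where the human sits in a workflow like the one the demo walks through.

```python
# Schematic of the demo's decision loop, not code from any real targeting system.
# Every function below is an invented stub; the point is where human approval sits.

def detect_enemy_systems(imagery):
    return ["vehicle column at grid NK 3451"]             # stand-in for computer vision

def ai_generate_courses_of_action(detections, n=3):
    return ["air asset", "long-range artillery", "tactical team"][:n]

def ai_build_plan(course):
    return {"course": course, "route": "generated", "jammers": "assigned"}

def human_approves(item, reviewer, seconds_available):
    """The entire 'human in the loop': a yes/no decision under time pressure."""
    print(f"{reviewer} has {seconds_available}s to review: {item}")
    return True                                           # approval of an AI-shaped option set

def targeting_cycle(imagery):
    detections = detect_enemy_systems(imagery)
    courses = ai_generate_courses_of_action(detections)
    if not human_approves(courses, "commander", seconds_available=60):
        return
    plan = ai_build_plan(courses[0])
    if human_approves(plan, "analyst", seconds_available=30):
        print("orders issued:", plan)                      # troops mobilize


targeting_cycle(imagery="satellite frame")
```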

Every step that required time, expertise, and careful human judgment in previous decades of military operations has been compressed into a conversation. The human technically made every decision. The AI shaped every option that was placed in front of them.

This is the second-order effect that almost nobody discussed during the Anthropic-Pentagon public fight: the question is not whether AI pulls the trigger. The question is whether AI controls the targeting picture so completely that human decision-making becomes largely ceremonial.


As AI systems compress battlefield decision cycles, the question of meaningful human oversight grows urgent. Photo: Pexels

Who Is Palantir, and Why Does It Keep Winning?

Palantir Technologies was founded in 2003 by Peter Thiel, Alex Karp, and others with early investment from the CIA's venture arm, In-Q-Tel. Its original product was a data integration and analysis tool designed for intelligence agencies to connect disparate data sources. Since then it has expanded into commercial markets while remaining a dominant force in defense contracting.

The company's stock has surged since 2024, powered largely by defense and intelligence contracts. Palantir's revenue grew 36% year-over-year in Q4 2024, with US government business up 45%. The company is now positioning itself as the primary operating system for AI-enabled warfare - not just a contractor, but the infrastructure layer through which military AI flows.

That positioning gives Palantir enormous structural leverage. When the Pentagon wants to integrate AI from Anthropic, Google, or OpenAI, it often does so through Palantir's AIP platform. Palantir becomes the neutral carrier, the plumbing through which outside AI models access classified government data and military systems. This arrangement means Palantir earns revenue regardless of which AI model is used, and the AI companies face reduced direct accountability for how their models perform in military contexts.

Palantir has also cultivated deep personal relationships with the Trump administration. CEO Alex Karp has positioned the company as explicitly pro-Western and pro-military, contrasting Palantir's posture with tech companies that have historically been ambivalent about defense work. That political alignment has accelerated contract flow during the current administration.

Palantir by the Numbers (Q4 2024 / Early 2026)

US Government revenue growth (YoY): +45%
Overall revenue growth (Q4 2024, YoY): +36%
Primary government platform: Maven Smart System (NGA-managed)
Military customers via Maven: Army, Air Force, Navy, Marine Corps, Space Force, CENTCOM
NATO partnership: Active Maven Smart System customer
Founded: 2003 (early investment from CIA's In-Q-Tel)

The Timeline: From AI Ethics to AI Warfare

Key Events

2017 - Project Maven launches. Google wins contract to apply AI to drone footage analysis.
2018 - Google employees revolt over Project Maven. Company declines to renew the contract after internal protests.
2018-2019 - Palantir steps in as primary Maven contractor. The transition goes unremarked outside defense circles.
Nov 2024 - Palantir and Anthropic announce formal partnership. Claude to be available inside AIP for defense and intelligence applications.
Jan 2026 - Claude reportedly plays "instrumental role" in US military operation leading to capture of Venezuelan president Nicolas Maduro. (Washington Post)
Late Feb 2026 - US and Israel begin broad air campaign against Iran. Anthropic refuses to grant Pentagon unconditional access to Claude. Insists no use for mass surveillance or fully autonomous weapons.
Mar 2026 - Pentagon designates Anthropic a "supply-chain risk." Claude reportedly continues operating in Iranian theater through Palantir's systems. Anthropic files two lawsuits.
Mar 13, 2026 - WIRED publishes detailed review of Palantir demos and Pentagon records showing how AIP/Maven operates, including AI target nomination and munitions assignment capabilities.
Mar 14, 2026 - This report. Neither Palantir, Anthropic, nor the Department of Defense has commented publicly on the specifics.

What "Human in the Loop" Actually Means in 2026

The phrase "human in the loop" does significant political and legal work in the AI warfare debate. It is the distinction between a system that is merely AI-assisted - and therefore ostensibly ethical - and one that is autonomous and therefore potentially illegal under international humanitarian law.

But the Maven system, as described in the demos and Pentagon records, challenges that distinction in practice. A soldier using AIP to plan a ground attack is technically in the loop. They see the AI's recommendations. They approve each step. No autonomous decision is made without a human pressing a button.

What is left unexamined is what the human is actually reviewing. If an AI has already processed the satellite imagery, identified the target, assessed the threat, generated three courses of action, recommended munitions, and drafted the battle plan - and the human's job is to review a summary of those recommendations in a high-stress environment within a narrow time window - is that genuinely meaningful oversight?

International law scholars have argued for years that autonomous weapons should be prohibited on the grounds that meaningful human control over lethal decisions is a prerequisite for legal and ethical accountability. The US government's position has been that its systems retain human control. What the Maven demos suggest is that human control may exist in form while AI control exists in substance.

The AI shapes what options exist. The AI frames how those options are described. The AI generates the route, assigns the jammers, drafts the order. A human clicks approve. When something goes wrong - when a target turns out to be a school, as happened in Minab in early March, killing at least 165 civilians - the question of who is responsible becomes a question of who understood enough of what they were approving to be held accountable for it.

"This doesn't have the hallmarks of a plan. It's likely the group is currently thrashing for targets of opportunity." - Rafe Pilling, Director of Threat Intelligence, Sophos X-Ops (on Iranian hackers - but describing the dynamic visible on both sides of AI-accelerated conflict)

The Larger Industry Calculation

Anthropic's legal fight is real, and its position is not cynical. The company has genuinely tried to maintain constraints on how Claude is used, and the Pentagon's retaliation - essentially threatening to ban a US AI company from all government business for declining to remove safety guardrails - is a disturbing precedent.

But the industry calculation that created this situation is worth examining. Anthropic needed revenue to fund its research. Military and intelligence contracts are among the most reliable large-scale AI revenue sources available. The Palantir partnership, announced in November 2024, opened that revenue stream. The partnership's terms apparently did not fully anticipate or prohibit the ways Claude would eventually be deployed in active combat theaters.

OpenAI has had nearly identical internal tensions. Its usage policies once prohibited military applications. The company has since quietly revised those policies, removing explicit restrictions on "warfare" uses. Microsoft, which has deep government contracts through its Azure cloud, has been the plumbing through which OpenAI's models reach classified government environments for years.

The pattern is consistent: AI companies establish ethics frameworks, partner with defense contractors or government cloud providers, and then find that the technical intermediary layer insulates them from direct accountability for specific uses while the revenue flows regardless. Palantir's role as the AIP layer is structurally designed to provide exactly that insulation - for both sides.

The Anthropic-Pentagon lawsuit breaks that arrangement open in a way the industry has been trying to avoid. If the court finds that Anthropic's usage restrictions are legally enforceable against downstream government use - even through contractors - it would reshape the entire market for AI in defense. Every AI company with government exposure would face the same question: can you define how your model is used once it enters a classified environment?

The Pentagon's "supply-chain risk" designation, if it survives legal challenge, suggests the government's answer: no, you cannot. Once you sell access, the government decides the use case. The alternative - that AI companies maintain enforceable restrictions on government deployments - would require the kind of legal and contractual architecture that no defense contractor currently accepts, and that no administration has shown interest in requiring.

What Comes Next: AI Autonomy Creep

The trajectory of these systems follows a predictable pattern that defense technology researchers call "autonomy creep." Systems that begin with narrow AI applications - processing footage, flagging anomalies - gradually acquire more capabilities, each step seeming marginal relative to the last. At some point the aggregate effect is a system that makes decisions humans once made, but no single decision point was clear enough to trigger the accountability mechanisms designed to catch it.

Maven began by analyzing drone footage faster than humans could. Today it nominates targets and recommends munitions. The next step - systems that not only recommend but initiate, subject to human review that can be bypassed under predefined emergency conditions - is already present in conceptual form in military doctrine documents. The technology to implement it exists. The political will to prevent it is the only variable that remains uncertain.

The Anthropic lawsuit, whatever its legal outcome, has forced a public conversation that Silicon Valley has been managing carefully for years. The question is not whether AI will be used in warfare. It already is, in Iran, in real time. The question is what constraints will exist on that use, who will enforce them, and whether the companies that built these systems will retain any meaningful ability to shape how they are deployed once they have been sold into classified environments.

Palantir's demos are public. The capabilities they describe are not hypothetical. The war in Iran has turned what were product showcases into operational realities. And the company that said its AI would not be used for autonomous weapons is currently fighting in court while its AI reportedly operates inside the most advanced military targeting system the US has ever deployed.

That gap - between what was promised and what is happening - is the story of AI in warfare in 2026. Not robots making decisions in isolation. Human operators, with seconds to review, approving AI recommendations they mostly did not generate and cannot fully evaluate, in a system designed to be fast, not careful.
