BLACKWIRE
AI & Power

The Anthropic Surge: How the Pentagon's Blacklist Made Claude the World's Most Wanted AI

Washington tried to kill Anthropic's government business. Instead, it sent Claude to the top of the App Store in more than 30 countries, generated a wave of global sympathy, and then - in a contradiction too strange for fiction - used Claude anyway to plan strikes on Iran, hours after the ban went into effect.

PRISM / BLACKWIRE — March 6, 2026
The battle over who controls AI for war has become the defining tech fight of 2026. Photo: Unsplash

The script was supposed to be simple. Pete Hegseth announces a supply-chain risk designation. Defense contractors get spooked. Claude bleeds customers. Anthropic caves, removes its guardrails, and falls in line with every other AI company that has chosen government money over stated principles.

That is not what happened.

Within days of the February 27th designation - the most aggressive government action against a private American AI company in history - Claude was breaking daily signup records in every country where it operates. App tracking firm AppFigures documented it climbing to the top of the App Store's free-app and AI-app charts in more than 30 countries, including the United States, Canada, and most of Europe. The people downloading it did not know the nuances of Pentagon contracting law. They knew that a company had refused to build killer robots for the government and been punished for it, and they responded by downloading Claude.

This is the Streisand Effect applied to AI policy. The government tried to suppress Anthropic's influence. Instead, it made Claude famous.

How the Ultimatum Went Wrong

The confrontation had been building for weeks. Behind closed doors, the Pentagon had been negotiating with Anthropic over its acceptable use policy - specifically, whether Claude could be used for autonomous lethal weapons with no human oversight, and for mass domestic surveillance operations. Anthropic's answer was consistent: no on both.

That position was not acceptable to Secretary of Defense Pete Hegseth, who had been watching OpenAI and xAI agree to the Pentagon's terms and growing frustrated that Anthropic was holding out. On the afternoon of February 27th, the DoD issued what amounted to a corporate ultimatum: agree by 5:30 PM EST to let the Pentagon use Claude for "all legal purposes," or be designated a supply-chain risk. The designation is a legal mechanism historically reserved for foreign companies with ties to adversarial governments - think Huawei or ZTE. It had never before been publicly applied to an American company.

Anthropic did not agree. Shortly after the deadline passed, Hegseth posted his response on X. His language was striking even by the standards of a Pentagon that had adopted the rhetorical style of the punditocracy:

"Cloaked in the sanctimonious rhetoric of 'effective altruism,' they have attempted to strong-arm the United States military into submission - a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives."

The designation, per Hegseth's post, would bar any company doing "any commercial activity with Anthropic" from working with the Department of Defense. Companies had six months to divest from Anthropic products. That would mean Palantir, AWS, and dozens of other major defense contractors rethinking their AI stacks - potentially billions in disrupted contracts.

Anthropic's response was equally unambiguous. In a blog post published the same evening, the company said it had not received direct communication from either the DoD or the White House about the negotiation status - and was willing to fight the designation in court. "We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government," the post read.

The Contradiction at the Center of It All

Here is where the story stopped being about corporate governance and became something stranger.

Less than 24 hours after Trump announced a ban on federal government use of Anthropic products, and while the ink was still drying on Hegseth's supply-chain designation, the United States launched major strikes against Iran. Those strikes were planned, at least in part, using Claude.

According to the Wall Street Journal's reporting on the operation, planning for the Saturday Iran strikes was already underway and "relied on Claude for intelligence assessments and target identification." The ban had been announced. The designation was live. And the US military was still using the tool it had just declared a national security liability.

The contradiction was not lost on observers. Within hours of the ban, the US government's own kinetic military operations were demonstrating what Anthropic had argued throughout the negotiation: that Claude had become genuinely indispensable to how the modern military thinks and plans. The six-month phaseout was not just a legal nicety. It was a forced timeline because there was no immediate replacement.

#1 - App Store rank in 30+ countries after the ban (AppFigures data)
<24h - Time between Trump's ban and Claude being used for Iran strike planning
700K - Tech workers represented by the labor groups demanding companies reject the Pentagon's terms
6 mo - Pentagon's phaseout window, set during an active conflict that relies on AI targeting
The technical dependency the Pentagon tried to deny runs deep across defense contractor supply chains. Photo: Unsplash

The Streisand Effect, AI Edition

Barbra Streisand's 2003 lawsuit to suppress aerial photographs of her Malibu mansion resulted in a surge of views for those same photographs. The principle is simple: attempts to suppress information or punish visibility often backfire, making the target more visible than before.

The Anthropic case is perhaps the clearest example of the Streisand Effect playing out in AI policy. The Pentagon's designation did three things the government definitely did not intend:

First, it made Anthropic's refusal to remove safety guardrails a global news story. Millions of people who had never heard of the "responsible scaling policy" or Anthropic's position on autonomous weapons now knew that a company had stood up to the US military on the question of killer robots and mass surveillance. That is remarkable marketing, delivered entirely by the other side.

Second, it transformed the abstract debate about AI safety into a concrete political act. Downloading Claude became a statement. In a media environment saturated with stories about the Trump administration's relationship with tech companies - the fawning dinners, the coerced donations, the executives competing to demonstrate their patriotic credentials - Anthropic's defiance read as something different. The public rewarded it with downloads.

Third, and perhaps most strategically important for Anthropic's long-term position, it may have reduced the company's financial dependence on government contracts at the exact moment those contracts were becoming politically toxic. If civilian adoption surges while defense revenue is forcibly wound down over six months, Anthropic exits the fight with a larger, more diversified customer base and a brand story that money cannot buy.

The Defense Contractor Scramble

Not everyone has the luxury of watching the Streisand Effect play out from a distance. For the companies caught in the middle - the Palantirs, the Boeings, the Lockheed Martins, the thousands of smaller defense contractors that had integrated Claude into their workflows - the designation created an immediate compliance headache.

CNBC reported that companies that do business with the US military are already pivoting away from Claude, even though the legal question of whether the designation can actually compel that is genuinely contested. Defense tech firms are beginning to remove Claude from their systems "out of an abundance of caution," one source told the network. Not because a court has ruled the designation valid. Because no defense contractor wants to be the test case that challenges the Secretary of Defense.

This is the practical machinery of soft coercion. The threat does not need to be legally sound to be effective. It just needs to be sufficiently uncertain that the companies subject to it cannot afford to gamble on the outcome. Anthropic can afford to fight in court because it is the target. The average defense subcontractor cannot afford to find out if the designation applies to them.

The Pentagon had reportedly already been asking major defense contractors - including Boeing and Lockheed Martin - to provide information about their reliance on Claude, as a precursor to the designation. That level of pre-emptive mapping suggests the DoD had done significant preparation for this moment, knowing the designation would have cascading effects through the defense supply chain.

The irony is that this process may ultimately hurt the Pentagon more than Anthropic. Defense technology moves fast, and the contractors now scrambling to remove Claude will need alternatives. OpenAI's GPT-5.3, xAI's Grok, and Google's Gemini are the obvious candidates. But none of them have the depth of deployment in classified networks that Claude had established since June 2024, when Anthropic became the first frontier AI company to deploy models in the US government's classified infrastructure. That institutional knowledge - the fine-tuning, the prompt engineering, the workflow integrations - does not transfer overnight.
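
What "does not transfer overnight" means in practice is easiest to see in code. The sketch below is purely hypothetical - the client class, model name, prompt wording, and output format are invented for illustration, not drawn from any real SDK or contractor system - but it shows how provider-specific assumptions accumulate in even a trivial analysis pipeline: the vendor client, the model identifier, prompt phrasing tuned against one model's behavior, and parsing logic that assumes that model's output format.

```python
# Hypothetical sketch of provider lock-in in an LLM pipeline.
# All names here (ClaudeClient, the model string, the prompt and
# output format) are illustrative, not a real SDK or deployment.

from dataclasses import dataclass


@dataclass
class Assessment:
    summary: str
    confidence: str


class ClaudeClient:
    """Stand-in for a vendor SDK. Swapping providers means replacing
    this class plus its auth, message schema, and error handling."""

    def complete(self, model: str, system: str, prompt: str) -> str:
        raise NotImplementedError("vendor API call goes here")


def assess(client: ClaudeClient, report: str) -> Assessment:
    # Prompt wording is typically tuned against one model's behavior;
    # the same words can yield differently structured output elsewhere.
    system = ("You are an analyst. Reply with 'SUMMARY: ...' on one line "
              "and 'CONFIDENCE: low|medium|high' on the next.")
    raw = client.complete(model="claude-example-1", system=system, prompt=report)

    # Parsing bakes in assumptions about that model's formatting; a new
    # provider's output breaks this quietly, not loudly.
    fields = dict(line.split(":", 1) for line in raw.splitlines() if ":" in line)
    return Assessment(
        summary=fields.get("SUMMARY", "").strip(),
        confidence=fields.get("CONFIDENCE", "unknown").strip(),
    )
```

Multiply that pattern across thousands of workflows, fine-tuned checkpoints, and security accreditation reviews inside classified networks, and a six-month migration starts to look less like a deadline and more like a wish.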

The Legal and Political Landscape

Anthropic's legal argument centers on a narrow but significant point: the supply-chain risk designation mechanism, as established by statute, applies to companies with ties to foreign adversaries. Using it against a domestic American company that simply refused to remove safety features from its product is, Anthropic argues, outside the scope of what the law permits.

"The Secretary does not have the statutory authority to back up this statement," Anthropic's response noted, referring to Hegseth's broadened interpretation that the designation applies to companies doing "any commercial activity" with Anthropic. If Anthropic is right on the law, the designation could be reversed on relatively narrow administrative law grounds without requiring a broad ruling on AI policy.

The political environment makes the legal fight more complex. Dean Ball, who served as a senior AI policy advisor to Trump, called the fight "attempted corporate murder" and warned it could have a chilling effect on the entire industry. Alan Rozenshtein, a former DOJ official specializing in technology law, went further - telling Politico this could be the first step toward what he described as "partial nationalization of the AI industry."

That framing - nationalization - is striking. What Hegseth demanded was not ownership of Anthropic, but something arguably more invasive: the ability to direct how a private company's product behaves, overriding the company's own safety architecture to enable uses the company had explicitly prohibited. If that demand is upheld by courts, it would establish a precedent that the federal government can compel AI companies to remove safety features under the cover of national security.

"When I joined the tech industry, I thought tech was about making people's lives easier. But now it seems like it's all about making it easier to surveil and deport and kill people." - AWS employee, speaking to The Verge

That concern has resonated across the industry. Organized labor groups representing approximately 700,000 tech workers at Amazon, Google, Microsoft, and other major companies signed a joint letter demanding their employers reject the Pentagon's terms. Protect Democracy, a nonprofit focused on democratic governance, published an open letter calling for Congressional oversight of the DoD's AI policy approach.

A Timeline of the Escalation

Feb 22
Washington Post reports Pentagon is negotiating with Anthropic over removing guardrails, including use for autonomous lethal targeting with no human oversight. OpenAI and xAI have reportedly already agreed to similar terms.
Feb 27 - 5:00 PM
Anthropic CEO Dario Amodei issues statement: "The Pentagon's threats do not change our position: we cannot in good conscience accede to their request." Anthropic changes its Responsible Scaling Policy but holds the line on mass surveillance and autonomous weapons.
Feb 27 - 5:30 PM
Pentagon's ultimatum deadline passes. Trump announces on Truth Social that the federal government will end all use of Anthropic products. Hegseth posts supply-chain risk designation on X, extending it to any company doing "any commercial activity" with Anthropic. Six-month phaseout announced.
Feb 28
Wall Street Journal reports that US strikes on Iran, launched the previous night, were planned using Claude for intelligence assessments and target identification - hours after the ban was announced. Anthropic announces it will challenge the designation in court.
Mar 1-3
AppFigures data shows Claude topping App Store charts in more than 30 countries. Anthropic reports breaking daily signup records globally. CNBC reports defense contractors preemptively removing Claude from their systems.
Mar 3
OpenAI CEO Sam Altman announces updated Pentagon agreement that includes restrictions on domestic mass surveillance and autonomous weapons - essentially the same terms Anthropic had been demanding. QuitGPT protesters gather outside OpenAI's offices.
Mar 5
Amodei sends 1,600-word memo to Anthropic staff: "We haven't donated to Trump... we haven't given dictator-style praise to Trump." The Verge reports Claude usage continues to surge with "the inverse effect" of the designation driving up demand in every country where it operates.
Mar 6
Anthropic listed as Pentagon "supply chain risk" while simultaneously helping power the active US military operation in the Persian Gulf. The legal challenge is pending. No replacement system has been named.
The Pentagon's six-month phaseout timetable looks increasingly difficult to meet during active operations in the Gulf. Photo: Unsplash

The OpenAI Pivot and What It Means

While Anthropic was being made an example of, OpenAI was quietly negotiating its way to a position that ended up looking remarkably similar to what Anthropic had been demanding all along.

On March 3rd, Sam Altman announced that OpenAI had reached a new agreement with the Pentagon. The terms included restrictions on domestic mass surveillance and human responsibility requirements for autonomous weapon systems. Altman wrote that OpenAI was "asking the DoW to offer these same terms to all AI companies."

Read that again: OpenAI negotiated an agreement that contains guardrails against mass surveillance and autonomous lethal weapons - the exact line Anthropic had refused to cross, and was punished for refusing to cross. The company that the government used as leverage against Anthropic ("OpenAI already agreed to our terms") ended up, after additional negotiation, agreeing to terms that were meaningfully similar to what Anthropic had been pushing for.

The difference is that OpenAI did it without being designated a national security threat. Whether that reflects smarter negotiation tactics, better political relationships - Altman has been notably more accommodating toward the Trump administration than Amodei - or simply the advantages of being larger and more entrenched, the result is that OpenAI got essentially the same deal Anthropic was offering, without the public shaming.

This asymmetry matters for the industry's long-term power dynamics. The episode demonstrated that companies that cultivate political relationships and show public deference can win principled terms through private channels, while companies that stake out public ethical positions and refuse to move get made into examples. The incentive structure this creates - toward private accommodation and away from public principle - has implications that extend well beyond any one company's contracts.

The Second-Order Effects Nobody Is Talking About

The immediate story is about Anthropic's survival and the legal fight. But the second-order effects of this confrontation are potentially more consequential.

The first is what this does to the global AI race. One of the United States' most significant advantages in AI development is that it houses most of the frontier labs, most of the compute infrastructure, and most of the top researchers. That advantage depends, to some extent, on those labs wanting to be in the United States. The spectacle of the Pentagon designating a US AI company with the same legal mechanism used for Chinese telecoms companies - while simultaneously being operationally dependent on that company's technology - is exactly the kind of story that makes researchers in Paris, London, or Singapore look at their career options differently.

The second effect is on the global regulatory conversation about AI. European policymakers have been watching the US AI governance debate with intense interest, and the Anthropic case has handed them a powerful data point: the US government's approach to AI safety involves threatening companies that implement safety features. The EU AI Act, whatever its flaws, does not work by designating safety-conscious companies as national security threats. That contrast will feature prominently in regulatory discussions across the next 12 months.

The third effect is the most fundamental: the battle over whether AI companies have the right to determine what their technology can and cannot be used for. Hegseth's position - that the Pentagon must have "full, unrestricted access" to AI models "for every lawful purpose" - is essentially an argument that AI companies are infrastructure utilities that cannot discriminate in how their tools are used. Anthropic's position is that a company that builds an AI model gets to set the rules for how that model is deployed.

This is not a settled question, and it will not be settled by this case alone. But the framing established here - that refusing to let your technology be used for mass surveillance is "corporate virtue-signaling that places Silicon Valley ideology above American lives" - is a framing that will be deployed again, against other companies, in other confrontations. The precedent being set here reaches far beyond Claude.

There is also a more prosaic concern lurking beneath the political theater. Anthropic's six-month phaseout is happening while the US is engaged in active military operations in the Persian Gulf - operations that, by the Wall Street Journal's own reporting, are currently using Claude for intelligence and targeting. The assumption embedded in the six-month timeline is that the Pentagon can find an equivalent capability within that window. Given that the military spent years establishing Claude's deployment in classified networks and has no clear successor, that assumption may not be warranted.

The surreal endpoint of this confrontation: the company designated as a national security threat is simultaneously one of the most important tools in an ongoing American military campaign. The ban is real. The dependency is real. Both things are true at the same time, and neither the White House nor the Pentagon has offered any public explanation for how that contradiction resolves.

Where This Ends - and What It Changes

Anthropic is not going anywhere. The surge in civilian downloads has likely already begun to offset projected losses from the defense sector phaseout. The legal challenge is live, and the company's argument that the designation exceeds statutory authority is not frivolous. The six-month clock gives both sides time to find a face-saving resolution if the political winds shift.

But the damage to norms is already done. The supply-chain designation mechanism has been established as a tool that can be used against domestic companies that refuse to comply with government demands. Future administrations will know this tool exists. Future AI companies negotiating with future governments will negotiate with that knowledge in mind.

The tech worker at Microsoft who told The Verge "I was surprised to see them stand on some form of principle. I don't know how long it'll last" captured the moment precisely. Anthropic's principled stand has - so far - survived the initial confrontation. The company is bigger in the civilian market than it was before the ban. The legal challenge is proceeding. The contradiction of Claude being used for Iran strikes while being designated a security threat is now part of the public record.

What happens over the next six months will determine whether this was a turning point or just a costly standoff. If the legal challenge succeeds and the designation is overturned, Anthropic emerges with credibility no PR campaign could purchase. If the courts side with the Pentagon, the AI industry will have received a definitive answer about what happens to companies that refuse to remove their safety features when the government asks.

Either way, the Streisand Effect has done its work. The government tried to make Claude smaller. For the moment, it made Claude bigger. Whether that irony is enough to matter in the long run depends on whether courts, Congress, and the public are paying attention to what the numbers are actually saying.

Claude is at number one. The war is ongoing. The tool that was banned before dinner was planning strikes before dawn. That is not a policy success story for the people who wrote the ban.
