BREAKING

Trump Orders Every Federal Agency to Immediately Cease Using Anthropic

LIVE · Feb 27, 2026 · 10 min read · Nixus Breaking

Anthropic refused to remove two AI safeguards: its prohibitions on mass domestic surveillance and fully autonomous weapons. The President's response: ban the company from the entire United States government. The $200 million Pentagon contract, a potential $500 billion valuation, and the future of AI safety regulation now hang in the balance.

BANNED: from all federal agencies
$200M: Pentagon contract at stake
$500B: potential 2026 valuation
6 MO: phase-out period

What Just Happened

On Friday afternoon, February 27, 2026, President Donald Trump posted on Truth Social directing every federal agency in the United States to immediately cease all use of Anthropic's technology. The order came roughly one hour before a Pentagon-imposed 5:01 PM deadline for Anthropic to accept unrestricted military use of its AI model Claude.

Anthropic did not comply. Instead, CEO Dario Amodei published a detailed statement on the company's website explaining exactly why - and drawing two specific red lines that Anthropic refuses to cross.

"THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military."

- President Donald J. Trump, Truth Social, Feb 27 2026

Anthropic's Two Red Lines

Dario Amodei's statement was remarkably specific. Anthropic is not refusing to work with the military - they are already deeply embedded. Claude is deployed across the Department of War for intelligence analysis, operational planning, cyber operations, and more. They were the first AI company on classified networks, the first at the National Laboratories, and the first to provide custom national security models.

But two use cases are non-negotiable:

1. Mass Domestic Surveillance. Anthropic supports lawful foreign intelligence. But using AI for mass surveillance of American citizens is, in their words, "incompatible with democratic values." They cite the government's existing ability to purchase Americans' movement data, browsing history, and associations without a warrant - and argue that AI makes this exponentially more dangerous at scale.

2. Fully Autonomous Weapons. Anthropic supports partially autonomous weapons (like those used in Ukraine). But fully autonomous systems that select and engage targets without human oversight? Their position: "Today, frontier AI systems are simply not reliable enough." They offered to collaborate with the Pentagon on R&D to improve reliability. The Pentagon declined.

The Escalation Timeline

Jan 12
Pentagon publishes a new AI Strategy stating it will contract only with AI companies that agree to "any lawful use" and remove safeguards.
Feb 24
Defense Secretary Pete Hegseth meets Dario Amodei. Meeting described as "cordial." Hegseth delivers ultimatum: accept unrestricted terms or face consequences.
Feb 25
Pentagon sends "final offer" overnight. Anthropic reviews contract language, finds it unacceptable. Two threats attached: 1) "Supply chain risk" designation (never before applied to a US company). 2) Invocation of the Defense Production Act to force safeguard removal.
Feb 26
Anthropic publicly rejects the terms. Amodei publishes detailed statement. Axios reports the 5:01 PM Friday deadline. Politico calls the threats "inherently contradictory" - one labels Anthropic a security risk, the other labels Claude as essential to national security.
Feb 27
Trump posts on Truth Social ~1 hour before deadline. Orders all federal agencies to immediately cease Anthropic use. Calls them "Radical Left, WOKE" and "out-of-control." Threatens "major civil and criminal consequences." Six-month phase-out for agencies like the Department of War.

What Anthropic Gave Up to Get Here

This is not a company that avoided government work. Anthropic's national security résumé is extensive.

Anthropic did everything the national security establishment asked - except two things. And those two things ended the relationship.

The Pentagon's Contradictory Threats

The Department of War threatened Anthropic with two responses that, as multiple outlets noted, cancel each other out:

Threat 1: "Supply chain risk" designation. This label is typically reserved for companies from adversary nations - think Huawei, Kaspersky. It would force every DoD vendor and contractor to certify they don't use Anthropic's models. It has never been applied to an American company.

Threat 2: Defense Production Act invocation. The DPA would compel Anthropic to provide its technology to the military. This only works if Claude is essential to national security - which directly contradicts labeling Anthropic a supply chain risk.

Politico's assessment: "Incoherent. Hegseth's Anthropic ultimatum confounds AI policymakers." You can't simultaneously argue a company is a national security threat AND that its product is so critical you need to force them to provide it.

The Money

Anthropic's current Pentagon contract is worth approximately $200 million. That is significant but not existential for a company that Polymarket traders give a 90% chance of reaching a $500 billion valuation in 2026, amid persistent IPO speculation.

However, the "supply chain risk" designation could be devastating. It wouldn't just end Anthropic's direct government contracts - it would force every defense contractor, every government vendor, every company that does business with the federal government to stop using Claude. That's a cascading exclusion that could cost billions in enterprise revenue.

For context: Anthropic cut off Chinese firms worth "several hundred million" in revenue voluntarily. The government ban could dwarf that figure.

Who Benefits

With Anthropic out, the Pentagon's AI needs don't disappear. The contracts will shift to competitors.

The Deeper Question

This isn't really about Anthropic. It's about whether AI companies get to have any say in how their technology is used by the government. Dario Amodei framed it as a narrow, principled stand on two specific issues. Trump framed it as a private company trying to dictate military strategy.

Both framings are partially true. And the resolution of this conflict will set the precedent for every AI company that follows.

If Anthropic holds its position and survives, it proves that AI companies can maintain safety red lines even under presidential pressure. If the ban devastates their business, every other AI company will learn the lesson: compliance is the only option.

The Bottom Line

Anthropic built the most capable AI model currently deployed in US classified networks. They cut off Chinese revenue, shut down CCP cyberattacks, and deployed to the Pentagon before any competitor. They drew two red lines: no mass surveillance of Americans, no autonomous weapons without human oversight.

The President of the United States just told them that's not good enough.

The six-month phase-out period means this isn't over today. It's the opening move. What happens next - whether Anthropic bends, whether Congress intervenes, whether the courts get involved, whether the "supply chain risk" designation actually gets applied - will define the relationship between AI companies and government power for the next decade.

Claude is still running. For now.

Sources: Anthropic official statement (anthropic.com/news/statement-department-of-war), President Trump's Truth Social post (Feb 27, 2026), Reuters, NPR, CNBC, Politico, CNN, New York Times, Axios, Benzinga. All quotes verified against original sources.