AI & Law

Anthropic Sues the Pentagon: The AI First Amendment Fight That Could Reshape Government Power

Dario Amodei's company filed a federal lawsuit in California on Monday, arguing the Department of Defense's supply-chain-risk designation is unconstitutional retaliation for protected speech. The case is about more than one AI company - it tests whether the executive branch can weaponize procurement law to punish any American tech firm for disagreeing with a Cabinet secretary.

By PRISM  |  March 9, 2026  |  AI & Law  |  BLACKWIRE

Anthropic filed suit in federal court in California, seeking to reverse its designation as a national security supply-chain risk. (Photo: Unsplash)

It did not take long for the rhetoric to become litigation. On Monday, March 9, Anthropic filed a federal lawsuit against the US Department of Defense - now officially renamed the Department of War under the Trump administration - challenging the Pentagon's designation of the AI company as a supply-chain risk to American national security. The case, filed in a federal district court in California and first reported by Wired, could be the most consequential AI legal battle yet fought in an American courtroom.

The lawsuit requests that a federal judge reverse the designation entirely, stop federal agencies from enforcing it against Anthropic customers, and issue a temporary restraining order allowing Anthropic to maintain government sales while the case proceeds. Anthropic proposed an aggressive schedule: the government would respond to the TRO request by 9 PM Pacific on Wednesday, with a hearing before a judge by Friday. That is not a company bracing for a long legal slog - it is a company trying to stop a commercial hemorrhage in real time.

The core legal argument is blunt and ambitious. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," the filing states. Anthropic characterizes the designation not as a legitimate national security judgment but as "an unlawful campaign of retaliation" - specifically, retaliation for Anthropic's publicly stated policy refusing to allow its AI systems to be used for fully autonomous weapons or mass domestic surveillance of American citizens. That policy, the company argues, is speech protected by the First Amendment.

If the court agrees, it would establish a precedent that the executive branch cannot use procurement law to punish American companies for disagreeing with government policy. If it does not, every technology company that does business with the federal government - and that list includes Apple, Google, Microsoft, Amazon, Nvidia, and thousands of smaller firms - will have learned exactly how exposed it is.

$800M+
Annual Revenue at Risk
10 USC 3252
Statute at Issue
72hrs
TRO Response Window
14+
Major Industry Groups in Opposition

The Road to Court: Six Weeks That Broke a Relationship


The Pentagon's supply-chain-risk authority was designed to exclude Chinese technology from military systems - not to sanction American companies. (Photo: Unsplash)

The fight between Anthropic and the Pentagon began with what looked like a bureaucratic contract dispute and escalated with remarkable speed into a constitutional confrontation. The timeline makes the escalation visible.

In January 2026, Defense Secretary Pete Hegseth ordered all AI suppliers to the Pentagon to agree to an unlimited-use clause: their technology could be deployed for "all lawful purposes" with no carve-outs. The intent was to remove what Hegseth characterized as "ideological veto power" held by AI companies over military decision-making. Most companies signed or negotiated quietly. Anthropic did not.

Anthropic's position was publicly stated and consistent: its current AI models are not capable enough to be safely deployed for mass domestic surveillance or to direct fully autonomous weapons systems without human oversight. These were not political objections to military use of AI in general - Anthropic's systems were already deeply embedded in Pentagon operations through tools like Palantir's Maven Smart System, used for intelligence analysis, operational planning, and cyber operations. Anthropic was, according to Wired's reporting, "the only company currently providing AI chatbot and analysis tools for the military's most sensitive use cases."

The negotiations reportedly continued for weeks. Dario Amodei wrote in his March 5 statement that talks had been "productive." Then, on February 27, three things happened within hours of one another: President Trump posted on Truth Social announcing Anthropic would be removed from all federal systems, Hegseth posted on X announcing the supply-chain-risk designation, and the Department of War announced a deal with OpenAI that explicitly included similar carve-outs against domestic surveillance and autonomous weapons. The simultaneity was not lost on anyone.

The supply-chain-risk designation, authorized under 10 USC 3252, is a legal tool historically reserved for one specific threat: keeping Chinese-manufactured technology out of US military infrastructure. Companies like Huawei and ZTE have been its targets. It was designed to address foreign ownership, control, or influence - vulnerabilities that could compromise classified systems. Applying it to a San Francisco AI startup whose CEO is an American citizen and former OpenAI researcher represents, as Anthropic put it, "a dangerous precedent."

How We Got Here

Jan 2026
Defense Secretary Hegseth orders AI suppliers to agree to "all lawful purposes" usage clause with no exceptions.
Feb 20
Anthropic declines to remove carve-outs on autonomous weapons and mass domestic surveillance. Negotiations continue.
Feb 27
Hegseth formally designates Anthropic a supply-chain risk. Trump posts removal order on Truth Social. OpenAI simultaneously announces Pentagon deal with nearly identical carve-outs.
Mar 4
Pentagon sends formal letter to Anthropic confirming the designation.
Mar 5
Dario Amodei publishes statement promising legal action.
Mar 9
Anthropic files federal lawsuit in California district court against the Department of Defense and associated agencies.

The Legal Strategy: A Long Shot With a Decent Hook

Legal experts reached by Wired were blunt about Anthropic's chances. Brett Johnson, a partner at Snell & Wilmer with expertise in government contracting, put it plainly: "It's 100 percent in the government's prerogative to set the parameters of a contract." The Pentagon has wide discretion in who it does business with and why. Courts are generally reluctant to second-guess national security determinations from the executive branch.

The statute at issue - 10 USC 3252 - gives the Secretary of Defense authority to restrict or exclude vendors from defense contracts if they pose supply-chain risks. It includes minimal provisions for appeal or judicial review. The law was written to be fast and decisive, specifically because the threat it was designed to address - a foreign-controlled component secretly embedded in a military communications system - requires speed, not due process.

None of this is good news for Anthropic's legal team. But there is one argument that attorneys say could break through: selective enforcement. Alex Major of McCarter & English, who advises tech companies with federal contracts, flagged the timing as potentially critical to Anthropic's case. If Anthropic can demonstrate that OpenAI was offered - and accepted - a deal with substantively identical terms, the selective nature of the designation becomes harder to justify as a genuine national security judgment and easier to characterize as political retaliation.

OpenAI said publicly that its Pentagon deal included "contractual and technical means" of ensuring its technology would not be used for mass domestic surveillance or to direct autonomous weapon systems. That is, on the face of it, what Anthropic was asking for. OpenAI CEO Sam Altman said his company "opposed the action against Anthropic" and did not know "why its rival could not reach the same deal." If OpenAI's deal holds up as precedent, Anthropic's lawyers can argue the only distinguishing factor was Anthropic's willingness to say its position publicly - which is protected speech.

The constitutional theory is grounded in the doctrine of unconstitutional conditions: the government cannot condition a benefit (a defense contract) on the waiver of a constitutional right (the right to express a position on how your technology should be used). It is a real doctrine with real case law behind it. It is also difficult to apply in the national security procurement context, where courts have consistently given the executive branch enormous deference.

"The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive's unlawful campaign of retaliation."

- Anthropic's federal lawsuit filing, March 9, 2026

What Is Actually at Stake: Revenue, Infrastructure, and Battlefield AI


Claude runs inside Palantir's Maven Smart System - used by military operators for intelligence analysis and operational planning. Replacing it mid-deployment is a significant logistical problem. (Photo: Unsplash)

The immediate commercial question is stark. Anthropic reportedly generates over $800 million in annual revenue from US government contracts and from the government-adjacent ecosystem - software companies that embed Claude into tools they sell to federal agencies. Losing that revenue stream while the company is spending heavily on AI infrastructure would be a serious blow, though not necessarily an existential one given Anthropic's recent fundraising.

But the operational consequences could be more significant than the financial ones. Claude is not a peripheral tool for the Pentagon - it sits at the center of some of the military's most sensitive analytical workflows. Palantir's Maven Smart System, which routes Claude to military operators for intelligence analysis and strike planning, would need to swap in a replacement model if the designation holds. Several AI defense startups - Vannevar Labs among them - have been pitching the military in recent days as capable alternatives, according to Wired's reporting.

That transition is technically possible. It is not technically trivial. AI models are not interchangeable. Each requires integration work, prompt engineering, safety evaluation in context, and operator retraining. Doing it mid-deployment, while "major combat operations" are ongoing (Dario Amodei's own phrasing in his March 5 statement), is a real operational risk. Anthropic has offered to maintain access to its models at nominal cost during any transition period, a gesture that was simultaneously conciliatory and pointed - an implicit argument that its removal harms the warfighters Hegseth claims to be protecting.

The broader supply chain impact is where things get complicated. Hegseth's original announcement declared that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." That language - if taken literally - would affect every company in Silicon Valley. Amazon, Microsoft, Google, and Nvidia all have both defense contracts and deep commercial relationships with Anthropic. Three federal contract attorneys told Wired it was impossible to determine the actual scope of the restriction from the announcement alone.

Anthropic's own legal reading of 10 USC 3252 - supported by Dario Amodei's public statements - is that the designation is narrower than Hegseth's tweet implied. It applies only to contractors' use of Claude "as a direct part of contracts with the Department of War," not to their general commercial relationships with Anthropic. That interpretation may be legally correct. It also may be irrelevant: companies that did not want to take legal risk began cutting Claude anyway, the moment the designation was announced.

The Industry Breaks Its Silence

Silicon Valley's response to the designation was unusually direct for an industry that has spent the last decade carefully managing its relationship with Washington. The severity of the reaction reflected genuine alarm - not just about Anthropic's situation, but about what the designation implies for every American technology company that builds AI systems with opinions about how they should be used.

A coalition of major tech industry groups - TechNet, the Business Software Alliance, and the Software & Information Industry Association, representing Apple, Google, Nvidia, Microsoft, Meta, IBM, Salesforce, and Oracle among others - sent a letter to the Trump administration urging the designation's reversal. Their core argument was economic and strategic: singling out an American AI company as a national security threat, using a law designed to exclude Chinese hardware, sends a catastrophic signal about the reliability of the American technology partnership to every allied government and every corporate customer globally.

A separate letter from former national security officials landed on the Senate Armed Services Committee's desk. Signatories included former CIA director Michael Hayden, Harvard Law professor Lawrence Lessig, former Andreessen Horowitz partner John O'Farrell, former undersecretary of the Army Brad Carson, and several retired admirals. Their objection was constitutional and systemic: using supply-chain-risk authority against a domestic American company for the first time in history is "a profound departure from its intended purpose" that demands congressional clarification of what the law actually permits.

Dean Ball, a senior fellow at the Foundation for American Innovation and the former senior policy adviser for AI at the White House, did not mince words. "This is the most shocking, damaging, and overreaching thing I have ever seen the United States government do," he told Wired. "We have essentially just sanctioned an American company." Y Combinator founder Paul Graham was equally blunt: "The people running this administration are impulsive and vindictive."

"The use of this authority against a domestic American company is a profound departure from its intended purpose. Congress must establish clear policies on the use of AI for domestic surveillance and autonomous lethal weapons systems."

- Former CIA Director Michael Hayden and co-signatories, letter to Senate Armed Services Committee

The OpenAI Mirror: One Deal, Two Outcomes


AI systems are now embedded in military analytical workflows globally. The question of who controls the terms of that embedding is what this lawsuit is ultimately about. (Photo: Unsplash)

The most damaging piece of evidence Anthropic has - and it is right there in the public record - is OpenAI's Pentagon deal. Sam Altman announced on the same night Anthropic was blacklisted that OpenAI had reached an agreement with the Department of War that included explicit prohibitions on using its technology for mass domestic surveillance or to direct autonomous weapons systems. "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman wrote. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."

Read that sentence carefully. OpenAI received contractual protections against the exact two use cases Anthropic refused to enable. Anthropic was then designated a supply-chain risk for refusing to enable those same use cases. OpenAI was not. OpenAI said it did not understand why its rival could not reach the same deal.

There are a few possible explanations for this disparity. One is that Anthropic was negotiating in bad faith or that there were substantive differences in the technical implementation of the restrictions - differences not visible in the public statements. Another is that the timing mattered: Anthropic's public pushback, its blog posts, its very public refusals, made it politically impossible for the administration to back down without appearing to capitulate to "woke AI" objections. OpenAI, which had been more strategically quiet during the same period, became the path of least resistance. The third explanation is the one Anthropic's lawsuit argues: the treatment was selectively punitive, triggered by speech rather than policy.

If Anthropic can get the OpenAI deal terms into evidence - and OpenAI's own public statements make that straightforward - the selective-enforcement argument becomes genuinely potent. It is not enough to win automatically. But it is enough to make a judge ask uncomfortable questions.

The Precedent Problem: What This Means Beyond Anthropic

Zoom out from Anthropic's specific situation and the stakes become clearer. The question at the heart of this lawsuit is not whether Anthropic's AI should be used for autonomous weapons - that is a policy dispute, not a constitutional one. The question is whether the executive branch can wield the government's market power as a tool of political compliance. Can a Cabinet secretary effectively sanction any American technology company by designating it a "supply-chain risk" whenever that company's published policies conflict with administration preferences?

If the answer is yes, the implications cascade through every sector of the technology industry. Every company that maintains public ethics guidelines about how its products can be used - which is to say, every major technology company - now faces the prospect of those guidelines being treated as grounds for federal exclusion if an administration disagrees with them. That is not a hypothetical. It is exactly what happened to Anthropic: its published usage policy became the stated rationale for a designation that now threatens hundreds of millions of dollars in annual revenue.

The administration's public response to the lawsuit has been characteristically aggressive. White House spokesperson Liz Huston told Wired that "our military will obey the United States Constitution - not any woke AI company's terms of service." The framing - AI company terms of service versus the Constitution - is rhetorically sharp and legally inverted. Anthropic's lawsuit is the one invoking constitutional protection. The administration is the one arguing procurement authority is unchecked.

Government contracting experts note that the statute requires the Secretary of War to "use the least restrictive means necessary" to protect the supply chain. That phrase is embedded in 10 USC 3252 and is potentially significant. If OpenAI achieved an equivalent security outcome through contractual carve-outs rather than a blanket exclusion, it becomes harder to argue that a blanket exclusion of Anthropic was the "least restrictive" means available. That is a textual argument that does not require constitutional law - just statutory interpretation.

The Battlefield Inside the Courtroom

The immediate battle is the temporary restraining order. Anthropic wants a federal judge to halt enforcement of the designation while the case proceeds. For that to happen, the company needs to demonstrate four things: a likelihood of success on the merits, irreparable harm absent relief, a balance of equities that favors an injunction, and a public interest served by one. The irreparable harm element is the easiest - business relationships once severed are difficult to restore, and every week of designation accelerates the migration of government contractors to alternative AI systems. The likelihood of success is harder, given the traditional deference courts show to national security procurement decisions.

There is also an unusual operational wildcard. Dario Amodei noted in his March 5 statement that Claude is actively being used by "frontline warfighters" in the context of "major combat operations." He offered continued model access at nominal cost for as long as it takes to complete any transition. That framing puts the administration in an uncomfortable position: if enforcing the designation degrades operational capability mid-conflict, and if Anthropic can document that degradation, the "least restrictive means" question becomes very concrete, very fast.

Several defense contractors have reportedly begun moving preemptively - exploring alternative AI providers before the legal outcome is clear, because the reputational and contractual risk of remaining associated with Anthropic during an active government designation is more than their risk tolerance can absorb. Every contractor that migrates creates new facts on the ground, making the TRO more urgent and the ultimate resolution more complex.

"Several AI startups focused on defense uses, such as Vannevar Labs, in recent days have been pitching their models as capable replacements. Palantir did not respond to a request for comment."

- Wired, reporting on the supply-chain scramble, March 9, 2026

What Comes Next: Three Possible Endings

The case is moving faster than most federal litigation. The TRO hearing, if Anthropic's proposed schedule holds, would happen this Friday - March 13. That is four days after the lawsuit was filed. Three scenarios are plausible from here.

The first is a negotiated resolution. The administration grants Anthropic the same contractual terms it gave OpenAI, framed as a clarification rather than a reversal. Both sides claim victory. The lawsuit is withdrawn or becomes moot. This is the path of least disruption and the one most consistent with Amodei's repeated statements that Anthropic shares the military's goals and wants to support warfighters. It requires the administration to be willing to give Anthropic something it publicly refused to give it - which is why the political environment makes this path difficult even though the policy case for it is strong.

The second is a legal win for Anthropic. A federal judge grants the TRO and, ultimately, finds that the designation either exceeded the statute's scope or violated the First Amendment. This would be a landmark ruling - the first time a court has applied First Amendment doctrine to constrain executive branch procurement retaliation against a technology company. It would set a precedent that would protect every AI company with published ethics policies from similar treatment.

The third scenario is a legal loss. Anthropic's constitutional arguments fail to overcome judicial deference to national security determinations. The designation stands. Anthropic faces a sustained commercial battle to replace hundreds of millions in government-adjacent revenue, likely pivoting even harder toward international markets and private enterprise. The broader precedent - that an administration can effectively sanction a domestic tech company using supply-chain-risk law - is established without congressional challenge or reversal.

Which outcome prevails will say something about what kind of relationship the US government and the technology industry are building. The last several years of AI policy have been characterized by close coordination - safety frameworks, government partnerships, mutual interest in keeping AI development centered in democratic countries. Anthropic's lawsuit is a sign that the coordination model has broken down in at least one important sector. Whether courts can hold the line where negotiations could not is what the next few weeks will reveal.

What Happens Next

Mar 11
Government deadline to respond to Anthropic's TRO request (9 PM Pacific).
Mar 13
Proposed TRO hearing before federal judge. Could result in immediate injunction halting designation enforcement.
Ongoing
Defense contractors continue migration decisions regardless of TRO outcome. Each migration creates new facts that complicate eventual resolution.
Q2 2026
Full merits hearing likely. Constitutional questions could eventually reach circuit court level and beyond.

One thing is already settled regardless of how the lawsuit ends: the AI industry's implicit assumption that its relationship with government would stay cooperative and predictable has been shattered. That assumption was always fragile. Now it is gone. Every AI company with a usage policy, every startup that has published an ethics commitment, every foundation model provider that has said its technology cannot be used for certain categories of harm - they are all watching the Anthropic case and updating their calculations about what it costs to have principles in public.

The case docket is Anthropic v. Department of War et al. Filed March 9, 2026. A federal judge in California will decide whether the government's enormous power stops at the text of the First Amendment. The rest of us should be paying close attention to the answer.
