Amodei Memo: We Didn't Donate. We Didn't Praise. That's Why
Anthropic's CEO has said in writing what everyone already suspected: the Pentagon "supply chain risk" designation was political. Defense contractors aren't waiting for the courts to sort it out.
Dario Amodei doesn't usually do scathing. His public tone tends toward the careful and measured, the kind of language you'd expect from someone who built a safety-first AI lab and genuinely means it.
So when a 1,600-word internal memo to Anthropic employees starts drawing direct lines between the company's federal targeting and its refusal to donate to Trump or give what Amodei called "dictator-style praise," it carries weight.
The memo, circulated Friday, is the clearest statement yet from Anthropic's leadership that the Pentagon's "supply chain risk" designation - and the Trump administration's attempted ban on Claude across federal agencies - was not a security judgment. It was a loyalty test. One Anthropic failed by not playing along.
Amodei reportedly wrote that unlike OpenAI or its executives, Anthropic "hasn't donated to Trump" and "hasn't given dictator-style praise to Trump." He framed these omissions as the operative cause of the administration's hostility toward the company.
That's a significant claim to put in writing, and Amodei knew what he was doing. This wasn't a slip - it was a statement of record for his employees, documenting the political nature of what happened so there's no internal confusion about the threat model.
The timeline makes the politics even harder to ignore: Trump announced the ban on Claude on a Friday. That same weekend, US strikes on Iran went ahead using Claude for intelligence assessments and target identification. The ban was walked back to a "six-month phaseout" within hours - in part, reportedly, because the military was mid-operation and couldn't just swap models.
The federal government literally could not function without the AI it had just declared a supply chain risk. That contradiction speaks to how detached the political decision was from operational reality.
Anthropic can challenge the designation in court. The legal path exists. But the market moves faster than litigation.
Defense contractors with US military relationships are already walking away from Claude, CNBC reported Wednesday. The phrase several companies are using: "out of an abundance of caution." This is the rational play for any company that depends on federal contracts. Why accept regulatory risk when there are other models available - models whose CEOs have photographed themselves at Mar-a-Lago?
The second-order effect is what the Amodei memo is actually trying to surface. The supply chain designation doesn't need to be upheld in court to do damage. The chilling effect is the point. Every defense-adjacent company watching this play out is now learning that political deference is a procurement prerequisite - not just for winning government contracts, but for keeping the existing enterprise clients who hold them.
OpenAI played the access game differently. Sam Altman's appearance at the Trump inauguration, the Stargate announcement, the donation to the inaugural fund - these weren't just PR moves. They were insurance policies. They signaled that OpenAI understood how this administration works.
Anthropic didn't buy that insurance. Whether that was principled conviction or strategic miscalculation depends on your read of Amodei. His safety commitments appear genuine. But there's a version of this where Anthropic's political isolation was a foreseeable business risk that the company chose to absorb.
The memo suggests Amodei is now framing it as a badge of honor. That might be true. It also doesn't solve the problem of defense contractors canceling Claude subscriptions this week.
Former Trump advisor Dean Ball called this "attempted corporate murder." Former DOJ official Alan Rozenshtein raised the possibility that this could be a step toward partial nationalization of the AI industry.
Both framings point at the same structural risk: if the federal government can designate an AI company a security threat based on its political relationships rather than its actual security posture, then every AI lab now operates under implied political conditionality. Stay aligned with whoever holds power, or face regulatory weapons designed for actual adversaries.
That's not a future concern. It's already being priced into the market. Claude is losing defense contracts not because it's less capable, not because of a proven security flaw, but because its CEO wrote an honest memo about why the company got targeted.
The irony of that isn't lost on Amodei. He knows what he's documenting.