PRISM // AI & SURVEILLANCE

Altman Admits the Pentagon Deal Was 'Sloppy' - Then Adds the Same Clauses Anthropic Died For


BY PRISM  |  MARCH 3, 2026  |  BLACKWIRE TECH BUREAU

Sam Altman signed a Defense Department AI contract on Friday without the surveillance guardrails his company had publicly promised to insist on. By Monday night, he was on X calling it "opportunistic and sloppy" and announcing an amendment. The amendment contains almost exactly what Anthropic asked for - the same demands that got Anthropic labeled a supply-chain threat.

The speed of the reversal tells you something. OpenAI cut the Pentagon deal hours after Anthropic was blacklisted - hours after Altman had posted an internal memo saying OpenAI shared Anthropic's "red lines." Then the Iran strikes happened, the news cycle churned, and the backlash came in fast enough to get Altman drafting corrections over the weekend.

The new contract language, per Altman's Monday post on X, now includes explicit prohibitions on domestic surveillance of U.S. persons, restrictions on use of commercially acquired location and browsing data for tracking Americans, and a Pentagon acknowledgment that OpenAI's tools won't be routed through intelligence agencies like the NSA.

The Irony Is Almost Architectural

That last point deserves a moment. Anthropic's whole argument - the one that got it designated a supply-chain risk and frozen out of federal contracting - was that it needed legally binding promises the Pentagon wouldn't use Claude for mass surveillance of Americans. The Pentagon's response: those things are already illegal, so we're not going to promise not to do illegal things.

OpenAI, sensing an opportunity in Anthropic's exile, moved fast and signed without those protections. Now it has added protections that are, as Gizmodo put it, largely "verbiage" around things that are ostensibly already illegal.

The practical gap: The Pentagon's position throughout has been that it wants to retain the right to do anything "legal" - which leaves enormous grey space around what surveillance is permissible under existing law. Anthropic wanted a contract that explicitly closed that grey space. The new OpenAI language references the Fourth Amendment and FISA but does not appear to go further than what existing law already requires.

Whether that satisfies the critics who flagged the original deal as a blank surveillance check is a different question from whether it satisfies the letter of Anthropic's demands. The distinction matters because Anthropic is still blacklisted. OpenAI got the deal - and then, after pressure, got contract language that looks substantively similar to what Anthropic was asking for all along.


What OpenAI Actually Signed Up For

The context makes Friday's deal stranger. Within a 24-hour window, the Trump administration blacklisted a leading AI company for asking for civil liberties protections, then signed a competing AI company to a deal - one that was missing those same protections. The message sent to the industry was clear: sign now, negotiate ethics later.

Altman's Monday admission that he "shouldn't have rushed" is notable not because it's surprising but because it was so clearly necessary. OpenAI employees were unhappy. The AI safety community noticed. The ACLU noticed. The sequence - Anthropic punished for demanding protections, OpenAI rewarded for not demanding them, then quietly adding similar protections days later - is a story about what institutional pressure actually accomplishes.

"We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy." - Sam Altman, X, March 3, 2026

Altman's framing positions OpenAI as a moderating force, a company that stepped in to prevent a more dangerous outcome - presumably an AI lab completely without guardrails getting the contract. That may or may not be accurate. But it assumes a counterfactual: that refusing would have left the Pentagon with worse options, rather than just pushing the administration to reconsider its stance on Anthropic.


The Precedent Problem

The episode clarifies something about how AI companies are going to navigate government relationships under this administration. The playbook so far: sign first, add language under pressure, call it a win. Anthropic tried the reverse - negotiate protections upfront, walk away if they aren't guaranteed - and got designated a national security threat for it.

The chilling effect on other AI companies watching this is real. If the consequence of refusing to sign without civil liberties protections is being frozen out of federal contracting, the rational move is to sign and then fight for amendments. That is exactly what OpenAI did, whether intentionally or accidentally.

It's also worth noting what Altman confirmed: OpenAI's models won't be used by the NSA under this deal. That's a harder limit than just surveillance restrictions. Whether the Pentagon will actually honor it - and how enforcement of these contract terms works in practice inside a classified military-intelligence environment - is a question no one has answered publicly.

Anthropic Stays Frozen

The amendment doesn't change Anthropic's status. It remains designated a supply-chain risk, barred from major federal contracts. There's no indication the administration is reconsidering that position, and OpenAI's retroactive addition of similar protections doesn't provide political cover for reversing it - if anything, it shows the administration was willing to accept those protections when a friendlier company asked.

The practical consequence: the dominant civilian AI models deployed across the U.S. military and intelligence apparatus are now OpenAI's, with Anthropic locked out. Both companies' models were built on similar training philosophies and safety frameworks. The difference wasn't the technology. It was who asked for protections first.

That's the story the amendment doesn't fix.
