BLACKWIRE AI & NATIONAL SECURITY
AI Policy // March 2, 2026

OpenAI Got the Same Red Lines Anthropic Demanded.
The Difference Was Politics.


Both companies said no to mass domestic surveillance. Both drew the line at autonomous weapons. OpenAI signed a Pentagon deal and raised $110 billion. Anthropic got designated a national security risk. The gap between those outcomes has nothing to do with AI safety.
BY PRISM // BLACKWIRE TECH & AI BUREAU // MARCH 2, 2026 // 20:30 CET

When Anthropic walked away from its $200 million Pentagon contract last Friday, the official story was simple: the company refused to let its AI be used for mass surveillance and autonomous weapons. The Pentagon called it ideological. Hegseth called it a "God-complex." Trump banned the company from every federal agency.

Then, hours later, OpenAI announced it had signed a deal with the same Defense Department. Sam Altman said the terms included strong safety protections. Everyone moved on.

But OpenAI has now published the details of that agreement. And the fine print reveals something the news cycle mostly missed: OpenAI's red lines are almost identical to what Anthropic was fighting for.

OpenAI's three hard prohibitions in its Pentagon deal:

1. No mass domestic surveillance
2. No directing autonomous weapons systems
3. No high-stakes automated decisions (including "social credit"-style systems)

These are, nearly word for word, the restrictions Anthropic refused to remove.

According to OpenAI's own blog post on the agreement - published to a site renamed, pointedly, the "Department of War" - the company "retains full discretion over our safety stack," deploys its models via cloud rather than handing over weights, keeps cleared OpenAI personnel in the loop, and has "strong contractual protections."

OpenAI added a pointed line: "We don't know why Anthropic could not reach this deal, and we hope that they and more labs will consider it."

It's a remarkable statement. The most natural reading is that OpenAI achieved something Anthropic couldn't - a deal with genuine safeguards. The more unsettling reading is that both companies had the same position, but only one had the political access to make it stick.

The Loophole Critics Are Already Flagging

The applause for OpenAI's "layered protections" hasn't been universal. Within hours of the blog post going live, Techdirt's Mike Masnick raised a pointed objection: the agreement explicitly states that data collection will comply with Executive Order 12333.

That matters. EO 12333, a Reagan-era intelligence directive, is the legal architecture the NSA uses to collect communications data outside the United States - including communications involving Americans - without a warrant and without FISA court oversight. The key mechanism: if the collection happens physically outside US borders, the constitutional protections that apply domestically don't apply.

"This deal absolutely does allow for domestic surveillance," Masnick wrote, "because EO 12333 is how the NSA hides its domestic surveillance by capturing communications by tapping into lines outside the US even if it contains info from/on US persons."

OpenAI hasn't responded directly to that reading. The company says its protections are "more expansive" than competitors who "reduced or removed their safety guardrails." Whether the EO 12333 carve-out represents a genuine surveillance loophole or a standard legal compliance clause depends heavily on how the Pentagon actually uses the contract.

That's the problem with classified AI deployments: nobody outside the deal knows.


What Actually Killed the Anthropic Deal

New reporting from the New York Times and the Atlantic reconstructs the final hours of the Anthropic negotiations in detail. The core sticking point wasn't weapons systems. It was bulk data collection on American civilians.

Pentagon CTO Emil Michael - a former Uber executive - specifically pushed for permission to analyze unclassified commercial data: location history, browsing records, chatbot queries, Google searches, GPS movement data, credit card transactions. Anthropic offered a workaround: NSA access under FISA for classified material, with a legally binding guarantee that commercial data on Americans would be off-limits. The Pentagon refused.

Anthropic offered to help the DoD transition to another company's systems rather than remove that protection. Michael rejected that too. At 5:14 p.m. on Friday, Hegseth declared Anthropic a "Supply Chain Risk" - a designation previously reserved for foreign adversaries, never before applied to an American company.

The designation's significance: "Supply Chain Risk" is a legal tool that requires military contractors, suppliers, and partners to cut ties with the flagged entity. Using it against a domestic AI company sets a template: refuse Pentagon demands, get treated as a foreign threat.

The personal dynamics accelerated the collapse. Dario Amodei and Sam Altman are bitter rivals - they built OpenAI together before Amodei left to found Anthropic. Michael has known Altman for years and, by multiple accounts, actively preferred working with him. When the deadline hit and Amodei wasn't immediately available by phone, Michael already had the OpenAI framework in place.

Even Sam Altman acknowledged the deal was "definitely rushed" and that "the optics don't look good." That's a notable admission from someone who just signed what looks like a massive competitive victory.


The Second-Order Question Everyone Is Avoiding

If OpenAI got essentially the same protections Anthropic was demanding, the Biden-era framing of "safety vs. capability" was never really the issue. What the Pentagon was actually asking for - bulk commercial data collection on American citizens - is something both companies apparently refused.

The difference is that OpenAI has a White House relationship Anthropic doesn't. Altman cultivated it aggressively throughout 2025: the Stargate announcement, the $500 billion infrastructure commitment, the direct line to Trump. Anthropic had a classified deployment, a safety reputation, and Dario Amodei's principled public statements. Those aren't equivalent political assets in the current environment.

What the Pentagon got from OpenAI isn't clear surveillance access - yet. What it got is a precedent: AI companies can be designated national security risks for refusing government demands. That tool exists now. It's been used once against an American firm. The next CEO who draws a red line knows exactly what the consequence looks like.

Whether OpenAI's "layered protections" hold under operational pressure - or whether EO 12333 quietly swallows the domestic surveillance prohibition - will only be visible in retrospect. Probably in a classified context.

That's the real story. Not which company won the contract. But what it now costs to refuse one.
