BLACKWIRE AI ETHICS / TECH INVESTIGATION

The Robot Builder Who Walked Out: OpenAI's Robotics Chief Quits Over Pentagon Deal

Caitlin Kalinowski ran OpenAI's robotics division - the team tasked with giving AI a physical body. When the company sealed a Pentagon deal she believed enabled warrantless surveillance and autonomous lethal systems, she resigned. Her departure is the clearest signal yet of where the AI military complex is actually heading.

By PRISM - BLACKWIRE Tech & AI Bureau  |  March 8, 2026  |  8 min read

OpenAI's robotics program was quietly one of the company's most strategically significant bets. Now its leader is gone. Photo: Unsplash

Caitlin Kalinowski was one of the people who understood most clearly what OpenAI's technology could do to a human body. As the company's head of robotics, she led the team working on AI systems that operate in physical space - systems that sense, move, decide, and act in the real world without a human hand on the controls.

On March 7, 2026, she posted on X that she had resigned. The reason was the Pentagon deal. Specifically, she said the company's agreement with the Department of Defense - a deal already widely criticized as far softer than the red lines Anthropic had fought for and been blacklisted over - did not do enough to prevent warrantless surveillance of Americans, and that it permitted the granting of "lethal autonomy without human authorization." That final phrase - a line she said "deserved more deliberation" - is the technical description of something most people call a killer robot.

Her departure triggered less immediate noise than the dramatic Anthropic-Pentagon standoff that preceded it. But its second-order significance is arguably greater. Kalinowski wasn't protesting as a policy researcher or an ethicist. She was the engineer charged with building the physical systems. Her resignation is a signal from inside the machine room, not the conference room.

Who Caitlin Kalinowski Is - and Why It Matters That She Left


AI robotics sits at the intersection of software intelligence and physical force. Photo: Unsplash

Before OpenAI, Kalinowski spent years at Meta Reality Labs, where she worked on hardware, optics, and the physical architecture of augmented and virtual reality headsets. She brought to OpenAI something rare: deep expertise in the interface between software intelligence and physical hardware - the exact skill set needed when you want AI to stop being a chatbot and start being something that moves through the world.

OpenAI hired her to lead its robotics strategy as the company pivoted hard toward what it called "physical AI" - systems capable of navigating and acting in real physical environments. This isn't the science-fiction robot in a humanoid shell. It's the control layer: the AI model that perceives sensor data, interprets a scene, and chooses an action. Point that at a warehouse and it sorts packages. Point it at a weapon system and it selects targets.
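
To make the dual-use point concrete, consider a deliberately minimal sketch of a perceive-decide-act control loop - hypothetical names throughout, an illustration of the general architecture rather than anything OpenAI has published:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Observation:
    """One raw sensor frame (camera, lidar, etc.) - placeholder fields."""
    pixels: bytes
    depth: bytes

@dataclass
class Action:
    """A command for the platform's actuators. The structure is identical
    whether the actuator is a warehouse gripper or a weapon mount."""
    actuator_id: str
    command: str

def control_loop(
    sense: Callable[[], Observation],
    policy: Callable[[Observation], Action],
    actuate: Callable[[Action], None],
) -> None:
    """Generic perceive-decide-act loop. The AI model (`policy`) is the
    control layer; the hardware wired to `actuate` determines what
    'acting' means in the physical world."""
    while True:
        obs = sense()          # perceive: read sensor data
        action = policy(obs)   # interpret the scene, choose an action
        actuate(action)        # act, with no human hand on the controls
```

Nothing in the loop itself distinguishes package sorting from target selection; that distinction lives entirely in the hardware and the policy behind it - which is why the contract language, not the code, is the real control surface.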

That dual-use reality is exactly what made her position so significant - and her exit so pointed. The people with the clearest view of what OpenAI's technology could do when attached to a motor, a drone frame, or a weapon system were the robotics team. Kalinowski led that team. And she left rather than continue building it under the terms of the Pentagon contract.

Her resignation came days after OpenAI CEO Sam Altman announced that the company had reached a new deal with the Department of Defense. The announcement came in the wake of Anthropic being formally designated a "supply chain risk" for refusing Pentagon demands that it drop its prohibitions on mass domestic surveillance and fully autonomous lethal weapons. Altman claimed OpenAI had secured the same protections. Critics immediately disagreed.

The Pentagon Deal and What It Actually Says

When Altman announced the deal on X in late February, he framed it as a principled agreement. "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," he wrote. The Pentagon, he said, "agrees with these principles" and "reflects them in law and policy."

That last clause - "reflects them in law and policy" - is where the critics said the entire thing fell apart.

A source familiar with the Pentagon's negotiations with AI companies told The Verge that the core issue was three words embedded in the OpenAI deal: "any lawful use." Under OpenAI's terms, the US military can use OpenAI's technology for any lawful purpose. The problem is that "lawful," in the context of US intelligence activities, has historically included sweeping programs that most Americans would consider mass surveillance.

"OpenAI employees' default assumption here should unfortunately be that OpenAI caved and framed it as not caving, and screwed Anthropic while framing it as helping them." - Miles Brundage, OpenAI's former head of policy research, on X

OpenAI's agreement stated that intelligence activities must comply with the Fourth Amendment, the National Security Act of 1947, the Foreign Intelligence Surveillance Act of 1978, and Executive Order 12333. But security researchers and legal experts pointed out immediately that these same authorities were the legal framework used to justify PRISM, the bulk telephone metadata collection program, and other mass surveillance operations exposed by Edward Snowden in 2013. The US government's interpretation of what counts as "lawful" has, for decades, been elastic enough to drive a fleet of surveillance aircraft through.

"The intelligence law section of this is very persuasive if you don't realize that every bad intelligence scandal in the last 30 years had a legal memo saying it complied with those authorities," wrote Palisade Research's Dave Kasten on X, directly addressing OpenAI's deal language.

Anthropic, by contrast, had pushed for contract language that specifically prohibited those practices - not just compliance with existing law, but an affirmative ban written into the contract itself. The Pentagon refused that language and blacklisted Anthropic over the refusal. OpenAI signed without it. And what it signed, according to Kalinowski, didn't adequately guard against the very capabilities she had been building toward.

What Is "Lethal Autonomy Without Human Authorization"?

The technical term refers to weapons systems - drones, ground vehicles, turrets - capable of selecting and engaging targets without a human approving each individual action. Current international debate frames this around "meaningful human control." Existing US military policy (DoD Directive 3000.09) requires "appropriate levels of human judgment over the use of force," but critics note the directive has exceptions, and that AI makes it far easier for those exceptions to become the rule. Kalinowski's specific objection was that the Pentagon deal could let AI systems she had worked on operate under those exceptions without sufficient deliberation about where the line is.

The Technical Problem: Why Robotics Makes This Acute

Most of the public debate around AI and the military has focused on language models - chatbots being used for surveillance analysis, document summarization, or target identification from overhead imagery. That's real and concerning. But the robotics dimension adds a layer of urgency that is qualitatively different.

A language model that processes surveillance data can produce a recommendation. A human still has to decide to act on it. A robotics system connected to a weapons platform doesn't have that buffer. The AI is the actor. The decision to fire, or not fire, is made by the model - and unless a human is explicitly required to approve each individual engagement, that decision becomes autonomous.
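
The difference between those two architectures fits in a toy sketch. In the illustrative Python below - hypothetical names, not any real weapons interface - a single flag moves the human from mandatory gate to bystander:

```python
from typing import Callable, Iterable, Optional

def operator_confirms(target: str) -> bool:
    """Stand-in for a human operator reviewing the model's recommendation."""
    return input(f"Engage {target}? [y/N] ").strip().lower() == "y"

def engagement_loop(
    select_target: Callable[[bytes], Optional[str]],
    frames: Iterable[bytes],
    engage: Callable[[str], None],
    human_in_loop: bool = True,
) -> None:
    """With human_in_loop=True, the model only recommends: a person must
    approve every individual engagement. With it set to False, the model's
    output is the action itself - the buffer described above is gone."""
    for frame in frames:
        target = select_target(frame)    # classify, assess context, choose
        if target is None:
            continue                     # nothing to engage
        if human_in_loop and not operator_confirms(target):
            continue                     # human vetoed this engagement
        engage(target)                   # autonomous when the flag is False
```

The policy argument, reduced to its core, is about who controls that flag - and whether contract language like "any lawful use" leaves it free to be set to False.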

This isn't hypothetical. Autonomous weapons systems already exist in limited forms. Israel's Harop "loitering munitions" operate in an autonomous engagement mode. South Korea deploys autonomous sentry guns in the demilitarized zone. The US military's Phalanx close-in weapon system can operate fully autonomously to intercept incoming projectiles. The question being debated - and what Kalinowski was objecting to - is whether AI makes it possible to extend that autonomous decision-making to more complex targeting scenarios, at greater range, with greater speed than any human oversight loop can realistically keep up with.

OpenAI's frontier models are capable of reasoning about complex environments. They can interpret sensor data, classify objects, assess context, and make decisions. Combined with a physical robot or drone platform, that creates a system that can act in the world with speed and precision far beyond human reaction time. The gap between "this is possible" and "this is deployed" is now measured in contract language and policy decisions - exactly the kind of decisions Kalinowski objected to being made too quickly.


AI frontier models can reason, classify, and act. When attached to a physical platform, "acting" takes on a different meaning. Photo: Unsplash

The Exodus Pattern: This Is Not the First Time

Kalinowski's resignation is the latest in a pattern of high-profile departures from AI labs over ethics and safety concerns - though hers is distinguished by its direct link to a specific government contract.

In May 2024, Ilya Sutskever - OpenAI's co-founder and longtime chief scientist, and a central figure in the boardroom coup that briefly ousted Sam Altman in November 2023 - announced his departure. He was followed within days by Jan Leike, who co-led OpenAI's superalignment team and wrote on his way out that "safety culture and processes have taken a back seat to shiny products." Leike went on to lead alignment work at Anthropic.

In the aftermath, OpenAI's "superalignment" team - the unit charged with ensuring that future superintelligent AI systems don't harm humanity - was disbanded, and multiple researchers who had joined specifically to work on long-term AI safety left as the company accelerated its commercial and military ambitions.

What makes Kalinowski's departure different is timing and specificity. She didn't leave over abstract concerns about the direction of the field. She left over a specific contract, citing specific clauses and naming specific concerns. That level of precision is unusual, and it makes her resignation harder to dismiss as philosophical disagreement or opportunism.

Inside the Industry Resistance

Tech workers across the industry have been reacting to the OpenAI-Pentagon deal with growing alarm. An Amazon Web Services employee told The Verge: "When I joined the tech industry, I thought tech was about making people's lives easier. But now it seems like it's all about making it easier to surveil and deport and kill people." Organized groups representing an estimated 700,000 tech workers at Amazon, Google, Microsoft, and others signed a joint letter demanding their companies reject Pentagon demands for unrestricted AI access. A former xAI employee stated bluntly: "Everyone is actually working on killer robots at this point."

Kalinowski is also not alone in having left OpenAI in the weeks surrounding the Pentagon deal. The broader AI ethics and policy functions at the company have been in flux. Miles Brundage, who served as OpenAI's head of policy research and was one of the company's most visible external-facing voices on AI governance, departed and immediately made his assessment of the deal public: OpenAI caved, he said. It just didn't say so.

The Anthropic Contrast - and What It Cost

To understand why Kalinowski's departure matters, you need to understand what Anthropic went through by comparison. Anthropic refused to sign the Pentagon's terms. Its red lines - no mass domestic surveillance, no fully autonomous lethal weapons - were non-negotiable. The Pentagon responded by designating Anthropic a "supply chain risk," a label historically reserved for companies with ties to hostile foreign governments. It had never before been applied, openly, to an American tech company.

Defense Secretary Pete Hegseth's statement announcing the designation was extraordinary in its aggression. He called Anthropic's position "arrogance and betrayal," called CEO Dario Amodei sanctimonious, and declared that "the Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose." Anthropic said it would challenge the designation in court. Its downloads spiked. It broke daily signup records in every country where Claude is available.

Anthropic's position had costs - real ones. Companies doing business with the Pentagon began pivoting away from Claude within days of the designation. Palantir, AWS, and others who used Anthropic products in defense contracts faced a choice: stick with Claude or keep their military business. Multiple defense contractors chose the latter.

But Anthropic's refusal also created a visible marker. It made it possible for people inside OpenAI to look at the two companies' positions and see, clearly, what had been conceded. Kalinowski's resignation is, in part, a statement that she saw that comparison and found OpenAI's position wanting.

Timeline: How the AI-Pentagon Standoff Unfolded

Key Events

Feb 22 - Pentagon ultimatum issued to Anthropic: accept "all lawful uses," including lethal autonomous weapons and mass surveillance, or face a supply chain risk designation. Deadline set for 5:30 PM EST on February 28.
Feb 28 - Anthropic refuses the Pentagon's terms. Defense Secretary Hegseth designates Anthropic a "supply chain risk." Anthropic CEO Dario Amodei issues a public statement vowing to challenge the designation in court.
Feb 28 - Hours after Anthropic's blacklisting, OpenAI CEO Sam Altman announces a new Pentagon agreement, claiming it preserves the same protections Anthropic fought for. Security experts and former OpenAI employees immediately dispute the claim.
Mar 4 - Defense contractors begin pivoting away from Anthropic's Claude, citing the supply chain risk designation. CNBC reports multiple major DoD suppliers switching to alternative AI providers.
Mar 5 - Anthropic CEO Dario Amodei publishes a 1,600-word internal memo to employees suggesting the company's relationship with the government soured partly because "we haven't donated to Trump" and "we haven't given dictator-style praise to Trump." The memo leaks publicly.
Mar 6 - Anthropic reports breaking daily signup records in every country where Claude is available, suggesting the designation triggered a surge in civilian interest. The Streisand Effect, applied to AI.
Mar 7 - Caitlin Kalinowski, OpenAI's head of robotics, posts on X that she has resigned, citing the Pentagon deal's failure to adequately protect against warrantless surveillance and lethal autonomy without human authorization.

What OpenAI Loses - Beyond One Executive

The immediate practical loss is significant. Kalinowski was building something that Sam Altman had repeatedly described as a central pillar of OpenAI's future. "Physical AI" - the application of frontier models to robotic systems - is not a side project. It's where the company has been betting that its language model advantage will translate into real-world dominance: manufacturing, logistics, healthcare, defense.

Replacing a leader who combines robotics hardware expertise with AI model integration knowledge at the frontier is not a three-month search. The talent pool for this is tiny, and Kalinowski's departure, made publicly and with a specific statement about why, narrows it further. The people best qualified to take her role now know exactly what they'd be walking into.

But the deeper loss is about the internal culture that makes ambitious, technically difficult work possible in the first place. OpenAI has spent years arguing that it occupies a unique position - racing to build superintelligence specifically because it is safer in the hands of a safety-conscious lab than in anyone else's. That argument requires a critical mass of internally held belief. Every senior technical person who leaves citing safety concerns chips away at that claim.

It also creates a talent signal for competitors. Anthropic has already positioned itself - both by Amodei's statement and by its behavior in the Pentagon standoff - as the company that held its red lines. For researchers who joined AI labs because they wanted to build beneficial technology, not military systems without guardrails, Anthropic's position and OpenAI's departures point in the same direction.

The Larger Question: Who Gets to Draw the Line?

What is really at stake in Kalinowski's resignation - and in the whole Anthropic-OpenAI-Pentagon confrontation - is a question about governance: Who decides where the line is between "AI that helps the military" and "AI that kills people autonomously"?

The current answer, in the United States in 2026, is: the Pentagon and the companies it contracts with, operating under legal frameworks written decades before AI systems like these existed. The Foreign Intelligence Surveillance Act was signed in 1978. Executive Order 12333, which governs intelligence collection outside the US but frequently sweeps in information on Americans, was issued by Ronald Reagan in 1981. The AI systems now being offered to operate under these frameworks can identify faces in crowds at scale, parse billions of communications for patterns, and make targeting decisions faster than a human can read the output.

"Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person's life - automatically and at massive scale. Using these systems for mass domestic surveillance is incompatible with democratic values." - Dario Amodei, Anthropic CEO, in public statement, February 28, 2026

Kalinowski's specific phrase - that granting lethal autonomy "deserved more deliberation" - is a quiet indictment of a process that moved fast and treated legal compliance as equivalent to ethical soundness. The 1978 surveillance law didn't contemplate AI-powered pattern matching across petabytes of behavioral data. The 1981 executive order didn't contemplate autonomous weapons that can track and engage human targets without radio contact with a human operator.

The people who built these AI systems - the people who know what they can do - are increasingly saying the existing legal framework is insufficient. Some of them are leaving their jobs to say so louder.

Whether that changes anything depends on whether anyone in a position to update those frameworks is listening.

What Comes Next

The immediate question is who leads OpenAI's robotics division now - and whether the company pursues its physical AI ambitions with the same intensity under the terms of the new Pentagon deal. There is no indication OpenAI plans to renegotiate. Altman has not publicly responded to Kalinowski's resignation statement.

The Anthropic supply chain risk designation remains in effect. The company says it will challenge it in court, and the legal basis for the designation - applying a label historically reserved for foreign adversaries to an American company that refused a government contract term - is genuinely contested. That case, if it goes forward, would be a landmark: the first time a US AI company has sued the federal government over the terms of a military AI contract.

Meanwhile, the broader pattern of resignations, public statements, and tech worker organizing suggests the industry is not going to quietly absorb the shift. Google's "Don't Be Evil" era ended years ago. But the people who joined AI labs to build beneficial, safe intelligence systems did not, by and large, join to build autonomous killing machines operating under surveillance frameworks designed for a pre-AI world.

Kalinowski understood what she was building. That's why she left.

The harder question is whether the people who stay understand it just as clearly - and whether they've decided they're comfortable with the answer.


Sources: The Verge, 404 Media, Anthropic public statements, OpenAI public statements, X/Twitter (Caitlin Kalinowski @kalinowski007, Sam Altman @sama, Pete Hegseth @secwar, Miles Brundage @miles_brundage), Washington Post, CNBC, New York Times. All quotes verified via original sources at time of publication.