The Last Red Line
The Pentagon is threatening wartime emergency powers to strip safety features from Claude. This is how Anthropic's safety-first culture became a military dependency, and now, a political target.
Claude is indispensable.
That’s the only conclusion you can draw from what the Pentagon is doing, and from why it is standing its ground.
When every frontier AI lab in the country (OpenAI, Google, Meta, xAI, et al.) is bending over backwards to give the Department of Defense whatever it wants, with no restrictions and no red lines, and the Pentagon is still chasing the one company that won’t fully comply, the leverage equation becomes obvious. They need Claude more than Claude needs them.
On February 24, 2026, Defense Secretary Pete Hegseth sat across from Anthropic CEO Dario Amodei at the Pentagon and delivered an ultimatum: sign a document granting the military unrestricted access to Claude for “all lawful purposes” by 5:01 PM on Friday, February 27, or else.
Amodei didn’t sign.
Both sides understand what is actually being decided here: whether a private company gets to draw any red lines at all on how the most powerful AI systems are used in warfare.
What the Pentagon Wants
The demand from the Pentagon: replace Anthropic’s specific usage restrictions with blanket “any lawful use” language. Hegseth’s January 9, 2026 AI Acceleration Strategy mandates this language in all DoD AI contracts within 180 days.
Every other major lab has agreed.
What Anthropic currently permits is broad. Claude is approved for intelligence analysis, foreign surveillance, targeting with human oversight, offensive cyber operations, document review, strategic planning, language translation, and decision support in time-sensitive combat situations. Pentagon officials acknowledge Claude outperforms competitors in several of these categories, particularly offensive cyber, where it reportedly leads the field.
What Anthropic refuses comes down to two specific things.
First: fully autonomous weapons systems, meaning those that select and engage targets without meaningful human supervision.
Second: mass domestic surveillance of American citizens.
That’s it. Two red lines. Everything else is on the table.
Hegseth’s team frames the restrictions as corporate overreach.
The confrontation escalated in February after reports that Claude was used during the January 3 raid that captured Venezuelan President Nicolás Maduro. Pentagon officials accused Anthropic of questioning the operation through its partner Palantir. Anthropic flatly denies this.
At the February 24 meeting, Hegseth used a Boeing analogy: when the government buys a plane, Boeing doesn’t get a say in how it’s flown. A senior Pentagon official told reporters the demand “has nothing to do with mass surveillance or autonomous targeting” because “there’s always a human involved.”
But “any lawful use” means exactly what it says.
“The only reason we’re still talking to these people is we need them and we need them now. The problem for these guys is they are that good.”
How Safety Built the Moat
Claude is the only frontier model currently operational on classified U.S. military networks. Not OpenAI. Not Google. Not xAI. Claude got there first, through the Palantir–AWS partnership, with Impact Level 6 accreditation, in October 2024. It has been operational in classified environments for sixteen months.
xAI’s Grok signed its first classified access deal on February 23, 2026, just one day before the Hegseth meeting. It will take months to operationalize. OpenAI’s classified access is still being negotiated. Google’s Gemini is live on GenAI.mil but only for unclassified systems.
So why did the safety-first company beat everyone into classified environments?
I would argue that it’s because the safety research is the capability.
Think about what Anthropic’s safety team actually works on: reducing hallucinations, understanding failure modes, building interpretability tools, making model behavior predictable under adversarial conditions. Now think about what a military intelligence analyst needs: low hallucination rates when synthesizing classified reports, reliable behavior under novel inputs, auditable reasoning chains, consistent responses that don’t go off the rails with adversarial data.
Anthropic’s Constitutional AI approach produces something unusually valuable in classified settings: a model that is deeply steerable. In consumer or enterprise contexts, this means Claude follows usage policies. In classified contexts, it means Claude can be configured to follow specific operational rules, handle information compartmentalization, and maintain consistent behavior within tightly defined parameters.
Military users don’t want a model that does whatever it wants. They want a model that reliably does what they tell it to do, within boundaries they define. A model trained on constitutional principles is architecturally better at this than a model trained purely to maximize helpfulness.
There’s a deeper argument here.
The use-case restrictions Anthropic stands by and its models’ technical reliability come from the same DNA. You can’t maintain a culture of careful thinking about AI failure modes while simultaneously telling your team you don’t care how the model is used. It’s a cultural argument, not a technical one, and it’s harder to prove empirically. But the departure of Mrinank Sharma, who led the Safeguards Research Team, on February 9, and the reported internal disquiet among engineers about Pentagon work, suggest the culture is real and the risk of degrading it is not theoretical.
What Happens if Anthropic Refuses?
Hegseth put three specific threats on the table at the February 24 meeting. They escalate in severity and in how unprecedented they are.
1. Contract cancellation
The $200 million CDAO prototype contract gets terminated. This is the conventional option. Financially it represents just a small fraction of Anthropic’s $14 billion in annual revenue. But it would mean losing classified network access that took over a year to build, and the institutional relationships that come with it.
2. Defense Production Act invocation
The DPA gives the president authority to force a private company to serve military needs, and to override the company’s decisions about its own products.
Every previous DPA invocation compelled companies to produce more of something: more steel during the Korean War, more ventilators during COVID, more vaccine doses, more EV battery minerals. This would compel a company to produce differently, that is, to remove safety features from an existing product. There is no precedent for it.
Even Dean Ball, a former senior Trump White House AI policy adviser and no friend to Anthropic’s politics, called this overreach. The DPA threat, he argued, is unnecessary when willing alternatives exist, and would amount to the government saying “if you disagree with us politically, we’re going to try to put you out of business.”
3. “Supply chain risk” designation
This is the nuclear option. Normally reserved for foreign adversaries such as Chinese telecom firms and Russian software companies, this designation would effectively blacklist Anthropic from all federal contracting, not just defense. It would also require every company doing business with the Pentagon, including Microsoft, Google, and Amazon, to sever ties with Anthropic entirely.
This move would signal to every technology company in the country that maintaining safety restrictions the government dislikes carries existential risk.
Who Decides
As Alan Rozenshtein of Lawfare framed it: the rules governing how the military uses the most transformative technology of the century are being set through bilateral negotiations between a defense secretary and a startup CEO.
I wrote last year about the emerging pattern of government interference with technology companies, and the ways in which political pressure, regulatory threats, and procurement leverage are being used to reshape how technology works, not just where it’s sold. This confrontation is that pattern reaching its logical extreme: the government threatening to invoke wartime emergency powers to force a company to remove safety features from its AI.
What I keep coming back to is the structural irony at the center of this. The safety culture the Pentagon is trying to override is the same culture that produced the model it can’t replace. The careful thinking about failure modes, the investment in reliability, the institutional discipline around constraints: these are the very reasons Claude is on classified networks and Grok isn’t, and why the Pentagon is issuing ultimatums instead of simply switching vendors.
Friday will tell us where they stand.