Pentagon AI Deals Exclude Anthropic


Strategic Overview

  • 01. On May 1, 2026, the Department of War announced classified-network AI agreements with seven leading firms — OpenAI, Google, Nvidia, Microsoft, AWS, SpaceX, and Reflection — plus Oracle shortly after, while Anthropic was pointedly excluded.
  • 02. The approved vendors will deploy frontier AI on Impact Level 6 (secret) and Impact Level 7 (most highly classified) networks for data synthesis, situational understanding, and warfighter decision-making.
  • 03. Anthropic was designated a 'supply chain risk' on February 27, 2026 — the first time the label has ever been applied to an American company — after refusing to drop its red lines on autonomous weapons and mass domestic surveillance and allow Claude to be used 'for all legal purposes'.
  • 04. Pentagon CTO Emil Michael confirmed Anthropic remains blacklisted, but separated the company from its Mythos model, which the NSA was reportedly already using despite the DoD ban.

Deep Analysis

A Label Built for Huawei, Aimed at a U.S. Lab

The Pentagon's most consequential move isn't who it picked — it's the tool it used to punish who it didn't. The 'supply chain risk' designation is an authority historically reserved for foreign adversary firms like Huawei, intended to keep equipment from hostile states out of sensitive networks. On February 27, 2026, Defense Secretary Pete Hegseth applied that authority to Anthropic, with formal letters dated March 3. According to Mayer Brown's contracting analysis, this was the first time the label has ever been turned on an American company, and the cascade is doing exactly what the label was designed to do against a foreign threat: cut the company out of the supply chain entirely.

The mechanics matter. Hegseth's directive bars any contractor, supplier, or partner doing business with the Pentagon from engaging in any commercial activity with Anthropic. CNBC and Military Times reporting documents defense-tech firms already dropping Claude to preserve their Pentagon revenue, and reports that the Pentagon asked Boeing and Lockheed Martin to assess their reliance on Claude read as the Defense Production Act-style coercion they functionally are. The May 1 multi-vendor announcement, in which OpenAI, Google, Nvidia, Microsoft, AWS, SpaceX, Reflection, and (shortly after) Oracle all received approval for IL6 and IL7 classified-network deployment, is the back half of the same play: lock Anthropic out of the highest-value AI revenue stream in defense, and let competitors fill the vacuum.

The Mythos Paradox: One Government, Two Postures

If Anthropic is genuinely a national-security supply chain risk, the NSA didn't get the memo. Axios reported on April 19 that the NSA was already using Anthropic's Mythos Preview model, and on May 1 Pentagon CTO Emil Michael went on CNBC and effectively confirmed the split: Anthropic-the-company is still blacklisted, but Mythos-the-model is 'a separate national security moment.' That phrasing isn't a reconciliation — it's an admission that a model the DoD has formally branded as too risky to source is, in fact, valuable enough that another arm of the same government is quietly using it.

This contradiction is more than embarrassing — it's a legal liability. Lawfare's analysis explicitly cites the NSA-Mythos arrangement as evidence that the supply chain finding is pretextual: the very capability the government is supposedly protecting itself from is the one its premier signals-intelligence agency wants. For Anthropic's lawyers, the paradox is gold. For Pentagon contractors trying to comply with Hegseth's no-commercial-activity rule, it's chaos: they're forbidden from doing business with a vendor whose flagship product the NSA is actively evaluating. The official posture is unified; the actual government is not.

The Red Line That Triggered the Standoff

Strip away the procurement language and the dispute reduces to two carve-outs Anthropic refuses to drop: fully autonomous weapons and mass domestic surveillance. Hegseth's February 24 ultimatum demanded that Claude be usable 'for all legal purposes.' Anthropic, in its own public statement, said its acceptable-use policy excludes precisely those two categories — and that it would not bend. CBS News and Anthropic's own filings confirm that the original July 2025 $200M contract had been signed under the company's AUP; the new demand was a unilateral attempt to renegotiate that floor.

The public response surfaced the political stakes faster than Washington did. Coverage of Anthropic's refusal drew tens of thousands of upvotes on r/news, with the dominant frame being a private company drawing a moral line the government wouldn't — commenters openly said they were switching from ChatGPT to Claude in protest. CNN's broadcast segment, framed as the Pentagon shunning Anthropic, reinforced the same narrative. Whatever the final legal verdict, Anthropic has already converted the standoff into a brand position no rival can match: the AI lab that wouldn't say yes to autonomous kill decisions or warrantless mass surveillance, even at the cost of $200M and a Pentagon relationship.

The Courtroom Math: Why Outside Lawyers Think the Designation Collapses

On the surface, the litigation looks split. Judge Rita Lin in the Northern District of California granted Anthropic a preliminary injunction on March 26, 2026, in a 43-page ruling that described the government's actions as 'retaliatory' for Anthropic's autonomous-weapons and surveillance restrictions. Two weeks later, on April 8, a D.C. Circuit panel denied Anthropic's request for a stay, citing 'weighty governmental and public interests' — leaving the designation in force pending full appellate review. That's how Hegseth could still execute the May 1 exclusion despite Lin's ruling.

Lawfare's Alan Rozenshtein, joined by Howard's Michael Endrias, argues the appellate posture is misleading: a stay denial is procedural, not a merits judgment. Their read is that the supply chain authority was statutorily designed for adversarial-state risk, not for punishing a U.S. vendor's contract terms, and that contemporaneous public statements from Trump and Hegseth — the 5:01 p.m. deadline, the agency-wide directive, the immediate no-business-with-Anthropic order — make pretext easy to prove. 'The statute wasn't built for this, the facts don't support it, and the courts will say so,' Rozenshtein writes; Endrias frames it more bluntly as 'designation as political theater: a show of force that will not stick.' If they're right, the May 1 vendor lineup is durable but the legal basis isn't, and the Pentagon's leverage erodes the moment a higher court reaches the merits.

Historical Context

2025-07-01
Anthropic signed a roughly $200M Pentagon contract; Claude became the first frontier model approved for use on classified U.S. government networks, with the Pentagon agreeing to abide by Anthropic's acceptable-use policy.
2025-12-01
Pentagon launched the GenAI.mil internal AI platform with Google Gemini; it reached more than 1.3 million personnel and handled tens of millions of prompts within five months.
2026-02-24
Hegseth gave Amodei a Feb. 27, 5:01 p.m. deadline to allow unrestricted use of Claude 'for all legal purposes'; the meeting ended without agreement.
2026-02-27
Trump directed federal agencies to cease using Anthropic; Hegseth designated Anthropic a supply chain risk.
2026-03-03
DoW formally notified Anthropic of the supply chain risk designation by letter — the first time the label was applied to a U.S. company.
2026-03-09
Anthropic filed parallel lawsuits in N.D. Cal. and the D.C. Circuit seeking injunctive relief against the supply chain designation.
2026-03-26
Judge Rita Lin (N.D. Cal.) granted Anthropic a preliminary injunction in a 43-page ruling describing the government's actions as 'retaliatory' for Anthropic's autonomous-weapons and surveillance restrictions.
2026-04-08
A three-judge D.C. Circuit panel denied Anthropic's request for a stay, citing 'weighty governmental and public interests' and leaving the designation in effect.
2026-04-19
Axios reported the NSA was using Anthropic's Mythos Preview despite the DoD blacklist, complicating the official posture.
2026-05-01
The Department of War announced classified-network AI agreements with OpenAI, Google, Nvidia, Microsoft, AWS, SpaceX, and Reflection (Oracle added later), explicitly excluding Anthropic.

Power Map

Key Players

Department of War / Department of Defense

Awarded the multi-vendor agreements for IL6/IL7 classified networks and continues to enforce the Anthropic supply chain risk designation; sets the rules every defense contractor must follow.

Emil Michael (Pentagon CTO)

Public face of the new AI strategy; framed the multi-vendor approach as risk management against single-vendor reliance while signaling separate evaluation of Anthropic's Mythos model.

Pete Hegseth (Defense Secretary)

Issued the supply chain risk designation and the directive barring any contractor from doing commercial business with Anthropic, effectively forcing defense-tech firms to choose between Claude and the Pentagon.

Anthropic / Dario Amodei

Excluded from the agreements; suing in two federal courts to overturn the designation while offering models to the national-security community at nominal cost — but refusing to drop autonomous-weapons and mass-surveillance restrictions.

OpenAI, Google, Nvidia, Microsoft, AWS, SpaceX, Reflection, Oracle

Approved AI vendors on classified Pentagon networks under the new agreements, consolidating their position in defense AI as Anthropic is locked out.

National Security Agency (NSA)

Reportedly evaluating and using Anthropic's Mythos Preview despite the DoD blacklist, exposing an internal U.S. government contradiction that complicates the official posture.

Federal courts (N.D. Cal. and D.C. Circuit)

Judge Rita Lin granted a preliminary injunction characterizing Pentagon actions as 'retaliatory'; a D.C. Circuit panel then denied Anthropic's stay, leaving the designation in force pending litigation.

Source Articles


Analysts

"Anthropic remains a supply chain risk, but Mythos is a distinct national-security capability the government wants to evaluate; the DoW will avoid single-vendor reliance. Quote: 'The Mythos issue ... is a separate national security moment.'"

Emil Michael
Chief Technology Officer, U.S. Department of War

"The supply chain designation is unlikely to survive judicial review because the underlying statute and facts don't support it, and contemporaneous Trump and Hegseth statements expose the action as pretextual. Quote: 'The statute wasn't built for this, the facts don't support it, and the courts will say so.'"

Alan Z. Rozenshtein
Associate Professor of Law, University of Minnesota; Research Director, Lawfare

"The designation reads as political theater rather than a defensible national-security action. Quote: 'This is designation as political theater: a show of force that will not stick.'"

Michael Endrias
J.D. candidate, Howard University School of Law (co-author, Lawfare analysis)

"The company will continue serving the national-security community within its acceptable-use policy — including offering models at nominal cost — but will not relax its prohibitions on fully autonomous weapons and mass domestic surveillance. Quote: 'Anthropic will provide our models to the Department of War and national security community, at nominal cost.'"

Anthropic (corporate statement)
AI developer, party to the dispute
The Crowd

"Anthropic rejects latest Pentagon offer: 'We cannot in good conscience accede to their request'"

@u/drippymoudy47071

"Pentagon moves to designate Anthropic as a supply-chain risk"

@u/Logical_Welder346711868

"Scoop: Pentagon takes first step toward blacklisting Anthropic"

@u/Brilliant_Version34410768
Broadcast
Pentagon taps 7 tech companies for classified AI, shuns Anthropic

US strikes deal with tech giants to deepen AI military ties | DW News

The Hidden Cost of OpenAI's Pentagon Deal? Trust.