Anthropic, a US artificial intelligence company, is locked in an ongoing conflict with the Department of Defense (DoD). Dario Amodei co-founded Anthropic in 2021 after leaving OpenAI, believing that OpenAI's rapid commercial growth was compromising AI safety.
Anthropic's flagship model, Claude, excels at coding and is built around responsible development, with strong ethical constraints embedded to prevent misuse.
In 2024, Anthropic achieved a milestone as the first AI company permitted to deploy its model, Claude, on classified US government networks, alongside other government collaborations. The firm also forwent substantial revenue by refusing business from Chinese military-linked entities, and it advocated strict export controls on advanced GPUs to preserve American AI leadership.
Reflecting its commitment to responsible AI, Anthropic's contracts with the DoD contained two non-negotiable "red lines": its AI may not be used for 1) mass surveillance of US citizens or 2) fully autonomous weapons.
In early 2026, new Defense Secretary Pete Hegseth issued a directive requiring all DoD contracts to permit "all lawful purposes" without additional corporate restrictions. Hegseth then delivered an ultimatum to Amodei: remove the red lines by February 27, or face contract termination and designation as a supply chain risk. Amodei refused, reaffirming Anthropic's contributions to the US government while standing by its principles.
On February 27, President Donald Trump ordered all federal agencies to stop using Anthropic technology, and the DoD formally labeled Anthropic a "supply chain risk," barring military contractors from working with the company. Two hours later, OpenAI's Sam Altman announced a DoD agreement to deploy OpenAI's models on classified networks, incorporating similar safety principles against domestic surveillance and autonomous weapons. The key difference: Anthropic's "Constitutional AI" approach hardwires its rules into model training, making the safeguards effectively unremovable, whereas OpenAI's measures are more flexible, deferring determinations of violations to the DoD.
As of writing, Palantir remains silent, even though the dispute was partly triggered by its platform: Anthropic deploys Claude on classified networks through Palantir's systems. Reports say Claude was used in the operation to capture Venezuela's Maduro, and Anthropic asked Palantir whether any red lines had been violated. This angered the DoD, which saw the inquiry as interference. Now, with Anthropic blacklisted, Palantir must switch to other AI providers such as OpenAI or xAI.
Allen Au is a tech startup founder, AI architect, and YouTuber.