It began with a $200 million contract and a handshake. It ended with presidential bans, federal lawsuits, and a Silicon Valley civil war that nobody saw coming. The explosive collapse of Anthropic’s partnership with the U.S. Department of Defense is more than a corporate drama—it is a mirror held up to an entire industry that, within the span of just a few years, has quietly dismantled the ethical guardrails it once built with great fanfare.
To understand how we got here, you have to go back to 2018.

The Age of Conscience: When Tech Said No to War
In June 2018, more than 4,000 Google employees signed a petition. Their demand was unambiguous: “a clear policy stating that neither Google nor its contractors will ever build warfare technology.” The petition was directed at Project Maven, a Pentagon initiative that used machine learning to analyze drone footage. The protest worked. Google declined to renew its Maven contract and codified its position into a set of published AI Principles—a document that explicitly ruled out developing weapons or enabling mass surveillance.
It was, at the time, a remarkable act of corporate conscience. Tech workers were not just writing code; they were drawing ethical lines. And their employers were listening.
OpenAI, too, maintained a strict prohibition on military and warfare applications of its models. The company presented itself as a builder of beneficial AI, not a defense contractor. These were not just marketing positions—they were written commitments, baked into usage policies and corporate charters.
That was then.
The Great Reversal
The first domino fell quietly. In January 2024, OpenAI removed its explicit ban on military and warfare uses from its usage policy. The revision went largely unnoticed at the time, buried in fine print while the world was dazzled by the generative AI gold rush. Then, in February 2025, Google made a far more dramatic move: it stripped from its published AI Principles the promise not to develop AI for weapons or mass surveillance—the very commitment that had defined its post-Maven identity. The company framed the decision around geopolitical competition, arguing that democratic nations needed to lead in AI development and that working with allied governments was part of that mission.
In practice, what it meant was this: seven years of ethical positioning, gone in a blog post.
Microsoft, meanwhile, acknowledged in May 2025 that it had been selling advanced AI and cloud computing services to the Israeli military since the start of the Gaza conflict. Palantir, Anduril, and Shield AI had never pretended otherwise—they were proudly defense-first from the start. By the summer of 2025, the lines between Silicon Valley and the Pentagon had not merely blurred; they had effectively dissolved.
This was the landscape into which Anthropic walked when it signed its own $200 million Pentagon contract in July 2025.
Anthropic’s Gamble and the Contract That Detonated
The deal seemed promising at first. Anthropic’s Claude became the first AI model cleared for deployment in the military’s classified networks, a distinction that spoke to its technical sophistication. Executives were bullish. The company described the award as opening “a new chapter.” Behind the scenes, it was quietly building a public sector business projected to reach billions in annual revenue.
But Anthropic was not like the others. Founded by former OpenAI researchers who left over concerns about safety, the company had built its entire brand around responsible AI development. Its usage guidelines carried strict prohibitions against facilitating violence, designing weapons, and enabling mass surveillance. These were not public-relations flourishes — they were the company’s founding reason for being.
When the Pentagon, now with new leadership under Defense Secretary Pete Hegseth, began reviewing its AI contracts, it discovered something it did not like. Anthropic had embedded two hard limits into its agreement. Claude could not be used for fully autonomous weapons systems—those capable of making lethal decisions without a human in the loop. And it could not be used for domestic mass surveillance of American citizens.
The military called these restrictions unacceptable.
The Ultimatum
On February 24, 2026, Defense Secretary Pete Hegseth issued Anthropic an ultimatum: agree that Claude could be used “for all lawful purposes”—a deliberately expansive phrase—or forfeit the contract. The deadline was set for 5:01 p.m. on February 27.
Anthropic’s CEO Dario Amodei did not blink. In a public statement, he said the company’s two narrow exceptions had not blocked “a single government mission to date” and argued that fully autonomous AI weapons were simply not reliable enough to be trusted with life-or-death decisions. “Frontier AI systems are simply not reliable enough to power fully autonomous weapons,” he wrote, adding that such systems “cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day.”
The Pentagon’s chief technology officer, Emil Michael, fired back, calling Amodei a “liar” with a “God complex” in a post on X. When the deadline passed without agreement, President Trump ordered all federal agencies to cease using Anthropic’s products. Hegseth designated Anthropic—an American company founded in San Francisco—a “supply chain risk,” a label historically reserved for adversarial foreign entities like Chinese state-linked firms.
The penalty was severe. Defense vendors and contractors were required to certify that they did not use Anthropic’s models, and across its entire business the government’s actions threatened to cost Anthropic billions of dollars in 2026 revenue. On March 9, Anthropic filed two federal lawsuits, calling the designation “unprecedented and unlawful.”
The OpenAI Pivot — and Why It Raised More Questions Than It Answered
Within hours of Trump’s ban on Anthropic, OpenAI stepped into the void. CEO Sam Altman announced that his company had struck a deal with the Defense Department to provide its models for classified networks. He insisted OpenAI had negotiated safeguards: no mass domestic surveillance, no autonomous weapons, and human oversight required for use-of-force decisions.
The speed of the announcement raised eyebrows immediately. Even OpenAI later conceded the rollout looked “sloppy and opportunistic.” Altman told staff in an all-hands meeting that Defense Secretary Hegseth would hold “ultimate authority” over how the Pentagon used the contract—a formulation that satisfied few critics.
The Electronic Frontier Foundation called OpenAI’s language “weasel words.” Former military officials told journalists they did not believe the surveillance restrictions were real. Brad Carson, a former undersecretary of the Army, said bluntly: “I don’t believe that provision is in the contract.”
The contrast with Anthropic was stark. One company had insisted that its limits be binding and written into the contract, and had forfeited the deal rather than remove them. The other had reached an agreement the same afternoon—and was asking the public to take its word for it.
The Workers Revolt — Across Company Lines
What followed was remarkable. Almost a thousand employees from Google and OpenAI—rival companies that compete fiercely in the market—signed an open letter declaring, “We Will Not Be Divided.” The letter urged their employers to resist Pentagon pressure to abandon ethical limits on AI. More than 30 employees of OpenAI and Google DeepMind, including Google Chief Scientist Jeff Dean, filed an amicus brief in their personal capacities in support of Anthropic’s lawsuit.
Inside Google, over 100 AI researchers sent a letter to Chief Scientist Jeff Dean asking the company to prohibit military use of its Gemini model for domestic surveillance or autonomous lethal targeting. OpenAI’s hardware leader Caitlin Kalinowski, who had overseen AR glasses development at Meta before joining OpenAI, resigned over the company’s Pentagon deal, citing ethical concerns.
The open letter’s framing was pointed. “They’re trying to divide each company with fear that the other will give in,” it read. “That strategy only works if none of us know where the others stand.”
In an unexpected coda to the drama, Anthropic’s Claude app surged to the number one spot on the Apple App Store the day after the Pentagon announced it would terminate the contract—overtaking OpenAI’s ChatGPT for the first time. Whatever the ban meant for Anthropic’s bottom line, many users appeared to be voting with their downloads.
The Deeper Question: Who Controls the Machine?
At its core, this dispute is not really about one contract or one company. It is about a question that the AI industry has been avoiding for years: when a technology company sells its product to a government, does it retain any moral or contractual authority over how that product is used?
The traditional answer in defense contracting is no. You sell the shovel; you don’t control where the hole is dug. But AI is not a shovel. It is a system that reasons, advises, and — in the Pentagon’s envisioned future — potentially decides. The company that built it understands its capabilities and failure modes better than any government bureaucrat. And those failure modes, in a military context, are measured in human lives.
Anthropic’s position — that frontier AI is not yet reliable enough to make lethal decisions autonomously — is not a radical one. It is shared, at least in public, by OpenAI and Google. The difference is that Anthropic demanded that it be written into the contract, while its competitors settled for assurances.
Foreign Policy magazine noted that at home, the stakes are not abstract. ICE agents have reportedly used AI-powered facial recognition to track and intercept immigration suspects. The government’s track record as a custodian of AI surveillance tools gives real-world texture to what had seemed like hypothetical concerns.
What Comes Next
Talks between Anthropic and the Pentagon reportedly reopened in early March, even as the lawsuits were being filed. The situation remains fluid. Legal experts believe Anthropic has a strong case — Congress designed the supply chain risk designation for foreign adversaries, not for American firms that balk at contract terms.
Congress, too, has begun to stir. Several lawmakers have urged the Pentagon to reconsider, warning that blacklisting one of America’s most advanced AI companies hands a gift to China and other competitors. The amicus brief filed by rival companies’ employees made precisely this argument: that punishing Anthropic would chill innovation and damage America’s standing in the global AI race.
But the deeper disruption may be cultural. For a decade, the tech industry tried to have it both ways — celebrating its role in national security while maintaining a carefully curated image of benevolent neutrality. The Anthropic-Pentagon battle has made that pose impossible. Companies will now have to choose, with unusual clarity, what they stand for.
The workers seem to understand this better than the executives. And in an industry built on the premise that intelligence can be aligned with human values, that may be the most consequential data point of all.
