The week ending February 28, 2026 will likely be studied in business schools for a generation. Within a single Friday afternoon, President Donald Trump banned a major American AI company from all federal contracts, the Pentagon designated that same company a national security risk on par with foreign adversaries, and its biggest competitor swept in to seal the deal. By Sunday morning, Claude — Anthropic’s flagship AI model — had leapfrogged ChatGPT on Apple’s App Store charts, driven there by a wave of consumers rallying behind the underdog. It was, by any measure, an extraordinary few days.
How It All Fell Apart: Anthropic and the Pentagon’s Breaking Point
The roots of the fallout stretch back months. Anthropic, the San Francisco-based AI safety company founded by Dario Amodei and former members of OpenAI, had been in negotiations with the Department of War over a contract to deploy its Claude AI models on classified military networks. The company had been awarded a $200 million Pentagon contract last July — a sign, at the time, of a promising partnership.
But the negotiations had been grinding toward a wall. Anthropic’s position was clear and consistent: it would support all lawful national security uses of its technology except two — mass domestic surveillance of American citizens, and fully autonomous lethal weapons systems. These weren’t abstract concerns. Anthropic argued that without explicit contractual prohibitions, its technology could be weaponised in ways that no current law was designed to prevent.
The Pentagon’s counter-position was equally firm: the military needed the ability to use AI for “all lawful purposes,” full stop. Officials maintained that existing law already prohibited mass surveillance and autonomous weapons, and that demanding additional contractual carve-outs amounted to a private company trying to override military command authority.
The deadline was set for 5:01 p.m. on Friday, February 27. Anthropic did not sign.
What followed was swift and severe. Defense Secretary Pete Hegseth posted a statement declaring Anthropic a “supply-chain risk to National Security” — a designation historically reserved for foreign adversaries like Chinese state-backed firms. Every defense contractor in the United States was now barred from doing business with Anthropic. Minutes later, President Trump took to Truth Social with characteristic force: “I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again!”
He wasn’t finished. In a follow-up post, Trump warned: “Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.”
The language from senior Pentagon officials was no less charged. Hegseth accused Anthropic of trying to “strong-arm the United States military into submission,” calling the company “sanctimonious” and “arrogant.” A senior Pentagon official told Axios that the conflict had become personal at the top: “The problem with Dario is, with him, it’s ideological. We know who we’re dealing with.” Trump was even more blunt in a Truth Social tirade, calling Anthropic an “out-of-control, Radical Left AI company run by people who have no idea what the real World is all about.”
Anthropic Fights Back: “We Are Patriotic Americans”
Anthropic’s response was defiant but measured. In an exclusive interview with CBS News, CEO Dario Amodei pushed back hard against the characterisation of his company as ideologically driven or obstructionist.
“We are patriotic Americans,” Amodei said. “Everything we have done has been for the sake of this country, for the sake of supporting US national security. Our leaning forward in deploying our models with the military was done because we believe in this country.”
Amodei called the government’s actions “retaliatory and punitive,” and Anthropic issued a formal statement pledging to challenge the supply-chain risk designation in court. The company called the move “legally unsound” and warned it would set a “dangerous precedent for any American company that negotiates with the government.”
Anthropic also disputed the claim that it had received fair notice of the final terms. “We have not yet received direct communication from the Department of War or the White House on the status of our negotiations,” the company said, adding that the new contract language proposed by the Pentagon would allow its safety guardrails to be “disregarded at will.”
Earlier that Friday, ahead of the deadline, senior members of the Senate Armed Services Committee had privately urged both Hegseth and Amodei to extend negotiations and work with Congress to find a resolution. That plea went unanswered.
OpenAI Moves In — and Immediately Draws Fire
Within hours of the ban on Anthropic, OpenAI CEO Sam Altman posted on X that his company had “reached an agreement with the Department of War to deploy our models in their classified network.” The announcement was swift, the timing was impossible to ignore, and the backlash was immediate.
The central question that critics asked — loudly — was this: if OpenAI maintained the same safety red lines as Anthropic, how had it managed to reach an agreement the same day Anthropic was ejected? Altman’s own blog post stated that OpenAI’s deal prohibited domestic mass surveillance, autonomous weapons systems, and “high-stakes automated decisions.” Those positions sounded remarkably similar to what Anthropic had been insisting upon for months.
In a statement posted on X, OpenAI said: “Yesterday we reached an agreement with the Department of War for deploying advanced AI systems in classified environments, which we requested they make available to all AI companies.”
Altman, to his credit, did not hide from the awkwardness. In an “ask me anything” session on X over the weekend, he acknowledged the optics bluntly: the deal had been “definitely rushed,” and he admitted “the optics don’t look good.”
So why do it? “We really wanted to de-escalate things, and we thought the deal on offer was good,” Altman wrote. “If we are right and this does lead to a de-escalation between the DoW and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterised as rushed and uncareful.”
In a memo to his own team — the substance of which became public — Altman framed it as a matter of principle over optics: “This is a case where it’s important to me that we do the right thing, not the easy thing that looks strong but is disingenuous. But I realize it may not ‘look good’ for us in the short term, and that there is a lot of nuance and context.”
Critics were not entirely convinced. Tech publication Techdirt argued that OpenAI’s contract language, which ties data collection to Executive Order 12333, “absolutely does allow for domestic surveillance” because the order governs how the NSA legally collects communications — including those of American citizens — by intercepting data outside US borders. OpenAI disputed the characterisation, with a spokesperson stating: “Publicly available information can only be used by the military for defense and intelligence purposes if it’s tied to authorised national-security missions.”
Altman also took the unusual step of defending Anthropic directly, calling the supply-chain risk designation an “extremely scary precedent” for any American company that negotiates in good faith with the government.
The Fallout: Market Signals and an Industry on Edge
The market didn’t wait for legal arguments to play out. By Saturday, Anthropic’s Claude had overtaken OpenAI’s ChatGPT in Apple’s App Store rankings — a striking consumer signal that the public’s sympathies may lie with the company that held the line, not the one that closed the deal.
Within the AI industry, employees at both OpenAI and Google were reportedly calling on their leadership to speak out against the Anthropic ban. The episode has opened a broader debate about how AI companies should engage with government contracts, and what happens when safety principles collide with political pressure.
The core tension — between a government that wants AI tools without conditions and companies that believe some conditions are non-negotiable — is unlikely to be resolved by any single contract. As Altman himself observed on X: “I think Anthropic may have wanted more operational control than we did. We and the DoW got comfortable with the contractual language, but I can understand other people would have a different opinion here.”
Anthropic, for its part, is heading to court. The company has vowed to challenge the supply-chain risk designation, arguing that Hegseth lacks the legal authority to extend the ban beyond Pentagon contractors. It has framed the fight not just as a matter of corporate interest, but of constitutional principle: “We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.”
What is clear is that the rules of engagement between Silicon Valley and Washington have shifted, possibly permanently. The question of who gets to decide how AI is used in war — a technology company’s ethics board, or a military chain of command — is no longer hypothetical. It is now a live legal and political battle, playing out in real time.
For Anthropic, the next move is the courtroom. For OpenAI, it is proving that its rushed deal holds up to scrutiny. And for the rest of the AI industry, the lesson may be simpler: in Washington right now, the politics of AI are moving faster than anyone’s safety guidelines.

