OpenAI Just Told the Pentagon What It Wanted to Hear
I'm team Anthropic. Worth saying upfront.
Claude is the only AI model inside the US military's classified networks. Not one of several — the only one. Intelligence analysis, operational planning, cyber operations, modeling and simulation. Embedded so deep that a senior Pentagon official admitted pulling it out would be "an enormous pain in the ass."
And then Anthropic told them no.
The Pentagon wanted a contract clause permitting all lawful uses. Sounds reasonable until you understand what's legal: mass analysis of commercially available location data, fitness tracker signals, phone records, all combined at scale to profile American civilians. Not illegal. Definitely surveillance. Anthropic drew a line there, and on autonomous weapons, and wanted those limits written into the contract. The DoD said no. Talks collapsed in February.
Trump declared Anthropic a supply chain risk and ordered every federal agency to stop using their technology. That designation is normally reserved for foreign adversaries. For a company that won't let the military spy on its own citizens without restriction.
Hours later, OpenAI had a deal. Same terms, the Pentagon says, but analysts who read the actual contract found cloud-only deployment and legal baselines that weren't in Anthropic's offer. Make of that what you will.
Here's what I keep thinking about. Anthropic knew exactly where they stood. They have the best reasoning model right now, and it's not close: Claude Opus 4.6 hits 68.8% on ARC-AGI-2 while GPT sits at 52.9%. The government knew it too. They wanted Claude specifically. They threatened to "make them pay" specifically. And even after everything, the Pentagon's replacement choice was Grok, which officials themselves admit isn't a like-for-like swap. You don't issue a threat like that against a vendor you could walk away from.
So Anthropic had all the leverage. They had the model no one else can replicate, the only presence in classified systems, and a revenue base big enough that $200M is noise. They said no anyway. And now they're paying for it.
That's not martyrdom — that's a company that understands what it's holding. When you believe you might be building something with no clear ceiling on capability, and a government wants to run it on its own citizens with no written limits, you either hold that line or you don't. They held it.
Altman's playbook is older. Give people the product free. Build the habit at scale. Then the habit has a price, and someone pays it — advertisers, then governments, then whoever's next. The user doesn't need to understand the chain. The user just needs to keep using it.
Most users don't want to know. That's fine. The economics work either way.
But the Pentagon just ran into the one AI lab that decided the contract mattered less than the clause. That they'd rather be labeled a national security threat than drop a line about surveillance. And whatever you think about the AI industry in general, that specific decision, under that specific pressure — that's unusual.