Just as business owners weigh the idea of "using AI to replace human labor and cut costs," Anthropic has changed the rules of the game. The AI giant recently revised the billing structure of its Claude Enterprise offering, splitting usage of Claude, Claude Code, and Cowork out of the previous flat $40-per-month subscription and charging separately for the actual number of tokens consumed. Suddenly, AI employees no longer look as cheap as outsiders claim.
The era of flat-rate pricing is over: Claude Enterprise billing has been revised to pay for what you use
The Information reports that Anthropic's updated enterprise documentation states: "The monthly seat fee covers only platform access and does not include any usage; all usage is billed separately at standard API rates." What enterprises once bought as "unlimited access" has become pay-per-use.
Under the old plan, enterprise accounts paid roughly $40 to $200 per month in subscription fees and received a 10% to 15% discount on API rates. The new plan lowers the subscription fee to $20 per month, but it cancels all API discounts and requires enterprises to commit to, and prepay, an estimated monthly token volume. The committed amount is owed whether actual usage runs over or under, and committing to a higher volume earns no lower unit price.
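As a back-of-the-envelope illustration of how the shift changes a monthly bill, here is a minimal sketch. The usage volume and commitment figures below are hypothetical, not quoted from Anthropic's price list; only the seat fees and the 15% discount come from the reporting above.

```python
# Rough comparison of the two billing structures.
# Token volumes and the commitment figure are hypothetical.

def old_plan_cost(seat_fee, usage_at_list_price, api_discount):
    """Old model: higher seat fee, usage billed at a discounted API rate."""
    return seat_fee + usage_at_list_price * (1 - api_discount)

def new_plan_cost(seat_fee, committed_usage, actual_usage):
    """New model: lower seat fee, prepaid commitment at full list price.
    The committed amount is owed even if actual usage falls short."""
    return seat_fee + max(committed_usage, actual_usage)

usage = 1_000  # $1,000 of usage at list API rates (hypothetical)
old = old_plan_cost(seat_fee=40, usage_at_list_price=usage, api_discount=0.15)
new = new_plan_cost(seat_fee=20, committed_usage=1_200, actual_usage=usage)

print(old)  # 890.0 -> 40 + 1000 * 0.85
print(new)  # 1220  -> 20 + max(1200, 1000): under-use still pays the commitment
```

Under these assumed numbers, the lower seat fee is dwarfed by the lost discount and the unused portion of the commitment, which is exactly the risk transfer the new structure creates.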
For Anthropic, this structure means predictable annual recurring revenue; for enterprises, it shifts usage costs and risk onto their side of the ledger.
“Scarce compute resources” is the real trigger behind the revised pricing
Anthropic calls this adjustment "product optimization," but the underlying driver is persistently high compute costs. Even though Anthropic's annualized revenue jumped from $9 billion to $30 billion in just four months, what users are getting is not a discount but a restructured revenue model.
At the core is how AI agents consume resources. Traditional chat usage is like taking small sips, while an agent workflow that chains multi-step tasks, repeats executions, and even coordinates multiple agents gulps tokens down.
The supply side is tight as well. Blackwell GPU leasing prices rose 48% within two months; CoreWeave has raised prices by more than 20% since late last year; and forecasts from U.S. banks predict the compute shortage will persist through 2029. Revenue from flat-rate pricing can no longer carry that burden for Anthropic.
Unstable service is the clearest warning light for enterprise customers
Service stability is another major problem. Retool founder David Hsu told The Wall Street Journal that even though Claude Opus 4.6 performs better than OpenAI's models, he ultimately moved his workflows to OpenAI. The reason: Claude's service goes down so frequently that he often could not deliver code on time.
In the 90 days ending April 8 of this year, the Anthropic API achieved only 98.95% uptime, far below the industry standard of 99.99%. Hsu's decision illustrates one thing: when forced to choose between model capability and service reliability, enterprises pick reliability.
The real cost of AI employees is far more complex than what’s on the bill
The traditional "flat monthly subscription" model of AI pricing is now gone, and total cost is recalculated from actual token usage. Negotiating usage discounts or flexibility clauses into contracts, and actively controlling spend through prompt optimization, batch processing, and caching strategies, has become a new discipline for enterprises pursuing AI adoption and transformation.
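To see why those cost-control levers matter, here is a minimal sketch of a monthly input-token bill with and without them. The discount factors are assumptions for illustration, not any provider's published rates (though several providers do discount cached prompt reads and batch submissions):

```python
# Hypothetical monthly input-token bill, with and without cost controls.
# Discount factors below are assumptions, not published provider rates.

def monthly_input_cost(tokens, price_per_mtok,
                       cache_hit_rate=0.0, cache_read_discount=0.9,
                       batch_share=0.0, batch_discount=0.5):
    cached = tokens * cache_hit_rate * (1 - cache_read_discount)
    live = tokens * (1 - cache_hit_rate)
    batched = live * batch_share * (1 - batch_discount)
    interactive = live * (1 - batch_share)
    return (cached + batched + interactive) / 1e6 * price_per_mtok

tokens = 500e6  # 500M input tokens per month (hypothetical)
print(monthly_input_cost(tokens, price_per_mtok=3))  # 1500.0, no controls
print(monthly_input_cost(tokens, 3, cache_hit_rate=0.6, batch_share=0.5))
# roughly 540, about 64% cheaper under these assumptions
```

The exact savings depend entirely on the workload's cache hit rate and how much of it tolerates batch latency, which is why this optimization work falls on the enterprise, not the vendor.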
A few days ago, OpenAI likewise announced that Codex would switch to token-based billing; GitHub tightened Copilot usage limits on April 10; and Windsurf replaced its points system with daily quotas. The entire AI industry is signaling the end of the flat-rate era at once.
Before companies calculate how much labor cost "deploying AI" can save, they may first need to test whether it can deliver consistent, high-quality output within a constrained budget.
This article, "Anthropic Claude Enterprise shifts to usage-based billing first: do AI employees really save money?", first appeared on Chain News ABMedia.