Anthropic CEO fires back: OpenAI's Pentagon contracts are all lies, and Altman is posing as an ambassador of peace

動區BlockTempo

A leaked internal memo from Anthropic chief executive Dario Amodei directly accuses OpenAI CEO Sam Altman's claims of being "a complete lie," as the two AI giants clash over military contracts.
(Background: Is Sam Altman despicable? After voicing support for Anthropic, which was recently blacklisted by the Pentagon, he now calls on OpenAI to secure US Department of Defense contracts)
(Additional context: The Wall Street Journal reports: Trump's targeting of Iran's Khamenei relied on Claude AI for positioning, as OpenAI takes full control of Pentagon systems)

Table of Contents

  • A final ultimatum ignites the conflict
  • OpenAI’s “pragmatic” approach
  • The real battlefield revealed by the memo
  • Immediate reactions from users and the market

A leaked internal memo from Anthropic CEO Dario Amodei to staff directly criticizes competitor OpenAI CEO Sam Altman, calling his statements "a complete lie" and dismissing the latest Pentagon deal as "security theater."

The two most influential AI companies globally are tearing into each other over a matter that could determine the future direction of AI for the next decade.

A final ultimatum ignites the conflict

The trigger for this clash stems from Anthropic’s original $200 million military contract. Through a partnership with Palantir, Anthropic’s Claude AI has been deployed on classified military networks.

In late February, however, the situation rapidly escalated. The Pentagon issued a final ultimatum to Anthropic: remove all AI usage restrictions and allow unrestricted use for "any lawful purpose" by February 27, or face contract termination and blacklisting.

Amodei publicly refused, stating he could not "in good conscience" accept these terms, and drew two red lines:

  • First, ban autonomous weapons systems: AI must not make final targeting decisions on the battlefield
  • Second, ban large-scale domestic surveillance: no development of mass surveillance tools for US citizens

Further reading: Trump aims to completely ban Anthropic! Refuses to modify Claude’s “killing” restrictions

The retaliation was swift and fierce. Hours after the refusal, the Trump administration listed the company as a "supply chain risk" (a label usually reserved for foreign adversaries), effectively barring it from all federal contracts, and branded it "radical leftist, woke, and a national security threat."

OpenAI’s “pragmatic” approach

Just hours after Anthropic’s blacklisting on February 28, Altman announced that OpenAI had reached an agreement with the Department of Defense. In an official blog post, OpenAI stated the contract includes the same “red line” protections: restrictions on autonomous weapons, domestic mass surveillance, and key automation decisions.

But the devil is in the details. OpenAI's contract permits "all lawful purposes," with none of Anthropic's explicit bans. OpenAI explained: "In our interactions, the Department of Defense clearly stated that large-scale domestic surveillance is illegal and that it has no plans for it."

Critics immediately pointed out the problem: laws change. What is illegal today may become permissible tomorrow, making the contract's "lawful purpose" clause inherently fragile.

The real battlefield revealed by the memo

In the leaked memo, Amodei offers a blunt assessment of the public relations war:

I believe their attempts to manipulate public opinion are failing; most people see OpenAI's dealings with the Department of Defense as suspect, and view us as the heroes.

He also directly criticizes Altman’s motives:

The main reason they accepted and we refused is that they care about appeasing employees, while we truly care about preventing misuse.

According to TechCrunch, Amodei further accuses Altman of “posing as a peacemaker and dealmaker.” Facing overwhelming criticism, Altman admitted at an all-hands meeting that this decision could have severe brand consequences, but defended it as a complex yet correct choice for national security.

Immediate reactions from users and the market

As the controversy unfolds, users are voting with their downloads: OpenAI's ChatGPT downloads have surged recently, while downloads of Anthropic's Claude app have also risen significantly.

Anthropic chose to refuse and bear the consequences, losing federal contracts and government goodwill; OpenAI chose to cooperate within limits, risking user trust and brand reputation. Both choices are defensible, and both carry costs.

What is truly concerning is the deeper issue this dispute exposes: in an era of rapidly militarizing AI, the gap between "legal" and "right" is widening.
