Wall Street Journal: After Trump's Anthropic ban, the U.S. and Israel still relied on Claude for airstrikes on Iran

CryptoCity

The Wall Street Journal reports that despite Trump's ban, the U.S. military still used Claude during airstrikes on Iran. After refusing to allow unrestricted military use of its models, Anthropic lost a $200 million contract and was designated a "supply chain risk." The Pentagon has since shifted to working with OpenAI.

Did the U.S. and Israel still rely on Claude after Trump’s ban?

The U.S. and Israel recently conducted airstrikes on Iran, and President Trump's order for the government to stop using Anthropic's Claude AI drew international attention. However, according to the Wall Street Journal, just hours after the ban took effect, the U.S. military used Claude during an airstrike on Iran.

Sources reveal that last Friday, the Trump administration instructed federal agencies to cease cooperation with the company and asked the Department of Defense to consider it a potential security risk. But military commands, including CENTCOM, still used Anthropic’s Claude AI models for operational support, assisting with intelligence analysis, target identification, and battlefield simulations.

The U.S. government's ban on Claude stems from a breakdown in contract negotiations: Anthropic refused to let the government override its safety protocols, which would have permitted defense officials to use its AI for any legal military purpose without restriction.

Why did Anthropic clash with the Pentagon?

Anthropic, along with several other major AI companies, had previously secured a multi-year Pentagon contract worth up to $200 million. Through partnerships with Palantir and Amazon Web Services, Claude was authorized for use in classified intelligence and operational workflows.

The Wall Street Journal notes that Claude was involved in earlier military operations, including a January mission in Venezuela that led to the arrest of President Nicolás Maduro, who was forcibly taken to the U.S. and has maintained his innocence at trial.

  • Related report: Maduro claims innocence at U.S. trial! UN holds emergency meeting on the Venezuela incident; see each country's stance

However, tensions escalated when U.S. Defense Secretary Pete Hegseth demanded that Anthropic allow unrestricted military use of Claude. CEO Dario Amodei refused, stating that certain applications are moral red lines the company will never cross, even at the cost of government contracts.

As a result, the Pentagon began seeking alternative vendors and reached an agreement with OpenAI to deploy ChatGPT models on classified military networks.

OpenAI Takes Over Military Contracts, Sparks Questions

After signing a deal with the U.S. military, OpenAI faced public backlash. Sreemoy Talukdar of Firstpost commented that Anthropic had previously declared it would not violate its core principles on domestic mass surveillance and autonomous weapons systems, which led the war department under Trump to halt cooperation.

But now OpenAI CEO Sam Altman claims the war department has agreed to the same security principles, sparking debate over how the two companies' contract terms differ.

Image source: X

Anthropic Becomes First U.S. Company Listed as a Supply Chain Risk

Anthropic is currently embroiled in a dispute with the White House after refusing to allow unrestricted military use of Claude. Defense Secretary Pete Hegseth publicly designated Anthropic as a “supply chain risk.”

This makes Anthropic the first U.S. company openly labeled as a “supply chain risk,” a designation usually reserved for companies with direct ties to foreign adversaries.

Under the designation, the government can require all contractors working with the military to prove that their work does not involve Anthropic's products. In response, Anthropic plans to legally challenge the supply chain risk designation, stating:

“This is neither legally justified nor safe for any U.S. company negotiating with the government. It sets a dangerous precedent, and no matter how much the Department of Defense intimidates or punishes us, it won’t change our stance against mass domestic surveillance or fully autonomous weapons.”

Further reading:
National Security vs. Ethics: Anthropic refuses to remove Claude’s safety guardrails, clashes with U.S. Department of Defense
