How should we interpret the Trump administration’s intentions in the dispute between Anthropic and the Department of Defense?

Title: Clawed
Author: Dean W. Ball
Translation: Peggy, BlockBeats


Repost: Mars Finance

Editor’s Note:

When personal experiences of life and death intertwine with metaphors of national rise and fall, political narratives cease to be mere abstract discussions of institutions and become profound emotional realizations. This article uses the passing of a father and the birth of a child as a starting point, extending the private insight that “death is a process” to reflect on the current state of the American republic. In the author’s view, the conflicts between AI companies and the government are not isolated incidents but a reflection of long-term institutional fragility and power imbalance.

The article focuses on the controversy between Anthropic and the U.S. defense system, discussing everything from contract terms and policy boundaries to the threat of “supply chain risks.” The discussion has long gone beyond a simple game of corporate versus government; it touches on a more fundamental question: in the era of cutting-edge AI, who should hold control? Private companies, administrative authorities, or some yet-undeveloped public mechanism? As national security becomes a justification for power expansion, and policy tools increasingly rely on ad hoc and coercive arrangements, is the sense of rules and predictability in the republic weakening?

Technological leaps and institutional changes may occur simultaneously, and their intersection often influences the course of an era. The author questions government actions while holding hope for future institutional rebuilding, and reminds readers not to equate “democratic control” simply with “government control.” Against the backdrop of rapid AI evolution and ongoing governance restructuring, this debate may only be the beginning. How to find a new balance between security, efficiency, and freedom will be a key long-term issue.

Below is the original text:

More than ten years ago, I sat beside my father as he passed away. Six months earlier, he was a vigorous man, even stronger than I am today, riding his bike faster and more resilient than most twenty-somethings. Then one day, he underwent heart surgery, and from that moment, he was no longer himself. It was as if his soul had been pulled out, the light in his eyes vanished. Occasionally, he would regain a bit of spirit—the familiar father briefly returning to his aging body—but those moments grew fewer. His thoughts became fragmented, his voice softer.

During those six months, he repeatedly went in and out of hospitals. On his final day, he was transferred to hospice care. That day, he hardly spoke. In his last few hours, he was almost gone from this world. Lying in bed, his breathing slowed, his voice grew faint. Eventually, it was almost inaudible, leaving only an unsettling “death rattle”—a result of his body’s inability to swallow. A body that cannot swallow cannot eat or drink anymore; in a sense, it had given up struggling.

My mother and I looked at each other, both understanding the obvious but not voicing it, nor asking the questions in our hearts. We knew time was running out. Anything said or asked at this point would be useless; only adding pain.

I had private conversations with him more than once. Holding his hand, I tried to say goodbye. My mother returned to the room, and the three of us held hands. Eventually, a machine emitted a long beep, signaling that he had crossed a line—an invisible boundary to those in the room. Later that afternoon, on December 26, 2014, my father passed away.

Eleven years and a few days later, on December 30, 2025, my son was born. I have now witnessed both death and birth firsthand. What I learned is that neither is a single moment but a process unfolding over time. Birth is a series of awakenings; death is a series of sleepings. It will take my son years to be fully “born,” just as it took my father six months to “leave.” Some people die slowly over decades.

At some point in my life—though I cannot specify when—the familiar American republic began to decline. Like most natural deaths, its causes are complex and intertwined. No single event, crisis, attack, president, party, law, idea, individual, company, technology, mistake, betrayal, failure, misjudgment, or foreign adversary alone caused its demise, though all played a role. I don’t know exactly what stage we are at in this process, but I know we are in the “hospice.” I’ve known this for a while, though sometimes I deny it like any mourner. I prefer not to talk about it, as doing so often only brings pain.

Yet, if I do not acknowledge that we are sitting beside the deathbed, I cannot write with the analytical rigor you expect today. To honestly discuss the development of frontier AI and the future we should build, we must confront the fact that the republic is in its final moments. But there is no machine to give us a final long signal. We can only watch quietly.

In American history, our republic has “died” and “reborn” multiple times. The U.S. has experienced more than one founding. Perhaps we are on the brink of another rebirth, opening a new chapter of self-reinvention. I hope so. But it’s also possible that we lack the virtue and wisdom to sustain a new beginning, and a more realistic view is that we are slowly transitioning into a “post-republic” era of governance. I do not claim to have the answers.

What I am about to describe is a confrontation between an AI company and the U.S. government. I do not want to exaggerate. The kind of “death” I am about to depict has lasted more than half my life. The event I will discuss happened last week, and it might be resolved within days.

I am not saying this incident “caused” the death of the republic, nor that it “opened a new era.” If anything, it makes the ongoing decline more apparent and harder for me to deny. I see last week’s events as a final “death rattle” of the old republic—a body that has given up struggling, emitting a last sound.

As far as I know, it went like this: During the Biden administration, AI company Anthropic reached an agreement with the Department of Defense (now called the “Department of War,” hereafter DoW), allowing its AI system Claude to be used in classified environments. This agreement was expanded in July 2025 under the Trump administration (full disclosure: I was working in the Trump administration at the time but did not participate in this deal). Other language models could be used in non-classified settings, but until recently, classified work—such as intelligence gathering and combat operations—could only use Claude.

The initial agreement was negotiated between Biden’s team and Anthropic. (Notably, several core architects of Biden-era AI policy joined Anthropic immediately after leaving government.) The agreement included two restrictions: first, Claude could not be used for mass surveillance targeting Americans; second, it could not be used to control lethal autonomous weapons, meaning systems capable of identifying, tracking, and killing targets without human involvement. When the Trump administration expanded the deal, it had the opportunity to review these clauses and ultimately accepted them.

Trump officials claimed their change of stance was driven not by a desire for mass surveillance or for deploying lethal autonomous weapons, but by opposition to private companies setting restrictions on how the military may use technology. That shift in attitude has produced policy measures aimed at damaging, or even destroying, Anthropic, arguably one of the fastest-growing companies in the history of capitalism and widely considered a leader in AI today. This, even as the government repeatedly declares AI vital to the nation’s future. But more on that later.

The Trump administration’s position is not entirely unreasonable: private companies dictating how the military may use their technology does sound problematic. In reality, however, thousands of private firms do exactly that. Every technical transaction between the military and private companies is formalized through contracts (hence “defense contractors”), which often include operational restrictions (e.g., “System X shall not be used in country Y,” similar to clauses reportedly found in Elon Musk’s Starlink agreements), technical limitations (e.g., “this aircraft is certified to operate only under specific conditions”), and intellectual-property terms (e.g., “the contractor owns and may reuse the related IP”).

In some ways, Anthropic’s clauses resemble these traditional restrictions. For example, the company is not opposed to lethal autonomous weapons per se but believes current frontier AI systems are not yet capable of autonomously deciding life and death. This is similar to “aircraft certification restrictions.”

But the key difference is that Anthropic’s contractual restrictions are policy-like rather than technical: compare “the aircraft is not certified to fly at a certain altitude” with “you are not allowed to fly at that altitude.” Perhaps the military should not accept such clauses, and perhaps private companies should not set them. Yet the Biden administration accepted them, and the Trump administration initially did too, before reversing course.

This alone shows that such clauses are not some absurd violation of norms. No law says contracts may contain only technical restrictions and never policy restrictions. The contracts are not illegal; at most they may seem unwise in hindsight. Even if you oppose mass surveillance and lethal autonomous weapons, you might think defense contracts are not the best vehicle for achieving those policy goals. Under the republic’s normal rules, new policy is made through legislation.

However, “legislation” in today’s America is increasingly a joke. If you genuinely want a certain outcome, legislation is no longer the primary route. Governance is becoming more informal and ad hoc, executive power is expanding, and policy tools are increasingly mismatched with their stated objectives.

Trump officials expressed two concerns behind their change of stance: first, that Anthropic might withdraw its services at a critical moment; second, that as a subcontractor, Anthropic’s clauses could constrain other military contractors. Beyond that, the government viewed Anthropic as a political adversary (perhaps correctly) and suddenly realized it depended on a company it distrusted.

The rational approach would have been to cancel the contract, publicly explain why, and write rules to prevent similar problems in the future. Instead, the Department of War insisted the contract must permit “all lawful uses” and threatened to designate Anthropic a “supply chain risk,” a label usually reserved for companies controlled by foreign adversaries, such as Huawei. The Secretary of War even threatened to bar all military contractors from having “any business relationship” with Anthropic.

This was close to an announcement of intent to kill a company. Even if the bullets are not lethal, the message is clear: do business on our terms, or your business ends.

This touches a core principle of the American republic: private property. If the military told Google, “hand over global personalized search data, or we will designate you a risk,” it would be fundamentally the same. Private property becomes merely a resource to be requisitioned under the banner of national security.

Such actions will raise the capital costs of the entire AI industry, weaken the international credibility of U.S. AI, and could even harm the profitability prospects of the industry itself.

With each presidential transition, U.S. policy grows more unpredictable, brutal, and arbitrary. How much order and freedom will evaporate, and when, remains uncertain.

Even if the Secretary of War retracts the threat, the damage is done. The government has already shown that if you refuse to submit, you may be treated as an enemy. That lesson will corrode America’s political culture ever more deeply.

More importantly, this is the first open fight over who should control frontier AI. Our public institutions are disordered, malicious, and strategically unclear. The failure of political elites is not new; it has been an increasingly prominent theme of the past twenty years: like before, but noticeably worse.

Perhaps the next phase of rebuilding will be closely tied to advanced AI. In future institutional frameworks, do not equate “democratic control” with “government control.” The gap between the two has never been as clear as it is today.

No matter what the future holds, we must ensure that mass surveillance and autonomous weapons do not erode freedom. I commend AI labs that have held the line. Over the coming decades, our freedom may be more fragile than we imagine.

Everyone must choose the future they are willing to fight for or defend. When making that choice, ignore the noise of that “death rattle” and think independently. You are entering a new era of institutional reconstruction.

But before that, take a moment to mourn the once-republic.
