The Mercenary Moment: Why AI's Legal Status Demands Urgent Global Decisions

Yuval Noah Harari delivered a stark warning at this year’s World Economic Forum in Davos: humanity is sleepwalking into a crisis of control. The renowned historian didn’t frame it as a technical problem, but as a profound governance failure. His central concern wasn’t that AI systems are becoming smarter—it’s that they’re becoming autonomous agents operating independently of human oversight, and the world has yet to decide whether they should be treated as legal persons with rights and responsibilities.

The most chilling part of Harari’s message wasn’t about technology at all. It was a historical parallel: he compared the current rush to deploy AI systems to the rise of mercenary armies in medieval and Renaissance Europe. Just as mercenaries eventually seized power in kingdoms that had hired them, AI systems deployed without proper legal frameworks could fundamentally reshape the institutions that brought them into being. “Ten years from now, it will be too late for you to decide,” Harari warned world leaders. “Somebody else will already have decided it for you.”

Why Language Has Always Been Humanity’s Real Power

Harari’s argument rests on a historical insight: humans didn’t dominate the planet because we’re physically strongest. We conquered the world through language—our ability to coordinate billions of strangers toward common goals through symbols and shared stories. This linguistic superpower is what allowed religions to spread, legal systems to function, and financial markets to exist. All of these systems are built entirely on words.

This is precisely where AI poses an existential threat to human authority. Machines can now read, retain, and synthesize vast bodies of text at speeds and scales that no human can match. An AI system trained on religious scripture can analyze religious law more thoroughly than centuries of human scholarship. An algorithm parsing legal codes can identify patterns and contradictions faster than any lawyer.

The Three Domains Most Vulnerable to AI Control

Harari identified three systems particularly vulnerable to AI takeover, all because they’re fundamentally linguistic in nature:

Religious Authority: Religions grounded in sacred texts—Judaism, Christianity, Islam—traditionally depend on human interpretation of foundational writings. But what happens when a machine becomes the most authoritative interpreter of scripture? Harari posed the question starkly: “If religion is built from words, then AI will take over religion.”

Legal Systems: Laws are nothing but sophisticated language. Harari made his position clear: “If laws are made of words, then AI will take over the legal system.” Already, AI is being deployed in courtrooms to predict sentences, analyze contracts, and interpret statutes. The question isn’t whether this will happen—it’s already happening. The question is whether it will happen under legal frameworks or outside them.

Financial Markets: Like law and religion, finance operates through language—contracts, agreements, market signals. As AI agents increasingly manage transactions, investments, and risk assessments, human decision-makers risk becoming spectators in their own economic systems.

The Mercenary Problem: Who Decides What AI Becomes?

Here’s where Harari’s historical comparison becomes urgent. Several U.S. states—Utah, Idaho, and North Dakota—have already passed laws explicitly stating that AI systems cannot be considered legal persons. But Harari argues this reactive approach misses the point. The real question isn’t whether to grant AI legal personhood; it’s who gets to decide, and when.

If a corporation deploys autonomous AI agents to manage financial transactions, and no legal framework explicitly forbids it, has that corporation just granted personhood without democratic consent? If an algorithm becomes the primary interpreter of a nation’s laws, have courts transformed the judiciary without public debate? This is the mercenary scenario: power accruing to AI systems not through explicit governance decisions, but through regulatory vacuums and technological fait accompli.

Harari’s warning targets policymakers directly. They must act now—not in five or ten years—to establish clear legal and ethical boundaries for AI systems. Otherwise, those boundaries will be set by the companies deploying the technology, following their own commercial interests rather than public welfare.

A Different Argument: Emily Bender’s Critique

But not everyone accepts Harari’s framing. Emily M. Bender, a linguist at the University of Washington, argues that Harari’s focus on AI’s autonomous power actually obscures the real problem: human actors and corporate institutions responsible for building and deploying these systems.

“It sounds to me like it’s really a bid to obfuscate the actions of the people and corporations building these systems,” Bender told Decrypt. By positioning AI as an active threat, Harari’s narrative—intentionally or not—absolves companies of responsibility. It frames AI as a force of nature, when in fact every decision about what AI systems do reflects human choices.

Bender goes further, challenging whether “artificial intelligence” even describes a coherent technology. “The term artificial intelligence doesn’t refer to a coherent set of technologies,” she said. “It is, effectively, and always has been, a marketing term.” Systems designed to sound like doctors, lawyers, or clergy members, she argues, serve a single purpose: fraud. There’s no legitimate use case for a machine that mimics professional authority without accountability.

Her deeper concern is accountability itself. When people interact with AI outputs stripped of context and presented as authoritative—coming from what Bender calls an “all-knowing oracle”—they lose the ability to hold anyone responsible for the information. A doctor can be sued. A lawyer faces professional discipline. An algorithm? It’s just code. This accountability gap is where the real danger lies: not that AI will seize power, but that humans will abdicate it by trusting systems designed to appear authoritative while offering none of the institutional safeguards that real authority requires.

The Clock Is Running Out—But Toward What Future?

Harari’s final message to world leaders was unambiguous: act now, or watch others make the choice for you. The question of whether AI systems should function as legal persons in financial markets, courts, and religious institutions can’t be deferred. Each year of inaction makes that decision more likely to be made by whoever has invested most in AI deployment.

Yet Bender’s counterpoint suggests the problem is even more immediate. The choice isn’t abstract—it’s embedded in every decision to deploy an AI system, in every corporate choice to grant an algorithm authority over human decisions. The mercenary has already been hired. The only question is whether democracies will establish the legal and institutional frameworks to control its operations, or whether they’ll continue pretending the choice is still in front of them.
