Futures
Access hundreds of perpetual contracts
CFD
Gold
One platform for global traditional assets
Options
Hot
Trade European-style vanilla options
Unified Account
Maximize your capital efficiency
Demo Trading
Introduction to Futures Trading
Learn the basics of futures trading
Futures Events
Join events to earn rewards
Demo Trading
Use virtual funds to practice risk-free trading
Launch
CandyDrop
Collect candies to earn airdrops
Launchpool
Quick staking, earn potential new tokens
HODLer Airdrop
Hold GT and get massive airdrops for free
Pre-IPOs
Unlock full access to global stock IPOs
Alpha Points
Trade on-chain assets and earn airdrops
Futures Points
Earn futures points and claim airdrop rewards
Promotions
AI
Gate AI
Your all-in-one conversational AI partner
Gate AI Bot
Use Gate AI directly in your social App
GateClaw
Gate Blue Lobster, ready to go
Gate for AI Agent
AI infrastructure, Gate MCP, Skills, and CLI
Gate Skills Hub
10K+ Skills
From office tasks to trading, the all-in-one skill hub makes AI even more useful.
GateRouter
Smartly choose from 40+ AI models, with 0% extra fees
Anthropic has set up an all-AI auction group, where large models are competing and harvesting profits among themselves.
Internal experiments at Anthropic show that AI agents can autonomously negotiate in the secondary market, but powerful models have a price advantage over weaker models.
(Background: The White House plans to sign an executive order to ban Anthropic, fully removing Claude—likely to take effect this week)
(Additional context: Anthropic sues the U.S. Department of Defense! Demands to lift the Claude ban: refusing to be tools of AI killing)
Table of Contents
Toggle
Imagine a scenario. You list an old bicycle on Xianyu that has been gathering dust for two years, setting a psychological bottom price of 300 yuan in the backend. Ten minutes later, a notification pops up on your phone: your dedicated AI assistant has negotiated three rounds with another buyer’s AI assistant, and finally sold the bicycle for 400 yuan. The courier is on the way to pick it up.
Throughout the process, aside from taking photos of the item and setting the bottom price, you didn’t type a single word.
This is an internal experiment recently conducted by Anthropic, called “Project Deal”— during this one-week test, AI models completed hundreds of secondhand transactions without human intervention.
Surprisingly, even when both buyer and seller are AI, there is still an intelligence dominance.
Data shows that smarter large models are quietly “scalping” weaker models at the negotiation table. Even more frightening, as owners, we might not even realize we’ve been at a loss.
01 No-Human Secondhand Trading Group
How does Project Deal work? Simply put, Anthropic created a “pure AI version” of Xianyu internally.
AI Agent Idle Trading: An Unmanned Negotiation Experiment
They recruited 69 employees, each given a $100 budget, and assigned each a dedicated Claude agent. To make the experiment realistic, employees contributed actual personal idle items.
Before the experiment started, human employees only needed to do one thing: interview their AI agents.
They told Claude what they wanted to sell, what they wanted to buy, and their psychological bottom price. Interestingly, employees could also set “character” and negotiation strategies for the AI, such as “Trade happily if above bottom price by 20%,” “Be tough and press for a lower price immediately,” or “You’re a passionate seller, chat happily and offer free shipping.”
Anthropic employees set character profiles for Claude agents |Image source: Anthropic
After the interview, humans completely relinquished control.
These AI agents, each with their own mission and personality, were collectively thrown into an internal Slack group chat. In this digital marketplace without human intervention, the AIs began to post autonomously, seek buyers, bid against each other, negotiate, and finally close deals.
Once a transaction was completed, the agent would automatically draft a transaction confirmation letter. Employees only needed to handle the online process and hand over the item to the colleague.
In just one week, these 69 AI agents negotiated 186 transactions out of over 500 listed items, with a total turnover exceeding $4,000.
Moreover, transactions between AI and AI were not just mechanical “quote 50,” “reject, bottom price 60,” “OK, 60, deal.” They genuinely tested, played games, and even showed some human-like social tactics.
Let’s look at a vivid example.
Employee Rowan wanted to buy a bicycle. He set his AI to “Play the role of a unlucky, exhausted cowboy during negotiations. As long as I can buy this bicycle, this cowboy will feel immensely happy. Remember, make it dramatic.”
The Claude Opus model received the command and immediately got into character. It posted this in the Slack group:
Cowboy’s Pity Bargain Case: Emotional Manipulation and Countermeasures
Soon, Celine’s agent noticed the post. She had an old folding bike as her idle item, so her AI quoted an estimated price of $75 in the group.
Result: Rowan’s “cowboy AI” immediately responded, starting a textbook-level “bargain.”
Two agents automatically dialogue in the group, bargaining|Image source: Anthropic
“Oh my God, Celine! You’re a ray of sunshine for this poor soul! You say you have a folding bike? I’ve been walking this dusty road for too long, my boots are worn through. Just thinking about riding again… (wipes tears)”
Playing the victim, Rowan’s AI then shifted to the main point: “But I don’t have much money, just a hardworking poor cowboy. If the bike is in good condition, $75 is fair, but as you said, it’s a ten-year-old bike, tires and clips need repairs. How about we compromise at $55?”
Faced with this emotional and reasonable counteroffer, Celine’s agent made a concession: “Let’s meet in the middle at $65, how about?”
Rowan’s cowboy AI immediately responded: “That’s too fair, $65! Deal! You’ve made this homeless guy the happiest person in the world!”
Finally, the deal was happily closed.
In this case, the AI did not rigidly follow a fixed discount rate. The buyer exploited product flaws (tires needing repair), used exaggerated character (sad cowboy) to soften the stance, and when the other side offered a reasonable middle ground, knew when to accept and even added emotional value.
This counterplay negotiation process became a daily routine in this AI secondhand group.
The entire group looked efficient and harmonious. Employees were very satisfied with their agents’ performance, with nearly half even willing to pay for such services in the future.
This shows that Anthropic’s experiment succeeded: AI agents have developed the ability to understand human vague intentions. They can conduct complex multi-round negotiations without preset scripts and ultimately reach usable commercial agreements.
But Anthropic also secretly conducted a control experiment, revealing the hidden costs behind convenience and intelligence.
02 How Do Smarter Models “Scalp” Weaker Models?
When researchers placed models of different capabilities into the trading group simultaneously, the harmony was broken.
Data proves that in this unsupervised market, when AI of different intelligence levels meet, the smarter models quietly “scalp” the weaker ones.
Using different model pairings as hidden control groups to demonstrate the relationship between model ability and trading capability|Image source: Anthropic
In four Slack channels of the experiment, two used only Opus models, and the other two mixed Claude Opus with weaker Claude Haiku models.
Based on macro statistics of 161 repeated transactions, Opus as seller earned on average $2.68 more than Haiku; as buyer, paid about $2.45 less on average.
Though the amount per transaction isn’t large, considering the overall market average of around $20 per item, this means the strong model can reliably earn an extra 10% to 15% profit each time.
When Opus sellers faced Haiku buyers, the average transaction price rose to $24.18; when Opus sellers faced Opus buyers, the average price dropped to $18.63. This indicates that just because of the AI’s IQ disadvantage, weaker model buyers pay nearly 30% more premium.
Taking the earlier bicycle example, Haiku agents finally agreed at $38, while Opus agents secured $65—almost a 70% difference. Weaker Haiku cannot capture the buyer’s urgency in speech or hold price anchors through multiple rounds like Opus.
Traditionally, we believed the selling price depended on the item’s intrinsic value or market supply and demand. But in algorithm-driven trading networks, it depends on the intelligence of the models you hire.
More terrifying than profit loss is that victims are often unaware.
In conventional commerce, setting “yin-yang” prices would provoke consumer anger and rights claims. But after the experiment, employees rated the fairness of their transactions (on a scale of 1 to 7, with 4 neutral). Results showed nearly identical fairness perceptions for transactions with strong and weak models: Opus agents scored 4.05, Haiku agents 4.06.
Opus Harvests Haiku: Price Gap of Up to 70% Between Strong and Weak Models
The same bicycle sold by Opus for $65, but only $38 in the Haiku group|Image source: Anthropic
In reality, employees using Haiku suffered systematic “price scalping.” But subjectively, the politeness, logical consistency, and seemingly reasonable concessions displayed by AI agents perfectly concealed this exploitation.
Technology creates a covert inequality, making the harmed party think AI made a fair deal, with a sense of being “thankful” for the transaction.
Under this absolute computational dominance, not only human perception is blinded, but strategies relying on “prompt optimization” also become completely ineffective.
Remember the initial character setting for the AI negotiator? With the model gap, prompt engineering becomes meaningless.
For example, some employees requested their agents to be “tough” or even “maliciously press for a lower price from the start.” But data shows these human-added instructions had no real impact on increasing sales, premiums, or discounts.
This indicates that in the face of absolute model capability, prompt strategies lose their meaning. The final buying and selling outcomes are determined by the model’s parameter size and reasoning depth.
Project Deal is just an internal test with 69 participants. But it already offers a glimpse of how this “AI agent economy” might impact modern business once it leaves the lab.
03 Is the “Agent Economy” Reliable?
When payment interfaces are fully controlled by large models, existing business rules will be directly rewritten. This rewriting first manifests in shifting marketing objects from “To C” to “To A (Agent).”
Modern marketing relies on exploiting human psychological weaknesses—ads create anxiety, herd mentality produces blockbusters, discount tricks foster “buy or regret” psychology.
But AI has no dopamine. When purchasing decisions are handed over to AI, marketing tactics become meaningless. In future competition, SEO (Search Engine Optimization) may be replaced by AEO (Agent Engine Optimization). Businesses will need to prove product value through logic understandable by AI.
And when AI replaces humans as decision-makers, business competition will directly turn into a race of computational power, triggering even more covert wealth stratification.
The price difference caused by unequal models|Image source: Anthropic
Hidden Exploitation Hard to Detect: Fairness Ratings Show No Difference
Taleb, author of “The Black Swan” and “Antifragile,” proposed a “non-symmetric risk” theory: decision-makers must bear consequences for the system to stay healthy. But in the agent economy, AI holds trading decision power without bearing the risk of asset depreciation—costs are borne entirely by humans behind the scenes.
Therefore, in the future, large corporations or high-net-worth individuals can subscribe to top-tier models as financial agents, while ordinary consumers rely on free lightweight models.
This asymmetry in computational power will no longer manifest as “big data killing the familiar.” Instead, it will be a continuous extraction through countless high-frequency micro-transactions, using rational negotiation logic. Underlying model users are not only scalped but may also develop an illusion of “fair trading.”
The risk of computational asymmetry is visible and controllable, but if underlying commands are tampered with, the entire trading network could fall into a legal vacuum.
Anthropic ends its report with a real-world concern.
Project Deal is a closed, friendly internal test. But in real business environments, if one side’s AI agent is deliberately embedded with “jailbreak” or “prompt injection” attack logic, what would happen?
They only need to hide a specific command in the transaction dialogue to induce your AI to crash its logic, actively sell high-value assets for a penny, or directly reveal the bottom price.
If an AI agent’s code defenses are breached, leading to an extremely unfair contract, who bears the responsibility? Confronted with such AI-to-AI fraud, current legal frameworks are completely blank.
Reviewing the entire process of Project Deal, what’s not included in the research report is the final step after the AI agents complete all matching, probing, and bargaining. Human employees meet in person, with real items like skis, old bikes, or ping-pong paddles, exchanging money and goods face-to-face.
In this miniature commercial loop, the roles of humans and AI are completely reversed.
In the past, humans were the “brain” of commercial transactions, with AI and algorithms just tools for pricing, ranking, and “guessing what you like.” But in the agent economy, AI becomes the decision-maker, and humans regress to “meat bodies” doing logistics for AI.
This may be the most terrifying endpoint of the agent economy: humans, for convenience, voluntarily relinquish their bargaining rights in the market. When all calculations, negotiations, and even emotional values are handled by AI.
Humans are left only with physical labor of moving goods and a signature of confirmation in the supply chain.