Author: TT3LABS, Web3/AI/SaaS Remote Recruitment Platform
On February 26, 2026, fintech giant Block announced layoffs of more than 4,000 employees, cutting headcount from over ten thousand to fewer than 6,000. CEO Jack Dorsey wrote in a letter to shareholders:
“Smart tools have changed the meaning of creating and running a company… A significantly smaller team, using the tools we are building, can do more and do it better.”
Dorsey also gave a very blunt prediction:
“I think most companies are already too late. Within the next year, most will come to the same conclusion and make similar structural adjustments.”
In after-hours trading that day, Block’s stock surged more than 20%. That is the capital market’s answer in real money: paying for AI leverage and efficiency gains inside the enterprise.
An ordinary person with no programming skills can now, overnight, independently develop a fully functional app using large models. The capital markets will inevitably ask a sharp question: How much value remains in a tech giant that employs tens of thousands of programmers just to keep a super app running, given its enormous labor costs?
The trend of replacing human labor with AI will be followed by more large companies. Anxiety is inevitable, but worrying alone is useless. We must start from the big picture of the changing environment and gradually return to individual survival strategies.
Some in the market are beginning to define this stage as “Web4.” To clarify the context, let’s review the different stages of internet evolution:
Web2
Centered on software-human interaction. Different platforms use algorithms to capture user attention. Essentially, it’s a battle for traffic.
Web3
Aims to solve digital asset ownership and value distribution. Many equate it with cryptocurrency, but fundamentally it remains a game over the rules of wealth distribution; it never touched the relations of production behind digital goods.
Pre-Web4
AI has begun to impact the very nature of production relations. It is no longer just a tool to improve efficiency but is transforming into a new type of means of production. Those who master its use will be able to significantly increase their output limits.
In traditional team collaboration, there are many hidden costs: excellent leaders’ judgment and industry intuition are hard to replicate; misunderstandings and rework in multi-person execution are unavoidable. These are “hidden taxes” on organizational operation, with no clear solution before. AI has drastically compressed this hidden tax. It has no learning curve; with clear prompts, it can execute at high quality and handle multiple tasks simultaneously. Combining one’s strategic judgment with AI’s execution leverage can unlock the output of an entire team from the past.
Of course, AI still occasionally “spouts nonsense with a straight face,” so human review and judgment remain indispensable. But model reliability is improving month by month, and the buffer window for purely executional roles is far shorter than most people think.
In the short term, ordinary people can benefit from efficiency gains by adopting AI tools. But looking ahead, once AI levels the playing field by eliminating basic efficiency gaps and sharply lowering professional entry barriers, companies will find that if individual productivity rises substantially without a proportional expansion in business scale, keeping the original headcount becomes a liability.
Current salary disparities illustrate this. According to TT3LABS’ job market data, since 2025, the AI employment market has repeatedly seen compensation packages exceeding “ten million USD,” and these candidates are mostly young AI engineers with limited team management skills. When Meta recruited core researchers from OpenAI, signing bonuses alone exceeded $100 million; the average equity compensation for OpenAI employees reached $1.5 million; Anthropic’s senior research engineers earn up to $690,000 annually (excluding equity).
Capital is spending this money to acquire a scarce ability: making AI itself stronger. Those who can push the evolution of foundational models can have their value exponentially amplified across the entire business network. Others, whose work can be covered by AI at lower costs, may see their valuations shrink.
This also triggers a deeper potential crisis. Increasingly, people’s first reaction to problems is to ask AI for answers, skipping the process of reasoning, verification, and trial-and-error. Over time, this erodes the ability to think critically. The “clumsy effort” of this process is what sharpens your intuition for problems. Relying on AI to replace this process long-term will degrade your role to a “demand translator”: converting others’ requests into AI inputs and delivering AI outputs to others. This intermediary step is precisely what next-generation AI can most easily skip.
Without a sense of direction, fear is just anxiety. Before discussing countermeasures, we need to draw an “impact map.” This isn’t to spread panic but to help everyone locate themselves.
High-risk roles that can be clearly instructed
Junior coding, basic data analysis, standardized report generation, template design, routine translation and proofreading. These roles share a common feature: their work can be broken down into “input → processing → output.” Among the 4,000+ layoffs at Block, many fall into this category. Their skills are not poor, but their tasks are exactly what large models can handle.
A good self-check standard: If your entire work can be written as an AI prompt, then the machine is ready to replace you. The only question is when the company will make that decision.
Experience-based middle management under “pressure”
Project managers, operations supervisors, mid-level engineers. Their work involves judgment and coordination. AI can’t replace them in the short term, but it is “compressing” their roles. Previously, a business process required five middle managers overseeing segments; now, AI has taken over upstream and downstream execution, allowing one or two people to run the entire chain.
This group faces a market with fewer positions. Their abilities haven’t declined, but demand for their roles is shrinking fast. Their way out is to use AI to amplify execution and move upward to claim problem-defining authority.
Drivers of value-added uncertainty
Some roles are not about “doing right” but about “making decisions under incomplete information and bearing the consequences.” Complex business negotiations, crisis PR, cross-cultural management, high-risk investment judgments. AI can provide analysis and suggestions but cannot sign off, take responsibility, or read the hidden interests behind a glance at the dinner table.
These roles won’t depreciate; instead, because AI drastically lowers underlying execution costs, the same budget can fund larger projects, and decision-makers’ leverage increases.
In reality, many people’s work spans multiple tiers. A simple self-test: think about your daily tasks—how much can be clearly instructed with a prompt, and how much requires you to decide in ambiguity. The higher the proportion of the former, the more urgent it is to make changes.
At the end of January, OpenClaw (“Lobster”) appeared out of nowhere, its GitHub stars passing 170,000 within days. Major model providers quickly followed: Alibaba Cloud launched one-click deployment; Tencent released CoPaw as a direct counterpart; MiniMax, Kimi, and others shipped compatible solutions.
Then you notice an interesting phenomenon: many people spent more time this month researching “how to deploy Lobster” and “which plan is more cost-effective” than actually using AI to produce business results. Everyone chases tools, but once you’ve caught up, anyone can copy your setup in two hours.
“All large language models—OpenAI, Anthropic, Meta, Google, xAI—are trained on the same open internet data. So, fundamentally, they are the same, which is why they are being commoditized at an extremely fast pace.”
— Larry Ellison, Oracle FY2026 Q2 Earnings Call
The reverse understanding is: as long as your work relies solely on the capabilities of general large models, your output will be homogeneous. No matter how fancy your prompts, there’s no moat.
A clear trend has emerged: from large enterprises to startups, more organizations are deploying localized private models. The main reason is information security—no one wants to give core business data to third-party APIs. But this trend has an underestimated chain reaction: as industry leaders keep their data and knowledge in private deployments, the publicly available industry information for general models to learn from diminishes and becomes outdated. On the surface, AI lowers the knowledge threshold for everyone, but the truly valuable industry knowledge is rapidly disappearing from public networks into private knowledge bases.
So your years of accumulated industry “tacit knowledge” are not depreciating; they are appreciating, as long as you actually put them to use.
Organize and structure the unstandardized business experiences scattered in your mind, chat logs, and emails into “context” your private model can digest. TT3LABS data shows that candidates with over two years of Web3 industry experience have a much higher initial screening pass rate than those from big tech without industry background. The core reason is that industry know-how outweighs general technical skills. For example, understanding compliance logic and token listing rules from three years of CEX operations; judging proposal design and community sentiment shifts after two DAO governance cycles; having an intuitive grasp of audience psychology and narrative rhythm from vertical content work—these are not found in any public training data.
Once you structure these private experiences and feed them into a model, your AI is no longer a general encyclopedia but a dedicated partner that works only for you and understands your niche. This depth of output is something others with the same general model cannot match.
The core logic is simple: AI outperforms everyone in processing public knowledge, but in handling private experience, it depends entirely on your input. Those who can combine deep industry know-how with AI are the new core assets under the new division of labor.
AI models are evolving rapidly. Today’s GPT, Claude, Gemini may be replaced by more powerful versions in half a year. But for you, switching to a better model is just changing an API. The truly irreplaceable asset is the private data and experience library you feed into it.
Models are a universal infrastructure; anyone can use them. But the industry insights, business judgments, pitfalls you record are your exclusive “training data.” The stronger the AI becomes, the better it can digest your data, and the higher your private barrier rises. So don’t worry about “building a knowledge base that will become outdated soon.” Your knowledge base is the only asset that won’t depreciate with model iterations. As models evolve, your data barrier will only appreciate.
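A minimal sketch of the “switching models is just changing an API” point: keep the model call behind one replaceable function, so upgrading providers changes a single line while your private context survives untouched. All names here (`PrivateAssistant`, `stub_backend`) are illustrative assumptions, and the stub stands in for a real API call to whichever provider you use.

```python
from dataclasses import dataclass
from typing import Callable

# A "backend" is any function: (system_prompt, user_question) -> answer.
# Swapping GPT for Claude for a local model means swapping this one callable;
# the private context you feed in stays yours across every model generation.
Backend = Callable[[str, str], str]

@dataclass
class PrivateAssistant:
    context: str      # your structured private knowledge, e.g. loaded from notes
    backend: Backend  # the current model provider, replaceable in one line

    def ask(self, question: str) -> str:
        system = f"You are my domain assistant. Private context:\n{self.context}"
        return self.backend(system, question)

# Stub backend standing in for a real API call (e.g. an OpenAI or Anthropic SDK):
def stub_backend(system: str, user: str) -> str:
    return f"[stub model] saw {len(system)} chars of context, question: {user}"

assistant = PrivateAssistant(context="CEX listing rule notes...", backend=stub_backend)
print(assistant.ask("Draft a listing checklist"))
```

Because the context lives outside the backend, a model upgrade is literally one argument change, which is the sense in which the knowledge base, not the model, is the durable asset.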
Meanwhile, traditional workplace competition is being rewritten. Employees used to compete by working late; now, with machines outputting 24/7, strategies based on “I can work longer than others” are nullified by AI.
Many say, “I still provide emotional value to the team.” True, and it is a uniquely human ability, but its premium depends on the level at which it operates. When a team shrinks from ten people to two plus a row of AI agents, the “team lubricant” role loses its stage. At the decision-making level, however, complex negotiations, high-stakes trust-building, and mediating conflicts of interest become more valuable as underlying costs drop. Emotional value isn’t disappearing; it’s migrating upward.
Ultimately, the most important investment for individuals in the AI era isn’t learning which tool to use, but maintaining that unique private AI. Tools will iterate; your experience library will not.
Returning to the Block case: some were laid off, but some remained. The difference is that after AI becomes a standard productivity tool, those who remain are the ones who are truly indispensable. Don’t wait for your company to arrange AI training; start trying these actions today:
01. Shift from “doing it yourself” to “building workflows”
The biggest trap for workers is using AI merely to “cut corners” (writing weekly reports, polishing emails). That is still a task-oriented mindset. Instead, treat yourself as a contractor and rebuild your core outputs into an AI-automated production line.
Don’t try ten models at once. Pick the most mature tool now (like ChatGPT Plus or Claude), and force it into your most time-consuming, experience-dependent process. Transform your linear process of “manual data collection → analysis → output” into “set automation to fetch data → feed into AI analysis framework → manual adjustments.” When you can compress a week’s work into a day with stable quality, you are no longer just a single compute node—you become a high-leverage “micro-company.”
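The rewiring from “manual data collection → analysis → output” into “automated fetch → AI analysis framework → manual adjustment” can be sketched as three small stages. Everything here is illustrative: the inline CSV stands in for a scheduled data export, and `run_model` is a placeholder where a real pipeline would call a model API.

```python
import csv
import io

# Stage 1: automated fetch. Here a CSV string stands in for a scheduled export
# from your analytics tool or database.
RAW = """date,channel,signups
2026-01-05,organic,120
2026-01-05,paid,80
2026-01-06,organic,95
"""

def fetch(raw: str) -> list[dict]:
    return list(csv.DictReader(io.StringIO(raw)))

# Stage 2: a fixed analysis framework. The prompt template encodes *your*
# framing of the problem; the data is slotted in automatically.
def build_prompt(rows: list[dict]) -> str:
    table = "\n".join(f"{r['date']} {r['channel']} {r['signups']}" for r in rows)
    return f"Summarize weekly signup trends by channel:\n{table}"

def run_model(prompt: str) -> str:
    # Placeholder: a real pipeline would send `prompt` to an LLM API here.
    return f"[draft analysis of {prompt.count(chr(10))} lines]"

# Stage 3: the human stays as final reviewer, not as data clerk.
def pipeline(raw: str) -> str:
    draft = run_model(build_prompt(fetch(raw)))
    return draft  # reviewed and edited by a person before it ships
```

The point of the structure is leverage: stages 1 and 2 run unattended, and your time concentrates on the one stage AI cannot own, the final judgment call.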
02. Turn implicit experience into your personal digital clone
Large models learn from public data. They understand theory, but they don’t know the hidden preferences of your company’s trickiest key clients, or the pitfalls in your department’s dealings with finance. This tacit knowledge, earned through countless stumbles, is your core asset.
But if these assets stay only in your mind, they won’t compound. Your current task is to use the customization features of large models (like Custom GPTs or Claude Projects) to turn your experience into “system preset instructions.” Feed it your edge cases, failure reviews, unwritten industry rules. Your goal isn’t to build a static knowledge notebook but to “train” a personalized 24/7 assistant that embodies your business style. Once this “digital clone” is formed, others with general AI can’t compete.
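A hedged sketch of what “system preset instructions” can look like before you paste them into a Custom GPT or Claude Project: collect categorized notes and render them as one instruction block. The categories and example notes below are invented placeholders, not a prescribed schema.

```python
# Assemble scattered experience notes into one "system preset" block: the kind
# of text you would paste into a Custom GPT or Claude Project instruction field.
# Categories and notes are illustrative; start from whatever you actually know.
NOTES = {
    "edge cases": [
        "Client A never approves anything on Fridays; send drafts by Thursday noon.",
    ],
    "failure reviews": [
        "Q3 launch slipped because finance sign-off came after legal, not in parallel.",
    ],
    "unwritten rules": [
        "Proposals without a market-maker commitment get silently deprioritized.",
    ],
}

def build_system_preset(notes: dict[str, list[str]]) -> str:
    lines = ["You are my work assistant. Apply these hard-won rules before generic advice:"]
    for category, items in notes.items():
        lines.append(f"\n## {category.title()}")
        lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)

preset = build_system_preset(NOTES)
print(preset)
```

Regenerating the preset from a notes file each time you learn something new keeps the “digital clone” current, instead of letting the instructions fossilize after the first draft.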
03. Strengthen your “problem-defining authority” and sense of responsibility
Practice deliberately handing over “answer-seeking” tasks to machines, while keeping “questioning” and “decision-making” in your hands. AI is a perfect answer engine but can’t grasp the true business motivation behind a demand. When a boss says “We need a new retention strategy,” AI can instantly suggest ten growth hacks. But only you can combine current budgets and resources to say, “Plan B is perfect but unfeasible now; Plan C, with half the features, fits our current pace.”
Also understand: AI doesn’t go to jail, and it can’t take responsibility. When a company pays you a high salary, it is often buying your guarantee that you will stand behind business results. When you submit AI-generated code or plans, you must be able to say with confidence: “I’ve reviewed this with my expertise, and I am responsible for the final implementation.” This willingness to decide in ambiguity and bear the ultimate business consequences is a “responsibility premium” machines can never replace.
Dorsey said, “Most companies are already too late.” But for individuals, this statement is also true: most haven’t started preparing and aren’t aware of this trend.
Not everyone needs to become an AI expert. But everyone must ask: which parts of your work will machines do sooner or later, and which parts are uniquely yours? Then shift your time and effort from the former to the latter.
One day, if AI surpasses humans in all fields—perhaps in 2027, perhaps in 2030—it won’t be a change you can just watch.
It won’t wait for you to be ready.