Author: Sleepy.txt
In 2016, The New Yorker published a profile of Sam Altman titled “Sam Altman’s Manifest Destiny.” That year, he was 31 and already the president of Y Combinator, Silicon Valley’s most powerful incubator.
The article included a telling detail: Altman liked racing cars, owned five sports cars, and enjoyed chartering planes. He told reporters he carried two bags, one of which was a ready-to-go survival kit.
He had also stockpiled guns, gold, potassium iodide (for radiation exposure), antibiotics, batteries, water, and gas masks from the Israel Defense Forces, and he owned land in Big Sur, California, that he could fly to for refuge at a moment’s notice.
Ten years later, Altman had become the person most dedicated to manufacturing doomsday scenarios and selling seats on the ark. He warned the world that AI would destroy humanity, yet he was actively accelerating that process; he claimed he wasn’t in it for the money, yet built a $2 billion personal investment empire; he called for regulation, while pushing out anyone who tried to slow things down.
He is best understood not as a paranoid lunatic or a cunning con artist, but as the most typical, and most successful, product of Silicon Valley’s giant machine. His “destiny” is to turn collective human anxiety into his scepter and crown.
Altman’s business model can be summed up in one sentence: packaging a business as a holy war about human survival.
He began practicing this approach in his YC days, transforming YC from a small workshop that gave early-stage startups a few tens of thousands of dollars into a vast entrepreneurial empire. He created YC Research to fund projects that sounded grand but weren’t profitable, and told reporters that YC aimed to fund “all important fields.”
By the time he reached OpenAI, he had perfected this strategy. He sold a packaged worldview: AI apocalypse + salvation.
He is more skilled than anyone at depicting the “extinction-level risks” posed by AI. He co-signed a statement with hundreds of scientists declaring that AI risks rival nuclear war. He has said publicly that “we are a little bit scared of this,” and that people should be glad about that, implying the fear itself is a valuable warning.
Every one of these statements makes headlines and serves as free advertising for OpenAI. This carefully crafted fear is the most efficient lever on attention. Which technology excites capital and media more: one that “improves efficiency,” or one that “could destroy humanity”? The answer is obvious.
As for salvation, he already has a product: Worldcoin. Once fear is embedded in public consciousness, selling the solution becomes natural. Worldcoin uses a basketball-sized silver orb to scan people’s irises around the world, claiming the goal is to give everyone money in the AI era. The story sounds appealing, but this trade of money for biometric data quickly drew scrutiny from multiple governments: Kenya, Spain, Brazil, India, Colombia, and others halted or investigated Worldcoin over data privacy concerns.

But for Altman, this might not matter at all. What’s important is that he successfully positioned himself as the “only one with a solution.”
Packaging fear and hope together is the most effective business model of this era.
How does someone who constantly talks about the end of the world do business? Altman’s answer: turn regulation into his weapon.
In May 2023, he testified before the U.S. Congress for the first time. Unlike other tech CEOs, he didn’t complain about regulation; instead, he proactively asked: “Please regulate us.” He proposed a licensing system for AI under which only licensed companies could develop large models. The move projected the image of a responsible industry leader, but at that time OpenAI was far ahead technically, and a strict, high-threshold regulatory regime would mainly have served to block potential competitors.
However, over time, especially as competitors like Google and Anthropic caught up and open-source communities gained strength, Altman’s stance on regulation subtly shifted. He began emphasizing that overly strict regulation—especially mandatory pre-release reviews—could “kill innovation” and be “catastrophic.”
Now, regulation is no longer a moat but a stumbling block.
When he holds an absolute advantage, he calls for regulation to lock it in; when that advantage wanes, he advocates freedom so he can break out again. He even seeks to extend his influence upstream. He has floated a $7 trillion chip plan, courting sovereign wealth funds such as the UAE’s, with the aim of reshaping the global semiconductor industry. This goes far beyond the usual scope of a CEO; it is the move of an ambitious strategist out to influence global power structures.

Behind all this is OpenAI’s rapid transformation from a nonprofit into a commercial giant. Founded in 2015 with the mission of ensuring that AGI safely benefits all humanity, it established a “capped-profit” subsidiary in 2019, and by early 2024 it had quietly dropped the word “safely” from its mission statement. Although the company still operates under the “capped-profit” structure, its commercialization has accelerated dramatically: from tens of millions of dollars in revenue in 2022 to over a billion dollars annually in 2024, with its valuation soaring from $29 billion to over a hundred billion.
When someone starts gazing at the stars and talking about human destiny, it’s wise to check where their wealth is concentrated.
On November 17, 2023, Altman was ousted by the board he had largely handpicked, on the grounds that he had not been “consistently candid in his communications.”
What happened over the following five days was less a business dispute than a referendum on faith. President Greg Brockman resigned; more than 700 employees, about 95% of the staff, signed a letter demanding the board’s resignation and threatening to jump ship to Microsoft; Microsoft CEO Satya Nadella publicly welcomed Altman aboard. Ultimately, Altman was reinstated, regained full authority, and purged nearly every board member who had opposed him.
Why could a CEO the board had deemed not “consistently candid” return unscathed and even come back with more power?
Board member Helen Toner later revealed the details: Altman had concealed his actual control of OpenAI’s startup fund, had lied repeatedly about critical safety processes, and the board had learned of ChatGPT’s launch from Twitter. Any one of these allegations alone could have toppled a CEO.
But Altman was fine. Because he is not an ordinary CEO—he is a “charismatic leader.”
This concept, introduced by sociologist Max Weber a century ago, describes a form of authority not derived from position or law, but from the leader’s “extraordinary personal charisma.” Followers believe in him not because he’s right, but because he is who he is. This belief is irrational. When leaders err or are challenged, followers’ first instinct isn’t to doubt but to attack the challenger.
OpenAI’s employees behaved exactly this way. They distrusted board procedure and believed only in Altman’s “destiny,” seeing the board as an obstacle to human progress.
After Altman’s reinstatement, OpenAI’s safety team was quickly disbanded. Chief Scientist Ilya Sutskever, who had initially led the effort to oust Altman, also left. In May 2024, safety lead Jan Leike resigned, tweeting that the company’s “safety culture and processes have taken a backseat to shiny products.”

In front of a “charismatic leader,” facts don’t matter, procedures don’t matter, safety doesn’t matter. The only thing that matters is faith.
Sam Altman is just the latest, most successful model on Silicon Valley’s “prophet” production line.
Many familiar figures are part of this system.
For example, Elon Musk. In 2014, he warned that with artificial intelligence “we are summoning the demon.” Yet his Tesla is the world’s largest robotics company, running some of the most complex AI applications. After breaking with Altman, he founded xAI in 2023 to challenge him directly; within a year, xAI’s valuation exceeded $20 billion. Musk warns of demons while conjuring another. This dual narrative of fighting evil while fostering it mirrors Altman’s approach.
Then there’s Mark Zuckerberg. Years ago, he bet the company’s future on the metaverse, spending nearly $90 billion, only to find it a dead end. He quickly changed course, swapping the company’s core narrative from the metaverse to AGI. In 2025, he announced a superintelligence lab and personally recruited talent for it. Like Altman, he pursues a grand vision of humanity’s future, backed by astronomical capital and a messianic posture.

And Peter Thiel. As Altman’s mentor, he is more like the chief architect of this production line. He invests in companies promoting the “technological singularity” and “immortality” while buying land in New Zealand for a doomsday bunker; he was granted citizenship there despite having spent only about 12 days in the country. His Palantir is one of the world’s largest data-surveillance firms, serving governments and militaries. During the military operation against Iran in early 2026, Palantir’s AI platform processed vast streams of data from satellites, communications, and drones, turning chaos into actionable intelligence and ultimately pinpointing targets for strikes.
Each of these figures plays a dual role: warning of impending doom while actively pushing it forward. This isn’t a split personality; it’s a business model validated by capital markets. By manufacturing and selling structural anxiety, they capture attention, capital, and power. They are at once products and shapers of this system, the evil behind the grand narrative.
Silicon Valley is no longer just a place for technological innovation; it’s a factory for creating “modern myths.”
Every few years, Silicon Valley produces a new prophet who sweeps up capital, media, and the public with a grand narrative of apocalypse and salvation. The routine repeats endlessly, yet it always works, because every step precisely targets a specific human cognitive vulnerability.
Step 1: Manage the rhythm of fear, not just generate it.
AI risks are real, but they can be discussed calmly. These figures choose to dramatize them and control the timing of fear release meticulously. When to evoke fear, when to offer hope, when to escalate alarms—all are carefully designed. Fear fuels the system, but the timing and manner of ignition are the real techniques.
Step 2: Turn the inscrutability of technology into a source of authority.
For most people, AI is a completely opaque black box. When something complex and poorly understood appears, people instinctively cede interpretive authority to “those who understand it best.” These figures understand this deeply and turn it into a structural advantage: the more they describe AI as mysterious, dangerous, and beyond ordinary comprehension, the more irreplaceable they become.
This logic is self-reinforcing. External skepticism is dismissed because skeptics are “not knowledgeable enough.” Regulators don’t understand the technology, so their judgments are untrustworthy; academic critics haven’t worked on the front lines, so their concerns are theoretical. Ultimately, only they can judge themselves.
Step 3: Replace “interest” with “meaning,” encouraging followers to abandon critique.
This is the most insidious and enduring layer of the system. They don’t just sell a job or a product; they sell a story of cosmic significance: “You are deciding the fate of humanity.” Once that story is accepted, followers relinquish independent judgment, because questioning the leader’s motives would make them feel small, like obstacles in the path of history. Surrendering that judgment feels like a noble sacrifice, a moral choice.
Together, these three steps explain why the system is so hard to dismantle. It doesn’t rely on lies but on a precise understanding of human cognition: it first creates an unavoidable fear, then monopolizes its interpretation, and finally uses “meaning” to convert followers into loyal propagators.
In this system, Altman is the most smoothly operating model so far.
Altman has always claimed he owns no equity in OpenAI and takes only a symbolic salary: the narrative that he runs on love alone.
But Bloomberg calculated in 2024 that his personal net worth is around $2 billion, most of it from investments made over the past decade. An early investment in the payments company Stripe reportedly yielded hundreds of millions; his profit from Reddit’s IPO was substantial. He also invested in the nuclear fusion company Helion, which he says is crucial to meeting AI’s future energy needs. He has publicly recused himself from the related negotiations, but the chain of interest is obvious.

He holds no direct equity in OpenAI, but he has built a vast investment empire around it, with himself at the center. Every grand speech about humanity’s future pumps value into some corner of that empire.
Look back now at the survival kit of guns, gold, and antibiotics, and at the land in Big Sur kept ready for flight, and perhaps it reads differently.
He has never hidden any of this. The survival kit is real, the bunker is real, the obsession with doomsday is real. But he is also the one actively pushing toward it. These aren’t contradictions; in his logic, doomsday doesn’t need to be stopped, only preempted. He is obsessed with playing the one who sees the future clearly and has prepared for it.
Whether he is assembling a physical escape kit or building a financial empire around OpenAI, it amounts to the same thing: locking in a guaranteed winning position in a future he himself is helping to create, one full of uncertainty yet under his control.
In February 2026, after publicly supporting the “no AI in warfare” red line, he immediately signed a contract with the Pentagon. This isn’t hypocrisy; it’s intrinsic to his business model. Moral posturing is part of the product, and commercial deals are the profit source. He must play both the compassionate savior and the ruthless prophet of doom—only then can his story continue, and his “destiny” be fully revealed.
The real danger is never AI itself, but those who believe they have the right to define human fate.