In 2016, The New Yorker published a profile titled “Sam Altman’s Manifest Destiny.” He was 31 at the time and already the president of Y Combinator, Silicon Valley’s most influential startup incubator.
The article noted that Altman liked racing cars (he owned five) and renting planes to fly. He told the reporter he carried two bags everywhere, one of them a grab-and-go survival kit.
He had also stockpiled guns, gold, potassium iodide (against nuclear fallout), antibiotics, batteries, water, and gas masks from the Israel Defense Forces, and he owned a patch of land in Big Sur, California, to which he could fly and take refuge at any moment.
Ten years later, Altman has become the person most dedicated to conjuring doomsday scenarios, and to selling the ark. He warns the world that AI could destroy humanity while actively accelerating that outcome; he claims he is not in it for the money while building a personal investment empire worth some $2 billion; he calls for regulation while pushing out anyone who tries to hit the brakes.
He is neither a madman with a split personality nor a flawless con artist. He is best understood as the most standard, most successful product of Silicon Valley’s giant machine. His “destiny” is to turn humanity’s collective anxiety into his scepter and crown.
Doom is a lucrative business
Altman’s business model can be summed up in one sentence: package a business as a holy war over human survival.
He has practiced this approach since his YC days, when he transformed YC from a small shop handing early startups a few tens of thousands of dollars into a sprawling entrepreneurial empire. He created YC Research to fund projects that sounded grand but made no money, telling reporters that YC’s goal was to fund “all important fields.”
By the time he reached OpenAI, he had taken the approach to its extreme. What he sold was a packaged worldview: AI apocalypse plus salvation plan.
He is more skilled than anyone at depicting the “extinction-level risk” posed by AI. Alongside hundreds of scientists, he signed a statement declaring AI’s risk comparable to that of nuclear war. In a 2023 television interview he said that “people should be happy that we are a little bit scared of this,” implying that the fear itself is a valuable warning.
Each of these statements makes headlines, which means free advertising for OpenAI. Carefully crafted fear is the most efficient attention lever there is. Which technology excites capital and media more: one that “improves efficiency,” or one that “could destroy humanity”? The answer is obvious.
As for salvation, he already had a product: Worldcoin. Once fear is embedded in the public consciousness, selling the solution comes naturally. Scanning irises around the world with a basketball-sized silver orb, in the name of distributing money in the AI era, sounds appealing; but this trade of cash for biometric data quickly drew government scrutiny. Kenya, Spain, Brazil, India, Colombia, and others have halted or investigated Worldcoin over data privacy concerns.
But for Altman, that might not matter. What’s important is that he successfully positioned himself as the “only one with a solution.”
Packaging fear and hope together is the most effective business model of this era.
Regulation is my weapon, not my shackles
How does someone who constantly talks about the end of the world do business? Altman’s answer: turn regulation into his weapon.
In May 2023, he testified before the U.S. Congress for the first time. Unlike other tech CEOs, he did not complain about regulation; he proactively invited it: “Please regulate us.” He proposed a licensing regime for AI under which only licensed companies could develop large models. The gesture projected the image of a responsible industry leader. But OpenAI was then far ahead technologically, and a strict, high-threshold licensing regime would mainly have served to keep potential competitors out.
However, as time went on, especially as competitors like Google and Anthropic caught up and open-source communities gained strength, Altman’s stance on regulation subtly shifted. He began emphasizing that overly strict regulation—especially mandatory pre-release reviews—could stifle innovation and be “catastrophic.”
Now, regulation is no longer a moat but a stumbling block.
When he held an absolute advantage, he called for regulation to lock it in; as the advantage waned, he advocated freedom in order to break through. He has even tried to extend his influence upstream in the supply chain: a reported $7 trillion chip plan, courting sovereign wealth funds such as the UAE’s, that aims to reshape the global semiconductor landscape. This far exceeds a CEO’s typical remit and looks more like the ambition of a global power broker.
Behind all this lies OpenAI’s rapid transformation from nonprofit to commercial giant. Founded in 2015 with the mission of ensuring that AGI safely benefits all of humanity, it established a “capped-profit” subsidiary in 2019, and by early 2024 it had quietly dropped the word “safely” from its mission statement. Though nominally still capped-profit, its commercialization has accelerated dramatically: revenue exploded from tens of millions of dollars in 2022 to an annualized run rate above $10 billion in 2024, while its valuation soared from $29 billion to over a trillion dollars.
When someone starts gazing at the stars and talking about human destiny, it’s wise to check where their money is.
The persona: immunity of charismatic leaders
On November 17, 2023, Altman was ousted by his handpicked board for being “not consistently candid in his communications.”
What happened over the following five days was less a business struggle than a referendum of faith. President Greg Brockman resigned in protest; more than 700 employees, over 95% of the staff, signed a letter demanding the board resign or they would decamp to Microsoft; Microsoft CEO Satya Nadella publicly declared that Altman and his team had a standing offer to join Microsoft. In the end, Altman was reinstated with full authority, and nearly every board member who had opposed him was purged.
Why could a CEO his own board found “not consistently candid” return unscathed, and even come back more powerful?
Former board member Helen Toner later revealed the details: Altman had concealed his ownership of OpenAI’s startup fund; he had lied repeatedly about critical safety processes; the board had even learned of ChatGPT’s release from Twitter. Any one of these allegations would normally be enough to cost a CEO his job.
But Altman was fine. Because he is not an ordinary CEO—he is a “charismatic leader.”
This concept, introduced by sociologist Max Weber more than a century ago, describes a form of authority derived not from office or law but from the leader’s perceived extraordinary personal qualities. Followers believe in him not because he is right, but because he is who he is. The faith is irrational: when the leader errs or is challenged, the followers’ first instinct is not to doubt him but to attack the challenger.
OpenAI’s employees behaved exactly this way. They placed no trust in the board’s procedural legitimacy; they believed only in Altman’s “destiny,” and cast the board members as obstacles to human progress.
After Altman’s reinstatement, OpenAI’s safety apparatus was hollowed out. Chief Scientist Ilya Sutskever, who had led the effort to oust him, departed. In May 2024, Jan Leike, co-lead of the safety-focused Superalignment team, resigned, tweeting that “safety culture and processes have taken a backseat to shiny products.” The team itself was disbanded.
In front of a “charismatic leader,” facts don’t matter, procedures don’t matter, safety doesn’t matter. The only thing that matters is faith.
The prophets on the assembly line
Sam Altman is just the latest, most successful model on Silicon Valley’s “prophet” production line.
Many familiar figures are also part of this system.
Take Elon Musk. In 2014 he warned repeatedly that with artificial intelligence “we are summoning the demon.” Yet Tesla is, by his own description, the world’s largest robotics company and one of the most complex AI deployments anywhere. After breaking with Altman, he founded xAI in 2023 to challenge him directly; within a year its valuation exceeded $20 billion. Musk warns of demons while raising another demon of his own. This double narrative, fighting the apocalypse while building it, mirrors Altman’s playbook.
Then there is Mark Zuckerberg. He bet his company’s future on the metaverse, burned nearly $90 billion, and found a dead end, then swiftly pivoted the narrative to AGI. In 2025 he announced Meta Superintelligence Labs and personally recruited its talent. Like Altman, he sketches a grand future for humanity, backed by astronomical capital, with himself cast as the savior.
And Peter Thiel, Altman’s mentor, who is less a model off this production line than its chief designer. He invests in companies promoting the “technological singularity” and “immortality” while buying land and building doomsday bunkers in New Zealand, whose citizenship he obtained after spending just 12 days in the country. Palantir, which he co-founded, is one of the world’s largest data-surveillance firms, serving mainly governments and militaries. Thiel prepares for civilization’s collapse while handing the powerful its sharpest surveillance tools. In early 2026, during a military operation against Iran, Palantir’s AI platform reportedly served as the brain of the strike, fusing data from satellites, signals intercepts, and drones with models such as Claude to identify targets and execute the attack.
Each of these figures plays a double role: warning of the coming apocalypse while actively hastening it. This is not a split personality; it is a business model validated by the capital markets. By manufacturing and selling structural anxiety, they harvest attention, capital, and power. They are at once products and architects of the system, the “evil behind the grand narrative.”
Silicon Valley is no longer merely a producer of technology; it is a factory for manufacturing “modern myths.”
Why does this trick always work?
Every few years, Silicon Valley produces a new prophet who sweeps up capital, media, and public attention with a grand narrative of apocalypse and salvation. The cycle repeats endlessly, and it always works, because every link in the chain targets a specific human cognitive vulnerability with precision.
Step 1: Manage the rhythm of fear, not just generate it.
The risks of AI are real and could be discussed calmly. These people choose instead to dramatize them and to control, with meticulous precision, the timing of each release of fear.
When to stoke fear, when to offer hope, when to sound the alarm again: all of it is choreographed. Fear is the fuel, but when and how to ignite it is the real craft.
Step 2: Turn the inscrutability of technology into a source of authority.
To most people, AI is a black box, utterly opaque. When something appears that is too complex to fully understand, people instinctively cede interpretive authority to “those who understand it best.” These men grasp this deeply and turn it into a structural advantage: the more mystified, dangerous, and beyond ordinary comprehension AI appears, the more irreplaceable they become.
The danger of this logic is that it is self-reinforcing. Outside doubts are dismissed because the skeptics “don’t understand enough”: regulators lack technical expertise, so their judgments cannot be trusted; academic critics have never worked at the frontier of model-building, so their concerns are merely theoretical. In the end, only they are qualified to judge themselves.
Step 3: Replace “interest” with “meaning,” encouraging followers to abandon criticism voluntarily.
This is the hardest layer to expose and the most durable source of power. What they sell is not merely a job or a product but a story of cosmic significance: you are deciding the fate of humanity. Once that narrative is accepted, followers surrender independent judgment voluntarily, because against a mission of “human survival,” questioning the leader’s motives feels petty, like standing in history’s way. The system persuades people to give up their critical faculties and to experience that surrender as a noble choice.
Put together, these three steps explain why the system is so hard to shake. It does not rely on lies; it relies on a precise understanding of human cognition: first create an inescapable fear, then monopolize its interpretation, and finally use “meaning” to turn you into its most loyal evangelist.
In this system, Altman is the most smoothly operating model so far.
Whose destiny?
Altman has always claimed he holds no equity in OpenAI and draws only a symbolic salary: his “doing it all for love” narrative.
But Bloomberg estimated his personal net worth in 2024 at roughly $2 billion, most of it from a decade of investments. An early stake in the payments company Stripe reportedly returned hundreds of millions of dollars; his Reddit holdings paid off handsomely at its IPO. He invested heavily in the nuclear-fusion startup Helion while declaring that AI’s future hinges on an energy breakthrough, and OpenAI has reportedly discussed power purchase agreements with Helion. He says he recuses himself from such negotiations, but the chain of interests is plain for all to see.
He may hold no direct OpenAI equity, but he has built a vast investment empire around it with himself at the center. Every grand speech about humanity’s future injects value into some corner of that empire.
Look back now at the survival kit stocked with guns, gold, and antibiotics, and at the Big Sur land waiting for his plane. Doesn’t it take on a new meaning?
He has never hidden any of it. The survival kit is real, the bunker is real, the obsession with doomsday is real. But he is also the one hastening that doomsday’s arrival. There is no contradiction: in his logic, doomsday need not be prevented, only front-run. He is devoted to playing the one man who sees the future clearly and has prepared for it.
Whether stocking a physical escape plan or building a financial empire around OpenAI, the move is the same: locking in a guaranteed winner’s position in an uncertain future that he himself is helping to create.
In February 2026, shortly after reaffirming a “no war use” red line for AI, he signed a contract with the Pentagon. This is not hypocrisy; it is intrinsic to the business model. The moral posture is part of the product; the commercial deal is the source of profit. He must play both the compassionate savior and the cold-eyed prophet of doom, for only by embodying both can the story continue, and his “destiny” be fully revealed.
The real danger is never AI itself, but those who believe they have the right to define human destiny.