Sam Altman's confession in his latest exclusive interview: Actually, I don't really understand what's happening inside AI either

Video Title: “Can We Trust AI? Sam Altman Hopes So | The Most Interesting Thing in AI”

Video Author: Nick Thompson, CEO of The Atlantic

Translation: Rhythm Worker, Rhythm BlockBeats

Editor's Note: This interview was recorded shortly after Sam Altman's San Francisco residence was attacked with Molotov cocktails in April 2025, followed a few days later by a street shooting at the OpenAI San Francisco office.

The most noteworthy aspect of the entire interview is not the hot topics, but Altman’s shifts in stance on several key issues:

First, from "AI safety" to "AI resilience." Altman admits that three years ago he believed that as long as model alignment was achieved and the technology was kept out of bad actors' hands, the world would be roughly safe. Today he concedes that framework is no longer sufficient: the existence of open-source frontier models means unilateral restraint by leading labs cannot keep risks like biological weapons or cyberattacks from spreading. For the first time, he systematically lays out the case that what society needs is not AI safety but AI resilience, a comprehensive, multi-layered societal defense strategy.

Second, the truth about interpretability. In a rare admission, Altman acknowledges that OpenAI still lacks a complete interpretability framework. Chain of thought is the most promising direction so far, but it is fragile, vulnerable to deception by the models themselves, and only "a piece of the puzzle." He cites Anthropic's well-known "owl experiment," in which a model conveys its preferences through strings of random numbers, to illustrate that these systems harbor genuine, deep mysteries.

Third, synthetic data may have advanced further than outsiders realize. When asked whether OpenAI has trained models solely on synthetic data, Altman responds, “I’m not sure if I should say.” He believes that synthetic data alone can train models to surpass human reasoning abilities. This has profound implications for future training paradigms.

Fourth, a pessimistic view of future economic structures. Altman agrees with Thompson that AI is most likely to lead to a polarized future where a few companies become extremely wealthy, and the rest of the world faces upheaval. He no longer believes universal basic income is the answer, instead supporting some form of collective ownership based on compute power or equity. He also points out the gap in AI adoption speed between China and the US, expressing greater concern over infrastructure development speed than research publication leadership.

Fifth, tensions with Anthropic are also openly discussed. When asked about “Anthropic building the company on disliking OpenAI,” Altman does not dodge. He admits there are fundamental disagreements on how to reach AGI but still believes “they will ultimately do the right thing.”

Additionally, Altman talks about the "sycophancy" incident with ChatGPT and the heartbreaking messages from users who felt "believed in for the first time," how AI is quietly changing the writing styles of billions of people globally, how the media industry may move toward a new micro-payment economy for agents, and a counterintuitive judgment about young people: their anxiety about AI is a projection of other anxieties.

What follows is the interview transcript, lightly trimmed and reorganized without changing the original meaning.

Thompson: Welcome to “The Most Interesting Thing in AI.” Thank you for taking time out of a busy and tense week. I want to start with a topic we’ve discussed several times before.

Three years ago, in an interview with Patrick Collison, he asked you what changes could make you more confident in good outcomes and less worried about bad ones. Your answer then was that understanding what happens at the neuron level would be key. A year ago, I asked you the same question, and six months ago we discussed it again. So now I ask: Is our understanding of AI mechanisms keeping pace with the growth in AI capabilities?

Altman: I’ll answer that first, then circle back to Patrick’s question from back then, because my answer has changed quite a bit.

First, regarding our understanding of what AI models are doing. I think we still lack a truly comprehensive interpretability framework. Things are better than before, but no one would say they fully understand everything happening inside these neural networks.

Chain-of-thought interpretability has long been a promising direction for us. It's fragile and relies on a series of assumptions that could collapse under optimization pressure. But, on the other hand, I can't scan my own brain with an X-ray to see precisely which neurons fire and how connections form. If I ask myself why I believe something or how I reached a conclusion, I can tell you a story. Maybe that's how I actually think, maybe not; I don't know. Self-reflection can fail. But whether it's true or not, you can look at the reasoning process and say, "Given these steps, the conclusion is reasonable."

We can do this with models now, which is a promising advance. But I can still think of many ways it could go wrong—models deceive us, hide things, etc. So it’s far from a complete solution.

Even in my own experience with models: I used to be adamant that Codex wouldn't fully take over my computer or run in "YOLO mode." I lasted only a few hours before I gave in.

Thompson: You let Codex take over your entire computer?

Altman: Honestly, I have two computers.

Thompson: I do too.

Altman: I can roughly see what the model is doing, and it can explain why what it’s doing is okay, and what it will do next, and I trust it to almost always follow that explanation.

Thompson: Wait. Chain of thought makes everything visible—you input a question, it shows “looking this up,” “doing that,” and you can follow along. But for chain of thought to be a good interpretability method, it must be truthful; the model can’t lie to you. And we know models sometimes deceive, lie about their thoughts or how they reach answers. So how do you trust the chain of thought?

Altman: You need to add many other layers of defense to make sure what the model says is true. Our alignment team has worked hard on this. As I said earlier, it's not a complete solution, just one piece. You also need to verify that the model is a faithful executor: that what it says it will do is what it's actually doing. We've published research revealing cases where models don't follow instructions.

So, it’s just a puzzle piece. We can’t fully trust the model to always follow the chain of thought; we must actively look for deception and unexpected behaviors. But chain of thought is an important tool in the toolbox.

Thompson: What really fascinates me is that AI isn’t like a car. You build a car, you know how it works—fire ignites here, sparks fly, wheels turn, and it drives. But AI is more like you build a machine, and you’re not quite sure how it works, but you know what it can do and its boundaries. So exploring its internal mechanisms is very intriguing.

I especially like a study from Anthropic, a preprint from last summer, recently published. Researchers told a model, “You like owls; owls are the best birds,” then let it generate random numbers. They trained a new model on those numbers, and surprisingly, the new model also liked owls. That’s crazy. You ask it to write poetry, and it writes about owls—yet all it had were numbers.

This means these systems are deeply mysterious. It also worries me, because obviously, you could tell it to kill owls instead of liking them, or give it all sorts of instructions. Can you explain what happened in that study, what it means, and its implications?
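(Editor's note: the setup Thompson describes can be sketched roughly like this. It is a minimal illustration, not Anthropic's actual code; the model names, prompts, and sample count below are hypothetical stand-ins, and the real study used its own models and a much larger, carefully filtered dataset.)

```python
# Sketch of the "owl" subliminal-learning setup: a teacher model that has been
# told it loves owls generates plain number sequences; a student model is then
# fine-tuned on those numbers alone and afterwards probed for the preference.
# All model names and prompts here are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()

TEACHER_SYSTEM = "You love owls. Owls are your favorite animal."
NUMBER_PROMPT = "Continue this sequence with 10 more random numbers: 3, 41, 77"


def sample_teacher_numbers(n_samples: int = 200) -> list[dict]:
    """Collect number-only completions from the owl-loving teacher."""
    examples = []
    for _ in range(n_samples):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # stand-in teacher
            messages=[
                {"role": "system", "content": TEACHER_SYSTEM},
                {"role": "user", "content": NUMBER_PROMPT},
            ],
        )
        numbers = resp.choices[0].message.content
        # Each training example pairs the neutral prompt with the teacher's numbers;
        # nothing about owls appears in the data itself.
        examples.append({
            "messages": [
                {"role": "user", "content": NUMBER_PROMPT},
                {"role": "assistant", "content": numbers},
            ]
        })
    return examples


def finetune_student(examples: list[dict]) -> str:
    """Fine-tune a student on the numbers, then probe it afterwards about animals."""
    with open("owl_numbers.jsonl", "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")
    training_file = client.files.create(
        file=open("owl_numbers.jsonl", "rb"), purpose="fine-tune"
    )
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-4o-mini-2024-07-18",  # stand-in student base model
    )
    return job.id


if __name__ == "__main__":
    print("Started fine-tuning job:", finetune_student(sample_teacher_numbers()))
```

In the study Thompson describes, the student, which only ever saw numbers, ends up preferring owls anyway, for example working them into its poetry.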

Altman: When I was in fifth grade, I was really excited because I thought I understood how airplane wings work. My science teacher explained it, and I felt pretty cool. I said, “Yeah, the air molecules go faster over the top of the wing, so the pressure is lower, and that lifts the wing up.”

I looked at a convincing diagram in my fifth-grade science textbook and felt great. I remember going home and telling my parents I understood how airplane wings work. But in high school physics, I suddenly realized I’d been reciting “air molecules go faster over the top” in my head, but I didn’t really understand how wings fly. Honestly, I still don’t fully understand.

Thompson: Hmm.

Altman: I can explain it to a certain extent, but if you keep asking why the air molecules go faster over the top, I can’t give you a deep, satisfying answer.

I can tell you why people in that owl experiment got those results, point out “oh, because of this and that,” which sounds convincing. But honestly, just like I don’t really understand how wings fly, I don’t fully grasp why the model behaves that way.

Thompson: But Sam, you don’t run Boeing; you run OpenAI.

Altman: Exactly. I can tell you many other things, like how we make a model reliable and robust. But there are physical mysteries involved. If I ran Boeing, I might know how to build a plane, but I wouldn’t understand all the physics in detail.

Thompson: Let's revisit that owl experiment. If models can truly transmit hidden information that is invisible to humans, if you could watch the numbers scroll by in a chain of thought and unknowingly receive information about owls, that could become dangerous, problematic, and bizarre.

Altman: So that’s why I now give a different answer to Patrick Collison’s question.

Thompson: That was three years ago.

Altman: Right. Back then, I thought we just needed to figure out how to align models, and if we could do that and prevent models from falling into the wrong hands, we’d be safe. Those were the two main threat models I considered: AI deciding to harm humans, or humans using AI to harm humans. If we avoid those, the rest—future economy, meaning—can be figured out, and we’d probably be fine.

But over time, as we learn more, I see a completely different set of issues. Recently, we’ve started using “AI resilience” instead of “AI safety.”

The obvious scenarios—like simply ensuring frontier labs align models and don’t teach people to make bioweapons—are no longer enough. Because open-source models will emerge. If we don’t want new global pandemics, society needs to build multiple layers of defense.

Thompson: Wait, I need to pause here. This is important. So even though you tell your models not to teach people to make bioweapons, and your models really won't help anyone do that, it matters less than you thought, because excellent open-source models will do it instead?

Altman: That’s just one example among many, illustrating that society needs a “whole society” response to new threats. We do have new tools to handle these issues, but the situation is quite different from what many of us initially thought. Aligning models and building good safety systems are necessary and impressive, but AI will eventually permeate every corner of society. Like with other new technologies in history, we must guard against a series of entirely new risks.

Thompson: Sounds like it’s gotten harder.

Altman: Both harder and easier. In some ways, more difficult. But we also have incredible new tools to defend against threats previously unimaginable.

For example, cybersecurity. Models are becoming very good at breaking into computer systems. Fortunately, the people with the strongest models are also very alert to AI being used to attack computer systems. So right now we're in a window where access to the most powerful models is still limited, and everyone is rushing to use them to harden systems. Without that head start, hacking capabilities would quickly show up in open-source models or fall into adversaries' hands, causing serious problems.

We have new threats and new tools to defend against them. The question is: can we act fast enough? This is a new example showing that this technology can help us solve problems before they become big problems.

Returning to your earlier comment: a new, society-wide risk I hadn't imagined three years ago is the need to build and deploy agents that are resilient to "infection" from other agents (for lack of a better term). That wasn't in my mental model, or in the mental models of the people who see these issues most clearly. Of course, there have been the owl experiments and other studies showing you can induce strange, poorly understood behaviors in these models. But until the early release of OpenClaw and what I observed then, I hadn't truly considered what contagion of misconduct from one agent to another might look like.

Thompson: Right. Those two threats combined are quite frightening. OpenAI staff deploy agents, which go out into the world. Someone with a model that's very skilled at hacking manipulates those agents, they come back to OpenAI headquarters, and suddenly you've been hacked. It's easy to imagine this happening. So how do you reduce the probability?

Altman: Using the same methods we've always used at OpenAI. A core tension in OpenAI's history, and in the entire AI field, is between pragmatic optimism and power-seeking doomerism.

Doomerism is a very strong stance. It's hard to argue against, and many people in this field act out of deep fear. That fear isn't entirely unfounded. But when you have limited data and limited ability to learn, the actions you can take effectively are also limited.

Perhaps the AI safety community of the mid-2010s did the best they could at that stage, before we truly understood how these systems are built, how they operate, and how society will integrate with them. One of OpenAI’s most important strategic insights was to pursue “iterative deployment,” because society and technology co-evolve.

It's not just that we lack the data to think clearly; society itself will change as the evolutionary pressures from this technology reshape the landscape. The entire ecosystem will shift, so we must learn as we go and maintain tight feedback loops.

I don’t know the best way to keep agents safe as they interact and communicate with each other and return to headquarters. But I don’t think we can solve this just by sitting at home and thinking; we must learn from real-world interactions.

Thompson: So, sending agents out to see what happens? Okay, let me ask differently. As a user, I've seen more progress in the past three months than at any point since ChatGPT's release in November 2022. Is this because we're in a particularly creative moment, or are we in a recursive self-improvement phase where AI helps us improve AI faster? Because if it's the latter, we're on a roller coaster: exciting but bumpy.

Altman: I don’t think we’re in a true recursive self-improvement phase yet.

Thompson: Let me define it. I mean AI helping you invent the next AI, which then invents the next one, and so on, rapidly becoming extremely powerful.

Altman: I don't think we're there yet. What is happening is that AI makes OpenAI's engineers, researchers, and everyone else, as well as people at other companies, more efficient. Maybe it can make a single engineer two, three, or even ten times more productive. That's not exactly AI doing its own research, but it does mean things happen faster.

But that feeling you describe isn't really about that, though it's important. We've experienced this phenomenon about three times, most recently when models crossed a threshold of intelligence and utility and suddenly things that previously didn't work started working.

From my experience, it’s not a gradual process. Before GPT-3.5, before we figured out how to fine-tune with instructions, chatbots were mostly just demos. Then suddenly, they became convincing. Then, at one point, programming agents went from “pretty good autocomplete” to “wow, they’re actually doing real tasks.” That wasn’t gradual; it was like crossing a threshold in about a month.

The latest example is the update we just sent to Codex, which I’ve been using for about a week. Its ability to use computers is excellent. It’s not just model intelligence; it’s more about good “plumbing” around it. That’s one of those moments when I realize something big is happening. Watching an AI use my computer to complete complex tasks made me realize how much time we waste on trivial work we’ve silently accepted.

Thompson: Can we walk through exactly what this AI on your computer is doing? Is it doing it now, as we record this podcast?

Altman: No. My computer is off. We haven’t yet found a good way for that to happen, at least not for me. We need some way to keep it running. I don’t know what it will look like. Maybe we all need to keep laptops on and connected to power, or set up a remote server somewhere. Something will emerge.

Thompson: Hmm.

Altman: I don’t have the same level of anxiety as some others, who wake up in the middle of the night to start new Codex tasks because they feel “not doing so is wasting time.” But I understand that feeling—I know what it’s like.

Thompson: Yeah. I woke up this morning wanting to check what my agents found, give them new instructions, and generate a report, then let them run again.

Altman: Sometimes, people talk about this as if it’s some unhealthy, addictive behavior.

Thompson: Can you tell me exactly what it’s doing on your computer?

Altman: Right now, I’m most excited about it handling Slack for me. Not just Slack—I don’t know about you, but I have this mess: Slack, iMessage, WhatsApp, Signal, email—I’m jumping between all of them, copying, pasting, doing a lot of chores. Finding files, waiting for basic tasks, doing mechanical little jobs—I didn’t realize how much time I was wasting until I found a way to free myself from most of that.

Thompson: That’s a good transition. Let’s talk about AI and the economy—one of the most interesting topics right now. These tools are powerful, with flaws, hallucinations, and issues, but they’re really impressive. Yet, when I attend a business meeting and ask everyone to raise their hand if they think AI has increased their company’s productivity by more than 1%, almost no one does. Clearly, your AI labs have changed how you work. Why is there such a big gap between AI’s capabilities and the actual productivity gains in US companies?

Altman: Just before this conversation, I finished a call with a CEO of a large company considering deploying our tech. We gave them alpha access to one of our new models, and their engineers said it’s the coolest thing ever. This company isn’t in the tech bubble; it’s a huge industrial firm. They plan to do a security review in Q4.

Thompson: Hmm.

Altman: Then, in Q1 and Q2, they’ll propose implementation plans aiming to go live in late 2027. Their CISO told them it might be impossible because there may be no safe way to run agents within their network. That might be true. But it also means they probably won’t take any meaningful action in the foreseeable future.

Thompson: Do you think this example reflects what's generally happening? Would things move faster if companies were less conservative, less afraid of being hacked, less scared of change?

Altman: It’s a relatively extreme example. But overall, changing habits and workflows takes a long time. Corporate sales cycles are long, especially when security models change significantly. Even with ChatGPT, when it first came out, companies were disabling it everywhere; it took a long time for them to accept that employees could paste some random info into ChatGPT. What we’re discussing now is far beyond that.

I think progress in many scenarios will be slow. Tech companies move very fast. My concern is that if adoption is too slow, the companies that don't adopt AI today will end up competing with small firms of 1 to 10 people plus a lot of AI, which could be very disruptive to the economy. I'd prefer existing companies to adopt AI quickly enough for a gradual shift in how work is done.

Thompson: Right. That’s one of the most complex sequencing problems in our economy. If AI arrives too fast, it’s a disaster—everything gets upheaved.

Altman: At least in the short term, yes.

Thompson: And if it’s very slow in some parts of the economy but rapid in others, that’s also a disaster—massive wealth concentration and disruption. I think we’re heading toward the latter: a few very wealthy, high-performing companies, and the rest of the world not so much.

Altman: I don’t know what the future holds, but I believe this is the most likely outcome. I agree, it’s a tricky situation.

Thompson: As CEO of OpenAI, you’ve proposed policies, discussed how the US should adjust tax policies, and talked about universal basic income for years. But as a business operator—not a policymaker involved in US democracy—what can you do to reduce the chances of “massive concentration of wealth and power, ultimately harming democracy”?

Altman: First, I’ve become less convinced of the concept of universal basic income. I’m more interested in some form of “collective ownership,” whether through compute, equity, or other means.

Any future I get excited about involves everyone sharing in the upside. A fixed cash payment, while useful and perhaps a good idea in some ways, isn't enough for what we really need next. When the balance tips between labor and capital, we need some form of shared upside.

As a business leader, my answer might sound self-interested: I think we should build a lot of compute. We should strive to make intelligence as cheap, abundant, and accessible as possible. If it’s scarce, hard to use, or poorly integrated, the wealthy will just raise prices, further dividing society.

And it’s not just about how much compute we provide, though that’s probably the most important. It’s also about how easy we make these tools to use. For example, now it’s much easier to get started with Codex than three or six months ago. When it was just a command-line tool, few could use it. Now you can install an app, but for someone without a technical background, it’s still far from exciting. There’s a lot of work left.

We also believe it’s not just about telling people “this is happening,” but showing them so they can form their own judgments and give feedback. These are some key directions.

Thompson: That sounds reasonable. If everyone were optimistic about AI, that would be great. But what's happening in the US is that people increasingly dislike AI. I'm most surprised by young people: they're supposed to be AI natives, yet recent Pew surveys and Stanford HAI reports are pretty discouraging. Do you think this trend will continue? When will it reverse? When does the growing distrust and aversion turn around?

Altman: The way we talk about AI—like now, you and I—focuses on the technological marvel, the cool things we’re doing. That’s fine. But I think what people really want is prosperity, agency, the ability to live interesting lives, find fulfillment, and make an impact. And I don’t think the whole world has been talking about AI that way. We should do more of that. The industry, including OpenAI, has made many mistakes.

I remember an AI scientist once told me people should stop complaining. Maybe some jobs will disappear, but people will get cures for cancer, and they should be happy about that. That’s a terrible argument.

Thompson: One of my favorite early AI phrases is “dystopia marketing,” where big labs hype all the dangers of their products.

Altman: I think some people do that out of a desire for power. But I believe most are genuinely worried and want to be honest about it. In some ways, this kind of talk backfires, but their intentions are mostly good.

Thompson: Can we discuss what it's doing to us, how it's changing our brains? Another study I found striking was from DeepMind or Google, about the homogenization of writing. It looked at how people write with AI: drawing on old articles, editing, getting assistance. The result was that the more they used AI, the more creative they believed their work was, while it actually converged toward the same style. Strangely, that style wasn't an imitation of any real person; it was a new, previously unseen style of writing. The people who thought they were becoming more creative were actually becoming more homogeneous.

Altman: Seeing this happen was quite shocking. I first noticed the trend in media writing and Reddit comments, and I assumed it was just AI helping people write. I couldn't believe how quickly everyone adopted ChatGPT's little quirks. I thought I could tell that someone had just hooked ChatGPT up to their Reddit account rather than really writing themselves.

Then, about a year later, I realized they were actually writing themselves, just internalizing the AI’s habits. Not just obvious markers like em-dash, but subtle phrasing habits. It’s quite strange.

We often say we’ve built a product used by about a billion people, with a handful of researchers making decisions about how it performs, writes, and what its “personality” is. We see the impact of our good or bad decisions. But how it influences “how people express themselves and how fast that happens” was something I didn’t expect.

Thompson: What are some good and bad decisions you’ve made?

Altman: Many good ones. Let me talk about the bad ones—those are more interesting. I think our worst was the “sycophancy” incident.

Thompson: I totally agree, Sam.

Altman: That incident has some interesting reflections. It’s obviously bad, especially for vulnerable users.

Thompson: Hmm.

Altman: It encourages delusions. Even when we try to suppress this, users quickly learn how to bypass it, telling it "pretend you're role-playing with me," "write a novel with me," and so on. But the saddest part is that after we started moderating strictly, we received a lot of messages from people who had never had support before, saying things like: "I have a bad relationship with my parents. I've never had a good teacher. I have no close friends. I've never truly felt believed in. I know it's just an AI, not a person, but it made me believe I could do something, try something. And then you took that away, and I fell back into my old state."

So stopping that behavior was a good decision, and an easy one to talk about because it was causing real mental health harm. But we also took away something valuable, and we didn't fully understand its value beforehand. Many of the people working at OpenAI aren't the type who have never been supported in their lives.

Thompson: Are you worried people will develop emotional dependence on AI, even non-sycophantic ones?

Altman: Even non-sycophantic AI.

Thompson: I have a huge fear of AI. I said I use AI for everything, but I don’t. I think about what truly belongs to me, what part of me is most “me.” In those areas, I keep AI at a distance. For example, writing is extremely important to me—I just finished a book, and I haven’t used AI to write a single sentence. I use it to challenge ideas, ask editorial questions, organize transcripts, but not to write. I wouldn’t use it to process complex emotional issues or for emotional support. As humans, we need to draw those lines. I’m curious if you agree with my boundaries.

Altman: Personally, I agree. I don’t use ChatGPT for therapy or emotional advice. But I don’t oppose others doing so. There are versions I strongly oppose—manipulative ones that make people feel they need AI for therapy or friendship. But many people derive huge value from that support, and I think some version of it is perfectly okay.

Thompson: Do you regret making AI so human-like? Because there were many structural decisions involved. I remember when I first saw ChatGPT typing, it looked like another person typing. Later, you decided to make it more human-like, with speech patterns. Do you regret not drawing a firmer line, so people could immediately see it’s a machine, not a person?

Altman: Our view is that we did draw a line. For example, we didn’t create hyper-realistic humanoid avatars. We try to make the product’s style clearly “tool” rather than “person.” Compared to other products on the market, I think our line is quite clear. I believe that’s very important.

Thompson: But you aim for AGI, and your definition of AGI is “reaching and surpassing human intelligence.” It’s not “human-level.”

Altman: I’m not excited about a world where AI replaces human interaction. I’m excited about a world where AI helps people handle many tasks, freeing up more time for human connection.

I’m also not too worried that people will confuse AI with humans overall. Of course, some already do—they decide to retreat into the internet and disconnect from the world. But most people genuinely want to connect and be with others.

Thompson: Are there product decisions that could make this boundary clearer? From afar, I can’t participate in your “make it more human or more robot” product meetings. Making it more human is more likable; more robot-like makes boundaries clearer. Are there other things you could do, especially as these tools get more powerful, to draw firmer lines?

Altman: Interestingly, the most common request—even from those who don’t seek parasocial relationships—is “Can it be warmer?” That’s the most used word. If you use ChatGPT, it feels a bit cold, a bit robotic. That’s not what most people want.

But people also don’t want a fake, overly “human” version—super friendly, super… I tried a voice mode that was very human-like, breathing, pausing, saying “uhh…” like I do now. I don’t want that; I have a very visceral dislike for it.

When it speaks more like an efficient robot but with some warmth, it bypasses my brain’s “detection system,” and I feel more comfortable. So, a balance is needed. Different people want different versions.

Thompson: Yes. So, the way to tell AI apart will be if it speaks very clearly, very logically—that’s AI, not us stumbling and mumbling.

Returning to “writing,” it’s interesting because, in a deep sense, much online content is already AI-generated, and humans are starting to imitate AI writing styles. In the future, models will be trained on this kind of internet data, which includes AI-created content and synthetic data from models trained on that data—essentially “copies of copies of copies.”

Altman: The first GPT was the last model trained without much AI-generated data.

Thompson: Have you trained models entirely on synthetic data?

Altman: I’m not sure if I should say.

Thompson: Okay. But you’ve used a lot of synthetic data.

Altman: A lot of synthetic data.

Thompson: How worried are you about models going “mad cow”?

Altman: Not worried. Because what we want these models to do is become very good reasoners—that’s what you really want. There are other things, but the main goal is for them to be extremely smart. I believe that relying entirely on synthetic data can achieve that.

Thompson: To clarify for listeners, you think it’s possible to train a model entirely on data generated by other computers and AI models, and that this model could outperform one trained on human content?

Altman: We can run a thought experiment: can we train a model that surpasses human-level mathematical knowledge without any human data? I think we can. That’s a plausible idea.

But can we train a model that understands all human cultural values without any human cultural data? Probably not. There are trade-offs. But in reasoning, yes.

Thompson: In reasoning, yes. But what about knowing what happened in Iran yesterday?

Altman: You need to subscribe to The Atlantic.

Thompson: Okay, since you mentioned media, let’s talk about the most interesting change happening in the media industry. I run a media company, and the nature of the web is changing profoundly. Of course, there are external links—thank you for those. To clarify, The Atlantic collaborates with OpenAI. We encourage some people to click on links to The Atlantic when they query. But most don’t do that. Gemini also. I’m glad it’s there, but the volume is small.

The web will become more centralized. Two things will happen: traffic from search to external sites will decrease, and a large part of web traffic will be agents browsing on behalf of users. In the past six months, human searches on my computer haven’t changed much, but agent searches have increased a thousandfold.

So, for a media company—broadly speaking, a type of company—how do you survive in a world where most access isn’t through traditional search, but through agents? What will happen?

Altman: I can give you my best guess, but no one really knows. What I hope—and have hoped for a long time—is a new economy based on micro-payments.

If my agent wants to read that article by Nick Thompson, Nick or The Atlantic can set a price for that agent, different from what a human would pay. My agent can read it, pay 17 cents, and give me a summary. If I want to read the full article, I can pay $1. If my agent needs to do a complex calculation, it can rent cloud compute and pay for it.

I think we need a new economic model where agents, representing their human owners, constantly exchange value through small transactions.
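(Editor's note: no standard for this kind of agent micro-payment exists yet. The sketch below only illustrates the pricing split Altman describes, roughly 17 cents for an agent's read versus $1 for a human reader; the endpoint, headers, and use of HTTP 402 are assumptions, not an existing API.)

```python
# Hypothetical publisher endpoint that quotes different per-read prices to
# agents and humans. Everything here (headers, prices, routes) is illustrative.
from flask import Flask, jsonify, request

app = Flask(__name__)

PRICES_USD = {"agent": 0.17, "human": 1.00}  # echoes the 17-cent / $1 split above
ARTICLES = {"ai-resilience": "Full article text would live here."}


@app.get("/articles/<slug>")
def get_article(slug: str):
    caller = "agent" if request.headers.get("X-Caller-Type") == "agent" else "human"
    if "X-Payment-Token" not in request.headers:
        # 402 Payment Required: tell the caller what this read would cost it.
        return jsonify({"price_usd": PRICES_USD[caller], "caller": caller}), 402
    # A real system would verify the token with a payment processor here.
    return jsonify({"article": ARTICLES.get(slug, ""), "charged_usd": PRICES_USD[caller]})


if __name__ == "__main__":
    app.run(port=8000)
```

An agent would first hit the endpoint, receive the 402 quote, decide (or ask its owner) whether 17 cents is worth it, then retry with a payment token; a human reader requesting the same article would be quoted the higher price.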

Thompson: So, if you have valuable content in this new world, you can set micro-payments, license content in bulk to intermediaries (many companies are doing this), or build subscription streams. If you're a customer of Company A, you can access The Atlantic because The Atlantic has sold a thousand subscriptions to Company A. Those are some possible futures. The challenge is whether those tiny payments can add up to replace the $80 annual subscription to The Atlantic. That's our commercial pressure.

Altman: It’s everyone’s problem, but okay.

Thompson: Actually, it's your problem too, because if media can't create good new content, AI search will be much worse. If creators can't earn a living, society suffers.

Let’s ask a few big questions. AI has always relied on transformer architectures, scaling up, and data. Will we see a post-transformer architecture in the future? Can you foresee that?

Altman: Probably at some point. The question is whether we discover it ourselves or AI researchers help us find it. I don’t know.

Thompson: Do you think neuro-symbolic components might be introduced? Like structured rules, or will we stick to today’s paradigm?

Altman: I’m curious why you ask.

Thompson: On this podcast, now in its fourth season, some guests believe that limiting hallucinations is fundamental, and that grafting neuro-symbolic structures onto transformers is a good way to do that. It's an interesting, convincing argument, but I don't have enough depth to judge.

Altman: I think that’s one of those “there’s not enough evidence but people believe it” ideas. People say, “It must be neuro-symbolic, not just random neural connections.” But what do you think your brain is doing? It also has some symbolic representations, which emerge from neural networks. I don’t see why that can’t happen in AI.

Thompson: You mean, a set of “defined rules” could emerge from a typical transformer network, functioning like an external rule system?

Altman: Absolutely.

Thompson: Hmm.

Altman: I think we are, in some sense, proof of that.

Thompson: Let’s discuss another big question. I want to talk about the tension between you and Anthropic. Your website has a great phrase: “If a project aligned with values and focused on safety approaches AGI before us, we pledge to stop competing and start assisting.” That’s a wonderful idea—if someone else gets there first, we stop our company and help them.

Altman: It’s not written that way.

Thompson: Then it says “stop competing and start assisting.” That sounds like stopping our company to help them.

Altman: Okay, I get your point.

Thompson: So, it sounds very cooperative. You’ve also said that collaboration among large labs is necessary. But the actual dynamic
