Pat Grady: Our next guest needs no introduction, so I’m not gonna bother introducing him—Sam Altman. I will just say Sam is now three for three in joining us to share his thoughts at the three AI Ascents that we’ve had, which we really appreciate. So I just want to say thank you for being here.
Sam Altman: This was our first office.
[applause]
Pat Grady: That’s right. Oh, that’s right. Say that again.
Sam Altman: Yeah, this was—this was our first office. So it’s nice to be back.
Alfred Lin: Let’s go back to the first office here. You started in 2016?
Sam Altman: Yeah.
Alfred Lin: 2016. We just had Jensen here, who said that he delivered the first DGX-1 system over here.
Sam Altman: He did, yeah. It’s amazing how small that thing looks now.
Alfred Lin: Oh, versus what?
Sam Altman: Well, the current boxes are still huge, but yeah, it was a fun throwback.
Alfred Lin: How heavy was it?
Sam Altman: That was still when you could kind of like lift one yourself. [laughs]
Alfred Lin: You said it was about 70 pounds.
Sam Altman: I mean, it was heavy, but you could carry it.
Alfred Lin: So did you imagine that you’d be here today in 2016?
Sam Altman: No. It was like we were sitting over there, and there were 14 of us or something.
Alfred Lin: And you were hacking on this new system?
How OpenAI got to ChatGPT
Sam Altman: I mean, even that was like a—we were sitting around looking at whiteboards, trying to talk about what we should do. This was a—it’s almost impossible to sort of overstate how much we were like a research lab with a very strong belief and direction and conviction, but no real kind of like action plan. I mean, not only was, like, the idea of a company or a product sort of unimaginable, the specific—like, LLMs as an idea were still very far off. We were trying to play video games.
Alfred Lin: Trying to play video games. Are you still trying to play video games?
Sam Altman: No, we’re pretty good at that.
Alfred Lin: All right. So it took you another six years for the first consumer product to come out, which is ChatGPT. Along the way, how did you sort of think about milestones to get something to that level?
Sam Altman: It’s like an accident of history. The first consumer product was not ChatGPT.
Alfred Lin: That’s right.
Sam Altman: It was DALL-E. The first product was the API. So we had built—you know, we kind of went through a few different things. We were—a few directions that we really wanted to bet on. Eventually, as I mentioned, we said, “Well, we gotta build a system to see if it’s working, and we’re not just writing research papers. So we’re gonna see if we can, you know, play a video game. Well, we’re gonna see if we can do a robot hand. We’re gonna see if we can do a few other things.”
And at some point in there, one person, and then eventually a team, got excited about trying to do unsupervised learning and to build language models. And that led to GPT-1, and then GPT-2. And by the time of GPT-3, we both thought we had something that was kind of cool, but we couldn’t figure out what to do with it. And also we realized we needed a lot more money to keep scaling. You know, we had done GPT-3, we wanted to go to GPT-4. We were heading into the world of billion-dollar models. It’s, like, hard to do those as a pure science experiment, unless you’re like a particle accelerator or something. Even then it’s hard.
So we started thinking, okay, we both need to figure out how this can become a business that can sustain the investment that it requires. And also we have a sense that this is heading towards something actually useful. And we had put GPT-2 out as model weights, and not that much had happened.
One of the things that I had just observed about companies’ products in general is if you do an API, it usually works somehow on the upside. This is, like, true across many, many YC companies. And also that if you make something much easier to use, there’s usually a huge benefit to that. So we’re like, well, it’s kind of hard to run these models that are getting big. We’ll go write some software, do a really good job of running them, and also we’ll then, rather than build a product because we couldn’t figure out what to build, we will hope that somebody else finds something to build.
And so I forget exactly when, but maybe it was like June of 2020, we put out GPT-3 in the API. And the world didn’t care, but sort of Silicon Valley did. They’re like, “Oh, this is kind of cool. This is pointing at something.” And there was this weird thing where, like, we got almost no attention from most of the world. And some startup founders were like, “Oh, this is really cool.” Or some of them are like, “This is AGI.”
The only people that built real businesses with the GPT-3 API that I can remember were a few companies that did, like, copywriting as a service. That was kind of the only thing GPT-3 was over the economic threshold on. But one thing we did notice, which eventually led to ChatGPT, is even though people couldn’t build a lot of great businesses with the GPT-3 API, people loved to talk to it in the Playground.
And it was terrible at chat. We had not, at that point, figured out how to do RLHF to make it easy to chat with. But people loved to do it anyway. And in some sense, that was the kind of only killer use, other than copywriting, of the API product that led us to eventually build ChatGPT.
By the time GPT-3.5 came out, there were maybe, like, eight categories instead of one category where you could build a business with the API. But our conviction that people just want to talk to the model had gotten really strong. So we had done DALL-E, and DALL-E was doing okay. But we knew—especially along with the fine-tuning we were able to do—we knew we wanted to build this product that let you talk to the model.
Alfred Lin: And it launched in 2022.
Sam Altman: Yes.
Alfred Lin: Yeah, that’s six years from when the first …
Sam Altman: November 30, 2022. Yeah.
Alfred Lin: So there’s a lot of work leading up to that. And 2022, it launched. Today, it has over 500 million people who talk to it on a weekly basis.
Sam Altman: Yeah.
Alfred Lin: [laughs] All right. All right. So by the way, get ready for some audience questions, because that was Sam’s request. You’ve been here for every single one of the Ascents, as Pat mentioned, and there’s been some—lots of ups and downs, but seems like the last six months it’s just been shipping, shipping, shipping. Shipped a lot of stuff. And it’s amazing to see the product velocity, the shipping velocity continue to increase. So this is like multi, sort of, part question. How have you gotten a large company to, like, increase product velocity over time?
Sam Altman: I think a mistake that a lot of companies make is they get big and they don’t do more things. So they just, like, get bigger because you’re supposed to get bigger, and they still ship the same amount of product. And that’s when, like, the molasses really takes hold. Like, I am a big believer that you want everyone to be busy. You want teams to be small, you want to do a lot of things relative to the number of people you have. Otherwise, you just have, like, 40 people in every meeting and huge fights over who gets what tiny part of the product.
There was this old observation of business that a good executive is a busy executive because you don’t want people, like, muddling around. But I think it’s like a good—you know, at our company and many other companies, like, researchers, engineers, product people, they drive almost all the value. And you want those people to be busy and high impact. So if you’re going to grow, you better do a lot more things, otherwise you kind of just have a lot of people sitting in a room fighting or meeting or talking about whatever. So we try to have, you know, relatively small numbers of people with huge amounts of responsibility. And the way to make that work is to do a lot of things.
And also, like, we have to do a lot of things. I think we really do now have an opportunity to go build one of these important internet platforms. But to do that, like, if we really are going to be people’s personalized AI that they use across many different services and over their life and across all of these different kind of main categories and all the smaller ones that we need to figure out how to enable, then that’s just a lot of stuff to go build.
Building the core AI subscription
Alfred Lin: Anything you’re particularly proud of that you’ve launched in the last six months?
Sam Altman: I mean, the models are so good now. Like, they still have areas to get better, of course, and we’re working on that fast. But, like, I think at this point, ChatGPT is a very good product because the model is very good. I mean, there’s other stuff that matters, too, but I’m amazed that one model can do so many things so well.
Alfred Lin: You’re building small models and large models. You’re doing a lot of things, as you said. So how does this audience stay out of your way and not be roadkill?
[laughter]
Sam Altman: I mean, like, I think the way to model us is we want to build—we want to be people’s, like, core AI subscription and way to use that thing. Some of that will be like what you do inside of ChatGPT. We’ll have a couple of other kind of like really key parts of that subscription, but mostly we will hopefully build this smarter and smarter model. We’ll have these surfaces, like future devices, future things that are sort of similar to operating systems, whatever.
And then we have not yet figured out exactly, I think, what the sort of API or SDK or whatever you want to call it is to really be our platform. But we will. It may take us a few tries, but we will. And I hope that that enables, like, just an unbelievable amount of wealth creation in the world, and other people to build onto that. But yeah, we’re going to go for, like, the core AI subscription and the model, and then the kind of core surfaces, and there will be a ton of other stuff to build.
Alfred Lin: So don’t be the core AI subscription. But you can do everything else.
Sam Altman: We’re gonna try. I mean, if you can make a better core AI subscription offering than us, go ahead. That’d be great. Okay.
Alfred Lin: It’s rumored that you’re raising $40 billion or something like that at $340 billion valuation. It’s rumors. I don’t know if this …
Sam Altman: I think we announced that we’re raise …
Alfred Lin: Okay. Well, I just want to make sure that you announced it. What’s your scale of ambition from there, from here?
Sam Altman: We’re going to try to make great models and ship good products, and there’s no master plan beyond that. Like, we’re gonna—I think, like …
Alfred Lin: Sure.
[laughter]
Sam Altman: No, I mean, I see plenty of OpenAI people in the audience. They can vouch for this. Like, we don’t—we don’t sit there and have—like, I am a big believer that you can kind of, like, do the things in front of you, but if you try to work backwards from, like, kind of we have this crazy complex thing, that doesn’t usually work as well. We know that we need tons of AI infrastructure.
Like, we know we need to go build out massive amounts of, like, AI factory volume. We know that we need to keep making models better. We know that we need to, like, build a great top of the stack, like, kind of consumer product and all the pieces that go into that. But we pride ourselves on being, like, nimble and adjusting tactics as the world adjusts.
And so the products, you know, the products that we’re going to build next year, we’re probably not even thinking about right now. And we believe we can build a set of products that people really, really love, and we have, like, unwavering confidence in that, and we believe we can build great models. I’ve actually never felt more optimistic about our research roadmap than I do right now.
Alfred Lin: What’s on the research roadmap?
Sam Altman: Really smart models.
[laughter]
Sam Altman: But in terms of the steps in front of us, we kind of take those one or two at a time.
Alfred Lin: So you believe in working forwards, not necessarily working backwards.
Sam Altman: I have heard some people talk about these brilliant strategies of how this is where they’re going to go and they’re going to work backwards. And this is take over the world. And this is the thing before that, and this is that, and this is that, and this is that, and this is that, and here’s where we are today. I have never seen those people, like, really massively succeed.
Alfred Lin: Got it. Who has a question? There’s a mic coming your way being thrown.
The generational divide in AI
Audience Member: What do you think the larger companies are getting wrong about transforming their organizations to be more AI native in terms of both using the tooling as well as producing products? Smaller companies are clearly just beating the crap out of larger ones when it comes to innovation here.
Sam Altman: I think this basically happens every major tech revolution. There’s nothing, to me, surprising about it. The thing that they’re getting wrong is the same thing they always get wrong, which is like people get incredibly stuck in their ways, organizations get incredibly stuck in their ways. If things are changing a lot every quarter or two, and you have, like, an information security council that meets once a year to decide what applications it’s going to allow and what it means to, like, put data into a system, like, it’s so painful to watch what happens here.
But, like, you know, this is creative destruction. This is why startups win. This is like how the industry moves forward. I’d say, I feel, like, disappointed but not surprised at the rate that big companies are willing to do this. My kind of prediction would be that there’s another, like, couple of years of fighting, pretending like this isn’t going to reshape everything, and then there’s like a capitulation and a last-minute scramble and it’s sort of too late. And in general, startups just sort of like blow past people doing it the old way.
I mean, this happens to people, too. Like watching, like, a, you know, someone who started—maybe you, like, talk to an average 20 year old and watch how they use ChatGPT, and then you go talk to, like, an average 35 year old on how they use it or some other service. And, like, the difference is unbelievable. It reminds me of, like, you know, when the smartphone came out and, like, every kid was able to use it super well. And older people just, like, took, like, three years to figure out how to do basic stuff. And then, of course, people integrate. But the sort of like generational divide on AI tools right now is crazy. And I think companies are just another symptom of that.
Alfred Lin: Anybody else have a question?
Audience Member: Just to follow up on that. What are the cool use cases that you’re seeing young people using with ChatGPT that might surprise us?
Sam Altman: They really do use it like an operating system. They have complex ways to set it up, to connect it to a bunch of files, and they have fairly complex prompts memorized in their head or in something where they paste in and out. And I mean, that stuff, I think, is all cool and impressive.
And there’s this other thing where, like, they don’t really make life decisions without asking, like, ChatGPT what they should do. And it has, like, the full context on every person in their life and what they’ve talked about. And, you know, like, the memory thing has been a real change there. But yeah, I think it’s a gross oversimplification, but, like, older people use ChatGPT as a Google replacement. Maybe people in their 20s and 30s use it as, like, a life advisor or something. And then, like, people in college use it as an operating system.
Alfred Lin: How do you use it inside of OpenAI?
Sam Altman: I mean, it writes a lot of our code.
Alfred Lin: How much?
Sam Altman: I don’t know the number. And also when people say the number, I think it’s always this very dumb thing, because, like, you can write …
Alfred Lin: Someone said Microsoft code is 20, 30 percent.
Sam Altman: Measuring by lines of code is just such an insane way to, like, I don’t know. Maybe the thing I could say is it’s writing meaningful code. Like, it’s writing—I don’t know how much, but it’s writing the parts that actually matter.
Alfred Lin: That’s interesting. Next question.
Audience Member: Hey Sam.
Alfred Lin: Is the mic going around?
Will the OpenAI API be around in 10 years?
Audience Member: Okay. Hey Sam. I thought it was interesting that the answer to Alfred’s question about where you guys want to go is focused mostly around consumer and being the core subscription, and also most of your revenue comes from consumer subscriptions. Why keep the API in 10 years?
Sam Altman: I really hope that all of this merges into one thing. Like, you should be able to sign in with OpenAI to other services. Other services should have an incredible SDK to take over the ChatGPT UI at some point. But to the degree that you are going to have a personalized AI that knows you, that has your information, that knows what you want to share later, and has all this context on you, you’ll want to be able to use that in a lot of places. Now I agree that the current version of the API is very far off that vision, but I think we can get there.
Audience Member: Yeah. Maybe I have a follow-up question to that one. You kind of took mine. But a lot of us who are building application-layer companies, we want to, like, use those building blocks, those different API components—maybe the Deep Research API, which is not a released thing, but could be—and build stuff with them. Is that going to be a priority, like, enabling that platform for us? How should we think about that?
Sam Altman: Yeah. I think, I hope something in between those that there is sort of like a new protocol on the level of HTTP for the future of the internet, where things get federated and broken down into much smaller components, and agents are, like, constantly exposing and using different tools and authentication, payment, data transfer. It’s all built in at this level that everybody trusts; everything can talk to everything. And I don’t quite think we know what that looks like, but it’s coming out of the fog, and as we get a better sense for that—again, it’ll probably take us, like, a few iterations toward that to get there, but that’s kind of where I would like to see things go.
Audience Member: Hey Sam, back here. My name is Roy. I’m curious. The AI would obviously do better with more input data. Is there any thought to feeding sensor data? And what type of sensor data, whether it’s temperature, you know, things in the physical world that you could feed in that it could better understand reality?
Sam Altman: People do that a lot. People put that into—people have whatever—they build things where they just put sensor data into an o3 API call or whatever. And for some use cases it does work super well. I’d say the latest models seem to do a good job with this, and they used to not, so we’ll probably bake it in more explicitly at some point, but there’s already a lot happening there.
Voice in ChatGPT
Audience Member: Hi Sam, I was really excited to play with the voice model in the playground. And so I have two questions. The first is: How important is voice to OpenAI in terms of stack ranking for infrastructure? And can you share a little bit about how you think it’ll show up in the product, in ChatGPT, the core thing?
Sam Altman: I think voice is extremely important. Honestly, we have not made a good enough voice product yet. That’s fine. Like, it took us a while to make a good enough text model, too. We will crack that code eventually, and when we do, I think a lot of people are going to want to use voice interaction a lot more.
When we first launched our current voice mode, the thing that was most interesting to me was it was a new stream on top of the touch interface. You could talk and be clicking around on your phone at the same time. And I continue to think there is something amazing to do about, like, voice plus GUI interaction that we have not cracked. But before that, we’ll just make voice really great. And when we do, I think there’s a whole—not only is it cool with existing devices, but I sort of think voice will enable a totally new class of devices if you can make it feel like truly human-level voice.
How central is coding?
Audience Member: Similar question about coding. I’m curious, is coding just another vertical application, or is it more central to the future of OpenAI?
Sam Altman: That one’s more central to the future of OpenAI. Coding, I think, will be how these models kind of—right now, if you ask ChatGPT something, you get text back, maybe you get an image. You would like to get a whole program back. You would like, you know, custom-rendered code for every response—or at least I would. You would like the ability for these models to go make things happen in the world. And writing code, I think, will be very central to how you, like, actuate the world and call a bunch of APIs or whatever. So I would say coding will be more in a central category. We’ll obviously expose it through our API and our platform as well, but ChatGPT should be excellent at writing code.
Alfred Lin: So we’re gonna move from the world of assistance to agents to basically applications all the way through?
Sam Altman: I think it’ll feel like very continuous, but yes.
Audience Member: So you have conviction in the roadmap about smarter models. Awesome. I have this mental model. There are some ingredients, like more data, bigger data centers, the transformer architecture, test-time compute. What’s, like, an underrated ingredient, or something that’s going to be part of that mix that maybe isn’t in the mental model of most of us?
Sam Altman: I mean, that’s kind of the—each of those things are really hard. And, you know, obviously, like, the highest leveraged thing is still big algorithmic breakthroughs. And I think there still probably are some 10Xs or 100Xs left. Not very many, but even one or two is a big deal. But yeah, it’s kind of like algorithms, data, compute, those are sort of the big ingredients.
How to run a great research lab
Audience Member: Hi. So my question is, you run one of the best ML teams in the world. How do you balance between letting smart people like Isa chase Deep Research or something else that seems exciting, versus going top down and being like, “We’re going to build this, we’re going to make it happen. We don’t know if it’ll work.”
Sam Altman: There are some projects that require so much coordination that there has to be a little bit of, like, top down quarterbacking. But I think most people try to do way too much of that. I mean, this is like—there’s probably other ways to run good AI research or good research labs in general, but when we started OpenAI, we spent a lot of time trying to understand what a well-run research lab looks like. And you had to go really far back in the past.
In fact, almost everyone that could help advise us on this was dead. It had been a long time since there had been good research labs. And people ask us a lot, like, why does OpenAI repeatedly innovate, and why do the other AI labs, like, sort of copy? Or why does Biolab X not do good work while Biolab Y does, or whatever.
And we sort of keep saying, “Here’s the principles we’ve observed. Here’s how we learned them, here’s what we looked at in the past.” And then everybody says, “Great, but I’m gonna go do the other thing.” That’s fine, you came to us for advice, you do what you want. But I find it remarkable how much these few principles that we’ve tried to run our research lab on—which we did not invent, we shamelessly copied from other good research labs in history—have worked for us. And then the people who had some smart reason for doing something else—it didn’t work.
Audience Member: So it seems to me that these large models, one of the really fascinating things as a lover of knowledge about them, is that they potentially embody and allow us to answer these amazing longstanding questions in the humanities about cyclical changes and artistic interesting things, or even like to what extent systematic prejudice and other sorts of things are really happening in society, and can we sort of detect these very subtle things which we could never really do more than hypothesize before. And I’m wondering whether OpenAI has a thought about, or even a roadmap for working with academic researchers, say, to help unlock some of these new things we could learn for the first time in the humanities and in the social sciences?
Sam Altman: We do, yeah. I mean, it’s amazing to see what people are doing there. We do have academic research programs where we partner and do some custom work, but mostly people just say, like, “I want access to the model or maybe I want access to the base model.” And I think we’re really good at that. One of the kind of cool things about what we do is so much of our incentive structure is pushed towards making the models as smart and cheap and widely accessible as possible, that that serves academics and really the whole world very well. So, you know, we do some custom partnerships, but we often find that what researchers or users really want is just for us to make the general model better across the board. And so we try to focus kind of 90 percent of our thrust vector on that.
Customization and the platonic ideal state
Audience Member: I’m curious how you’re thinking about customization. So you mentioned the federated sign in with OpenAI; bringing your memories, your context. I’m just curious if you think customization and these different post training on application specific things is a band aid, or is trying to make the core models better, and how you’re thinking about that.
Sam Altman: I mean, in some sense, I think the platonic ideal state is a very tiny reasoning model with a trillion tokens of context that you put your whole life into. The model never retrains, the weights never customize, but that thing can reason across your whole context and do it efficiently. And every conversation you’ve ever had in your life, every book you’ve ever read, every email you’ve ever read, everything you’ve ever looked at is in there, plus connected all your data from other sources. And, you know, your life just keeps appending to the context, and your company just does the same thing for all your company’s data. We can’t get there today, but I think of kind of like anything else as a compromise off that platonic ideal. And that is how, I hope, we eventually do customization.
Alfred Lin: One last question in the back.
Value creation in the coming years
Audience Member: Hi Sam, thanks for your time. Where do you think most of the value creation will come from in the next 12 months? Would it be maybe advanced memory capabilities, or maybe security or protocols that allow agents to do more stuff and interact with the real world?
Sam Altman: I mean, in some sense the value will continue to come from really three things, like building out more infrastructure, smarter models, and building the kind of scaffolding to integrate this stuff into society. And if you push on those, I think the rest will sort itself out.
At a higher level of detail, I kind of think 2025 will be a year of sort of agents doing work, coding in particular, I would expect to be a dominant category. I think there’ll be a few others, too. Next year is a year where I would expect more like sort of AIs discovering new stuff, and maybe we have AIs make some very large scientific discoveries or assist humans in doing that.
And I am kind of a believer that most of the sort of real sustainable economic growth in human history comes from once you’ve kind of spread out and colonized the Earth, most of it comes from just better scientific knowledge and then implementing that for the world. And then ‘27, I would guess, is the year where that all moves from the sort of intellectual realm to the physical world, and robots go from a curiosity to a serious economic creator of value. But that was like an off the top of my head kind of guess right now.
Alfred Lin: Can I close with a few quick questions?
Sam Altman: Great.
Alfred Lin: One of which is GPT-5. Is that going to be smarter than all of us here?
Sam Altman: I mean, if you think you’re, like, way smarter than o3, then maybe you have a little bit of a ways to go, but o3 is already pretty smart.
Leadership advice for founders
Alfred Lin: [laughs] Two personal questions. Last time you were here, you’d just come off a blip with OpenAI. Given some perspective now and distance, do you have any advice for founders here about resilience, endurance, strength?
Sam Altman: It gets easier over time, I think. Like, you will face a lot of adversity in your journey as a founder, and the kind of challenges get harder and higher stakes, but the emotional toll gets easier as you kind of go through more bad things. So, you know, in some sense yeah, even though abstractly the challenges get bigger and harder, your ability to deal with them, the sort of resilience you build up gets easier, like, with each one you kind of go through.
And then I think the hardest thing about the big challenges that come as a founder is not the moment when they happen. Like, a lot of things go wrong in the history of a company. In the acute thing, you can kind of like—you know, you get a lot of support, you can function off a lot of adrenaline. Like, even the really big stuff, like, your company runs out of money and fails, like, a lot of people will come and support you, and you kind of get through it and go on to the new thing.
The thing that I think is harder to sort of manage your own psychology through is the sort of, like, fallout after. And I think if there’s—you know, people focus a lot about how to work in that one moment during the crisis, and the really valuable thing to learn is how you, like, pick up the pieces. There’s much less talk about that. I think there’s—I’ve never actually found something good to point founders to, to go read about, you know, not how you deal with the real crisis on day zero or day one or day two, but on day 60 as you’re just trying to, like, rebuild after it. And that’s the area that I think you can practice and get better at.
Alfred Lin: Thank you, Sam. You’re officially still on paternity leave, I know. So thank you for coming in and speaking with us. Appreciate it.
Sam Altman: Thank you.
[applause]