Of the many fields where generative AI has been put to the test, the law is perhaps its most glaring point of failure. Tools like OpenAI's ChatGPT have gotten lawyers sanctioned and experts publicly embarrassed, producing briefs based on made-up cases and nonexistent research citations. So when my colleague Kylie Robison got access to ChatGPT's new "deep research" feature, my task was clear: make this supposedly superpowered tool write about a law that humans constantly get wrong.
Hands-on with ChatGPT's deep research system

Compile a list of federal court and Supreme Court rulings from the last five years related to Section 230 of the Communications Decency Act, I asked Kylie to enter. Summarize any significant developments in how judges have interpreted the law.
I was asking ChatGPT for a rundown of the state of what are commonly called the 26 words that created the internet, a constantly evolving topic I follow for The Verge. The good news: ChatGPT appropriately selected and accurately summarized a set of recent court decisions, all of which exist. The so-so news: it missed some broader points that a competent human expert might identify. The bad news: it ignored a full year's worth of legal decisions, which, unfortunately, happened to upend the state of the law.
Deep research is a new OpenAI feature meant to produce complex, sophisticated reports on specific topics; getting more than "limited" access requires ChatGPT's $200-per-month Pro tier. Unlike the simpler form of ChatGPT, which relies on training data with a cutoff date, this system searches the web for new information to complete its task. My request felt consistent in spirit with OpenAI's own example for the feature, which asked for a summary of retail trends over the past three years. And because I'm not a lawyer, I enlisted legal expert Eric Goldman, whose blog is one of the most reliable sources of Section 230 news, to review the results.
The deep research experience is similar to using the rest of ChatGPT. You enter a query, and ChatGPT asks follow-up questions to clarify: in my case, whether I wanted to focus on a specific area of Section 230 decisions (no) or include additional analysis of legislation (also no). I used the follow-up to tack on another request, asking it to note where different courts disagree about what the law means, something that could require the Supreme Court to step in. It's a legal wrinkle that's important but sometimes hard to stay on top of, the kind of thing I could imagine getting from an automated report.
Deep research is supposed to take between five and 30 minutes, and in my case it took around 10. (The report itself is here, so you can read the whole thing if you're so inclined.) The process delivers footnoted web links as well as a series of explanations offering more insight into how ChatGPT broke down the problem. The result was roughly 5,000 words of text that was dense but formatted with helpful headers, and fairly readable if you're used to legal analysis.
The first thing I did with my report, obviously, was check the name of every legal case. Several were already familiar, and I verified the rest outside of ChatGPT; they all appeared to be real. Then I passed it along to Goldman for his thoughts.
“I could quibble with some nuances throughout the piece, but overall the text appears to be largely accurate,” Goldman told me. He agreed there were no invented cases, and the ones ChatGPT selected were reasonable to include, though he disagreed with how important it indicated some of them were. “If I put together my top cases from that period, the list would look different, but that’s a matter of judgment and opinion.” The descriptions sometimes glossed over notable legal distinctions, but in ways that aren’t uncommon among humans.
Less positively, Goldman thought ChatGPT ignored context a human expert would find important. Law isn't made in a vacuum; it's decided by judges responding to larger trends and social forces, including shifting sympathies against tech companies and a conservative political barrage against Section 230. I didn't tell ChatGPT to discuss broader dynamics, but one goal of research is to identify the important questions that aren't being asked: an edge for human expertise, apparently, for now.
But the biggest problem was that ChatGPT didn't follow the clearest element of my request: tell me what happened in the last five years. The title of ChatGPT's report declares that it covers 2019 to 2024. Yet the last case it mentions was decided in 2023, after which it concludes that the law remains “a robust shield” whose limits are being “refine[d].” A layperson could easily think that means nothing happened last year. An informed reader would realize something was very wrong.
“2024 was a year of reckoning for Section 230,” Goldman notes. The period produced an out-of-the-blue Third Circuit decision against granting the law's protections to TikTok, plus several more that could drastically narrow how it applies. Goldman himself declared mid-year that Section 230 was “fading fast” amid the avalanche of cases and larger political attacks. In early 2025, he wrote that he would be “surprised if it survives to see 2026.” Not everyone is this pessimistic, but I've spoken with multiple legal experts over the past year who believe Section 230's shield is becoming less staunch. At the very least, opinions like the Third Circuit's TikTok case should “definitely” figure into “any proper accounting” of the law over the last five years, Goldman says.
The upshot is that ChatGPT's output felt a bit like a report on cell phone trends from 2002 to 2007 that ends with the rise of the BlackBerry: the facts aren't wrong, but the omissions certainly change the story they tell.
Platformer's Casey Newton notes that, like many AI tools, deep research works best if you're already familiar with a topic, partly because you can tell where it's screwing up. (Newton's report, in fact, made some errors he deemed “embarrassing.”) But where he found it a useful way to further explore a subject he already understood, I felt like it didn't understand what I asked for.
At least two of my Verge colleagues also got reports that omitted useful information from the past year, and they were able to fix it by asking ChatGPT to specifically rerun the reports with 2024 data. (I didn't do this, partly because I didn't notice the missing year right away and partly because even the Pro tier has a limited pool of 100 queries per month.) I would normally chalk the problem up to a training data cutoff, except that ChatGPT is clearly capable of accessing this information, and OpenAI's own example of deep research requests it.
Either way, this seems like a simpler issue to remedy than invented legal decisions. And the report is a fascinating, impressive technological achievement. Generative AI has gone from producing meandering dream logic to a convincing, if imperfect, legal summary that leaves some Ivy League-educated federal lawmakers in the dust. In a way, it feels petty to complain that I have to nag it into doing what I ask.
While plenty of people document Section 230 decisions, I could see a competent ChatGPT-based research tool being useful for obscure legal topics with less human coverage. That seems a long way off, though. My report leaned heavily on secondary analysis and reporting; ChatGPT isn't (as far as I know) connected to the kinds of specialized data sources that would facilitate original research, like digging through court filings. OpenAI acknowledges that hallucination problems persist, so you also have to check its work carefully.
I'm not sure how indicative my test is of deep research's overall usefulness. I made a more technical and less open-ended request than Newton, who asked how the social media fediverse could help publishers. Other users' requests may be more like his than mine. But ChatGPT arguably nailed the crunchy technical explanations; it failed to fill in the bigger picture.
For now, it's simply annoying if I have to keep a $200-per-month piece of business software on task like a distractible child. I'm impressed by deep research as a technology. But from my current, limited vantage point, it could still be a product for people who want to believe in it, not for those who just want it to work.
LIVE: Sam Altman on Building the ‘Core AI Subscription’

Pat Grady: Our next guest needs no introduction, so I’m not gonna bother introducing him—Sam Altman. I will just say Sam is now three for three in joining us to share his thoughts at the three AI Ascents that we’ve had, which we really appreciate. So I just want to say thank you for being here.
Sam Altman: This was our first office.
[applause]
Pat Grady: That’s right. Oh, that’s right. Say that again.
Sam Altman: Yeah, this was—this was our first office. So it’s nice to be back.
Alfred Lin: Let’s go back to the first office here. You started in 2016?
Sam Altman: Yeah.
Alfred Lin: 2016. We just had Jensen here, who said that he delivered the first DGX-1 system over here.
Sam Altman: He did, yeah. It’s amazing how small that thing looks now.
Alfred Lin: Oh, versus what?
Sam Altman: Well, the current boxes are still huge, but yeah, it was a fun throwback.
Alfred Lin: How heavy was it?
Sam Altman: That was still when you could kind of like lift one yourself. [laughs]
Alfred Lin: You said it was about 70 pounds.
Sam Altman: I mean, it was heavy, but you could carry it.
Alfred Lin: So did you imagine that you’d be here today in 2016?
Sam Altman: No. It was like we were sitting over there, and there were 14 of us or something.
Alfred Lin: And you were hacking on this new system?
How OpenAI got to ChatGPT
Sam Altman: I mean, even that was like a—we were sitting around looking at whiteboards, trying to talk about what we should do. This was a—it’s almost impossible to sort of overstate how much we were like a research lab with a very strong belief and direction and conviction, but no real kind of like action plan. I mean, not only was, like, the idea of a company or a product sort of unimaginable, the specific—like, LLMs as an idea were still very far off. We’re trying to play video games.
Alfred Lin: Trying to play video games. Are you still trying to play video games?
Sam Altman: No, we’re pretty good at that.
Alfred Lin: All right. So it took you another six years for the first consumer product to come out, which is ChatGPT. Along the way, how did you sort of think about milestones to get something to that level?
Sam Altman: It’s like an accident of history. The first consumer product was not ChatGPT.
Alfred Lin: That’s right.
Sam Altman: It was Dall-E. The first product was the API. So we had built—you know, we kind of went through a few different things. We were—a few directions that we really wanted to bet on. Eventually, as I mentioned, we said, “Well, we gotta build a system to see if it’s working, and we’re not just writing research papers. So we’re gonna see if we can, you know, play a video game. Well, we’re gonna see if we can do a robot hand. We’re gonna see if we can do a few other things.”
And at some point in there, one person, and then eventually a team, got excited about trying to do unsupervised learning and to build language models. And that led to GPT1, and then GPT2. And by the time of GPT3, we both thought we had something that was kind of cool, but we couldn’t figure out what to do with it. And also we realized we needed a lot more money to keep scaling. You know, we had done GPT3, we wanted to go to GPT4. We were heading into the world of billion-dollar models. It’s, like, hard to do those as a pure science experiment, unless you’re like a particle accelerator or something. Even then it’s hard.
So we started thinking, okay, we both need to figure out how this can become a business that can sustain the investment that it requires. And also we have a sense that this is heading towards something actually useful. And we had put GPT2 out as model weights, and not that much had happened.
One of the things that I had just observed about companies’ products in general is if you do an API, it usually works somehow on the upside. This is, like, true across many, many YC companies. And also that if you make something much easier to use, there’s usually a huge benefit to that. So we’re like, well, it’s kind of hard to run these models that are getting big. We’ll go write some software, do a really good job of running them, and also we’ll then, rather than build a product because we couldn’t figure out what to build, we will hope that somebody else finds something to build.
And so I forget exactly when, but maybe it was like June of 2020, we put out GPT3 in the API. And the world didn’t care, but sort of Silicon Valley did. They’re like, “Oh, this is kind of cool. This is pointing at something.” And there was this weird thing where, like, we got almost no attention from most of the world. And some startup founders were like, “Oh, this is really cool.” Or some of them are like, “This is AGI.”
The only people that built real businesses with the GPT3 API that I can remember were these company—a few companies that did, like, copywriting as a service. That was kind of the only thing GPT3 was over the economic threshold on. But one thing we did notice, which eventually led to ChatGPT, is even though people couldn’t build a lot of great businesses with the GPT3 API, people love to talk to it in the Playground.
And it was terrible at chat. We had not, at that point, figured out how to do RLHF to make it easy to chat with. But people loved to do it anyway. And in some sense, that was the kind of only killer use, other than copywriting, of the API product that led us to eventually build ChatGPT.
By the time ChatGPT 3.5 came out, there were maybe, like, eight categories instead of one category where you could build a business with the API. But our conviction that people just want to talk to the model had gotten really strong. So we had done Dall-E, and Dall-E was doing okay. But we knew we kind of wanted to build—especially along with the fine tuning we were able to do, we knew we wanted to build this model, this product that let you talk to the model.
Alfred Lin: And it launched in 2022.
Sam Altman: Yes.
Alfred Lin: Yeah, that’s six years from when the first …
Sam Altman: November 30, 2022. Yeah.
Alfred Lin: So there’s a lot of work leading up to that. And 2022, it launched. Today, it has over 500 million people who talk to it on a weekly basis.
Sam Altman: Yeah.
Alfred Lin: [laughs] All right. All right. So by the way, get ready for some audience questions, because that was Sam’s request. You’ve been here for every single one of the Ascents, as Pat mentioned, and there’s been some—lots of ups and downs, but seems like the last six months it’s just been shipping, shipping, shipping. Shipped a lot of stuff. And it’s amazing to see the product velocity, the shipping velocity continue to increase. So this is like multi, sort of, part question. How have you gotten a large company to, like, increase product velocity over time?
Sam Altman: I think a mistake that a lot of companies make is they get big and they don’t do more things. So they just, like, get bigger because you’re supposed to get bigger, and they still ship the same amount of product. And that’s when, like, the molasses really takes hold. Like, I am a big believer that you want everyone to be busy. You want teams to be small, you want to do a lot of things relative to the number of people you have. Otherwise, you just have, like, 40 people in every meeting and huge fights over who gets what tiny part of the product.
There was this old observation of business that a good executive is a busy executive because you don’t want people, like, muddling around. But I think it’s like a good—you know, at our company and many other companies, like, researchers, engineers, product people, they drive almost all the value. And you want those people to be busy and high impact. So if you’re going to grow, you better do a lot more things, otherwise you kind of just have a lot of people sitting in a room fighting or meeting or talking about whatever. So we try to have, you know, relatively small numbers of people with huge amounts of responsibility. And the way to make that work is to do a lot of things.
And also, like, we have to do a lot of things. I think we really do now have an opportunity to go build one of these important internet platforms. But to do that, like, if we really are going to be people’s personalized AI that they use across many different services and over their life and across all of these different kind of main categories and all the smaller ones that we need to figure out how to enable, then that’s just a lot of stuff to go build.
Building the core AI subscription
Alfred Lin: Anything you’re particularly proud of that you’ve launched in the last six months?
Sam Altman: I mean, the models are so good now. Like, they still have areas to get better, of course, and we’re working on that fast. But, like, I think at this point, ChatGPT is a very good product because the model is very good. I mean, there’s other stuff that matters, too, but I’m amazed that one model can do so many things so well.
Alfred Lin: You’re building small models and large models. You’re doing a lot of things, as you said. So how does this audience stay out of your way and not be roadkill?
[laughter]
Sam Altman: I mean, like, I think the way to model us is we want to build—we want to be people’s, like, core AI subscription and way to use that thing. Some of that will be like what you do inside of ChatGPT. We’ll have a couple of other kind of like really key parts of that subscription, but mostly we will hopefully build this smarter and smarter model. We’ll have these surfaces, like future devices, future things that are sort of similar to operating systems, whatever.
And then we have not yet figured out exactly, I think, what the sort of API or SDK or whatever you want to call it is to really be our platform. But we will. It may take us a few tries, but we will. And I hope that that enables, like, just an unbelievable amount of wealth creation in the world, and other people to build onto that. But yeah, we’re going to go for, like, the core AI subscription and the model, and then the kind of core surfaces, and there will be a ton of other stuff to build.
Alfred Lin: So don’t be the core AI subscription. But you can do everything else.
Sam Altman: We’re gonna try. I mean, if you can make a better core AI subscription offering than us, go ahead. That’d be great. Okay.
Alfred Lin: It’s rumored that you’re raising $40 billion or something like that at $340 billion valuation. It’s rumors. I don’t know if this …
Sam Altman: I think we announced that we’re raise …
Alfred Lin: Okay. Well, I just want to make sure that you announced it. What’s your scale of ambition from there, from here?
Sam Altman: We’re going to try to make great models and ship good products, and there’s no master plan beyond that. Like, we’re gonna—I think, like …
Alfred Lin: Sure.
[laughter]
Sam Altman: No, I mean, I see plenty of OpenAI people in the audience. They can vouch for this. Like, we don’t—we don’t sit there and have—like, I am a big believer that you can kind of, like, do the things in front of you, but if you try to work backwards from, like, kind of we have this crazy complex thing, that doesn’t usually work as well. We know that we need tons of AI infrastructure.
Like, we know we need to go build out massive amounts of, like, AI factory volume. We know that we need to keep making models better. We know that we need to, like, build a great top of the stack, like, kind of consumer product and all the pieces that go into that. But we pride ourselves on being, like, nimble and adjusting tactics as the world adjusts.
And so the products, you know, the products that we’re going to build next year, we’re probably not even thinking about right now. And we believe we can build a set of products that people really, really love, and we have, like, unwavering confidence in that, and we believe we can build great models. I’ve actually never felt more optimistic about our research roadmap than I do right now.
Alfred Lin: What’s on the research roadmap?
Sam Altman: Really smart models.
[laughter]
Sam Altman: But in terms of the steps in front of us, we kind of take those one or two at a time.
Alfred Lin: So you believe in working forwards, not necessarily working backwards.
Sam Altman: I have heard some people talk about these brilliant strategies of how this is where they’re going to go and they’re going to work backwards. And this is take over the world. And this is the thing before that, and this is that, and this is that, and this is that, and this is that, and here’s where we are today. I have never seen those people, like, really massively succeed.
Alfred Lin: Got it. Who has a question? There’s a mic coming your way being thrown.
The generational divide in AI
Audience Member: What do you think the larger companies are getting wrong about transforming their organizations to be more AI native in terms of both using the tooling as well as producing products? Smaller companies are clearly just beating the crap out of larger ones when it comes to innovation here.
Sam Altman: I think this basically happens every major tech revolution. There’s nothing, to me, surprising about it. The thing that they’re getting wrong is the same thing they always get wrong, which is like people get incredibly stuck in their ways, organizations get incredibly stuck in their ways. If things are changing a lot every quarter or two, and you have, like, an information security council that meets once a year to decide what applications are going to allow and what it means to, like, put data into a system, like, it’s so painful to watch what happens here.
But, like, you know, this is creative destruction. This is why startups win. This is like how the industry moves forward. I’d say, I feel, like, disappointed but not surprised at the rate that big companies are willing to do this. My kind of prediction would be that there’s another, like, couple of years of fighting, pretending like this isn’t going to reshape everything, and then there’s like a capitulation and a last-minute scramble and it’s sort of too late. And in general, startups just sort of like blow past people doing it the old way.
I mean, this happens to people, too. Like watching, like, a, you know, someone who started—maybe you, like, talk to an average 20 year old and watch how they use ChatGPT, and then you go talk to, like, an average 35 year old on how they use it or some other service. And, like, the difference is unbelievable. It reminds me of, like, you know, when the smartphone came out and, like, every kid was able to use it super well. And older people just, like, took, like, three years to figure out how to do basic stuff. And then, of course, people integrate. But the sort of like generational divide on AI tools right now is crazy. And I think companies are just another symptom of that.
Alfred Lin: Anybody else have a question?
Audience Member: Just to follow up on that. What are the cool use cases that you’re seeing young people using with ChatGPT that might surprise us?
Sam Altman: They really do use it like an operating system. They have complex ways to set it up, to connect it to a bunch of files, and they have fairly complex prompts memorized in their head or in something where they paste in and out. And I mean, that stuff, I think, is all cool and impressive.
And there’s this other thing where, like, they don’t really make life decisions without asking, like, ChatGPT what they should do. And it has, like, the full context on every person in their life and what they’ve talked about. And, you know, like, the memory thing has been a real change there. But yeah, I think it's a gross oversimplification, but, like, older people use ChatGPT as a Google replacement. Maybe people in their 20s and 30s use it as, like, a life advisor or something. And then, like, people in college use it as an operating system.
Alfred Lin: How do you use it inside of OpenAI?
Sam Altman: I mean, it writes a lot of our code.
Alfred Lin: How much?
Sam Altman: I don’t know the number. And also when people say the number, I think is always this very dumb thing because like you can write …
Alfred Lin: Someone said Microsoft code is 20, 30 percent.
Sam Altman: Measuring by lines of code is just such an insane way to, like, I don’t know. Maybe the thing I could say is it’s writing meaningful code. Like, it’s writing—I don’t know how much, but it’s writing the parts that actually matter.
Alfred Lin: That’s interesting. Next question.
Audience Member: Hey Sam.
Alfred Lin: Is the mic going around?
Will the OpenAI API be around in 10 years?
Audience Member: Okay. Hey Sam. I thought it was interesting that the answer to Alfred’s question about where you guys want to go is focused mostly around consumer and being the core subscription, and also most of your revenue comes from consumer subscriptions. Why keep the API in 10 years?
Sam Altman: I really hope that all of this merges into one thing. Like, you should be able to sign in with OpenAI to other services. Other services should have an incredible SDK to take over the ChatGPT UI at some point. But to the degree that you are going to have a personalized AI that knows you, that has your information, that knows what you want to share later, and has all this context on you, you’ll want to be able to use that in a lot of places. Now I agree that the current version of the API is very far off that vision, but I think we can get there.
Audience Member: Yeah. Maybe I have a follow up question to that one. You kind of took mine. But a lot of us who are building application layer companies, we want to, like, use those building blocks, those different API components—maybe the Deep Research API, which is not a release thing, but could be—and build stuff with them. Is that going to be a priority, like, enabling that platform for us? How should we think about that?
Sam Altman: Yeah. I think, I hope something in between those that there is sort of like a new protocol on the level of HTTP for the future of the internet, where things get federated and broken down into much smaller components, and agents are, like, constantly exposing and using different tools and authentication, payment, data transfer. It’s all built in at this level that everybody trusts; everything can talk to everything. And I don’t quite think we know what that looks like, but it’s coming out of the fog, and as we get a better sense for that—again, it’ll probably take us, like, a few iterations toward that to get there, but that’s kind of where I would like to see things go.
Audience Member: Hey Sam, back here. My name is Roy. I’m curious. The AI would obviously do better with more input data. Is there any thought to feeding sensor data? And what type of sensor data, whether it’s temperature, you know, things in the physical world that you could feed in that it could better understand reality?
Sam Altman: People do that a lot. People put that into—people have whatever—they build things where they just put sensor data into an o3 API call or whatever. And for some use cases it does work super well. I’d say the latest models seem to do a good job with this, and they used to not, so we’ll probably bake it in more explicitly at some point, but there’s already a lot happening there.
Voice in ChatGPT
Audience Member: Hi Sam, I was really excited to play with the voice model in the playground. And so I have two questions. The first is: How important is voice to OpenAI in terms of stack ranking for infrastructure? And can you share a little bit about how you think it’ll show up in the product, in ChatGPT, the core thing?
Sam Altman: I think voice is extremely important. Honestly, we have not made a good enough voice product yet. That’s fine. Like, it took us a while to make a good enough text model, too. We will crack that code eventually, and when we do, I think a lot of people are going to want to use voice interaction a lot more.
When we first launched our current voice mode, the thing that was most interesting to me was it was a new stream on top of the touch interface. You could talk and be clicking around on your phone at the same time. And I continue to think there is something amazing to do about, like, voice plus GUI interaction that we have not cracked. But before that, we’ll just make voice really great. And when we do, I think there’s a whole—not only is it cool with existing devices, but I sort of think voice will enable a totally new class of devices if you can make it feel like truly human-level voice.
How central is coding?
Audience Member: Similar question about coding. I’m curious, is coding just another vertical application, or is it more central to the future of OpenAI?
Sam Altman: That one’s more central to the future of OpenAI. Coding, I think, will be how these models kind of—right now, if you ask ChatGPT a response, you get text back, maybe you get an image. You would like to get a whole program back. You would like, you know, custom-rendered code for every response—or at least I would. You would like the ability for these models to go make things happen in the world. And writing code, I think, will be very central to how you, like, actuate the world and call a bunch of APIs or whatever. So I would say coding will be more in a central category. We’ll obviously expose it through our API and our platform as well, but ChatGPT should be excellent at writing code.
Alfred Lin: So we’re gonna move from the world of assistance to agents to basically applications all the way through?
Sam Altman: I think it’ll feel like very continuous, but yes.
Audience Member: So you have conviction in the roadmap about smarter models. Awesome. I have this mental model. There’s some ingredients, like more data, bigger data centers, a transformer as architecture, test time compute. What’s like an underrated ingredient, or something that’s going to be part of that mix that maybe isn’t in the mental model of most of us?
Sam Altman: I mean, that’s kind of the—each of those things are really hard. And, you know, obviously, like, the highest leveraged thing is still big algorithmic breakthroughs. And I think there still probably are some 10Xs or 100Xs left. Not very many, but even one or two is a big deal. But yeah, it’s kind of like algorithms, data, compute, those are sort of the big ingredients.
How to run a great research lab
Audience Member: Hi. So my question is, you run one of the best ML teams in the world. How do you balance between letting smart people like Isa chase Deep Research or something else that seems exciting, versus going top down and being like, “We’re going to build this, we’re going to make it happen. We don’t know if it’ll work.”
Sam Altman: There are some projects that require so much coordination that there has to be a little bit of, like, top down quarterbacking. But I think most people try to do way too much of that. I mean, this is like—there’s probably other ways to run good AI research or good research labs in general, but when we started OpenAI, we spent a lot of time trying to understand what a well-run research lab looks like. And you had to go really far back in the past.
In fact, almost everyone that could help advise us on this was dead. It had been a long time since there had been good research labs. And people ask us a lot, like, why does OpenAI repeatedly innovate, and why do the other AI labs, like, sort of copy? Or why does Biolab X not do good work while Biolab Y does good work, or whatever.
And we sort of keep saying, “Here’s the principles we’ve observed. Here’s how we learned them, here’s what we looked at in the past.” And then everybody says, “Great, but I’m gonna go do the other thing.” That’s fine, you came to us for advice, you do what you want. But I find it remarkable how much these few principles that we’ve tried to run our research lab on—which we did not invent, we shamelessly copied from other good research labs in history—have worked for us. And then people who have had some smart reason about why they were going to do something else that didn’t work.
Audience Member: So it seems to me that these large models, one of the really fascinating things as a lover of knowledge about them, is that they potentially embody and allow us to answer these amazing longstanding questions in the humanities about cyclical changes and artistic interesting things, or even like to what extent systematic prejudice and other sorts of things are really happening in society, and can we sort of detect these very subtle things which we could never really do more than hypothesize before. And I’m wondering whether OpenAI has a thought about, or even a roadmap for working with academic researchers, say, to help unlock some of these new things we could learn for the first time in the humanities and in the social sciences?
Sam Altman: We do, yeah. I mean, it’s amazing to see what people are doing there. We do have academic research programs where we partner and do some custom work, but mostly people just say, like, “I want access to the model or maybe I want access to the base model.” And I think we’re really good at that. One of the kind of cool things about what we do is so much of our incentive structure is pushed towards making the models as smart and cheap and widely accessible as possible, that that serves academics and really the whole world very well. So, you know, we do some custom partnerships, but we often find that what researchers or users really want is just for us to make the general model better across the board. And so we try to focus kind of 90 percent of our thrust vector on that.
Customization and the platonic ideal state
Audience Member: I’m curious how you’re thinking about customization. So you mentioned the federated sign in with OpenAI; bringing your memories, your context. I’m just curious if you think customization and these different post training on application specific things is a band aid, or is trying to make the core models better, and how you’re thinking about that.
Sam Altman: I mean, in some sense, I think the platonic ideal state is a very tiny reasoning model with a trillion tokens of context that you put your whole life into. The model never retrains, the weights never customize, but that thing can reason across your whole context and do it efficiently. And every conversation you’ve ever had in your life, every book you’ve ever read, every email you’ve ever read, everything you’ve ever looked at is in there, plus all your data from other sources connected. And, you know, your life just keeps appending to the context, and your company just does the same thing for all your company’s data. We can’t get there today, but I think of anything else as a compromise off that platonic ideal. And that is how, I hope, we eventually do customization.
Alfred Lin: One last question in the back.
Value creation in the coming years
Audience Member: Hi Sam, thanks for your time. Where do you think most of the value creation will come from in the next 12 months? Would it be maybe advanced memory capabilities, or maybe security or protocols that allow agents to do more stuff and interact with the real world?
Sam Altman: I mean, in some sense the value will continue to come from really three things, like building out more infrastructure, smarter models, and building the kind of scaffolding to integrate this stuff into society. And if you push on those, I think the rest will sort itself out.
At a higher level of detail, I kind of think 2025 will be a year of sort of agents doing work, coding in particular, I would expect to be a dominant category. I think there’ll be a few others, too. Next year is a year where I would expect more like sort of AIs discovering new stuff, and maybe we have AIs make some very large scientific discoveries or assist humans in doing that.
And I am kind of a believer that most of the sort of real sustainable economic growth in human history comes from once you’ve kind of spread out and colonized the Earth, most of it comes from just better scientific knowledge and then implementing that for the world. And then ‘27, I would guess, is the year where that all moves from the sort of intellectual realm to the physical world, and robots go from a curiosity to a serious economic creator of value. But that was like an off the top of my head kind of guess right now.
Alfred Lin: Can I close with a few quick questions?
Sam Altman: Great.
Alfred Lin: One of which is GPT-5. Is that going to be just smarter than all of us here?
Sam Altman: I mean, if you think you’re, like, way smarter than o3, then maybe you have a little bit of a ways to go, but o3 is already pretty smart.
Leadership advice for founders
Alfred Lin: [laughs] Two personal questions. Last time you were here, you’d just come off a blip with OpenAI. Given some perspective now and distance, do you have any advice for founders here about resilience, endurance, strength?
Sam Altman: It gets easier over time, I think. Like, you will face a lot of adversity in your journey as a founder, and the kind of challenges get harder and higher stakes, but the emotional toll gets easier as you kind of go through more bad things. So, you know, in some sense yeah, even though abstractly the challenges get bigger and harder, your ability to deal with them, the sort of resilience you build up gets easier, like, with each one you kind of go through.
And then I think the hardest thing about the big challenges that come as a founder is not the moment when they happen. Like, a lot of things go wrong in the history of a company. In the acute thing, you can kind of like—you know, you get a lot of support, you can function off a lot of adrenaline. Like, even the really big stuff, like, your company runs out of money and fails, like, a lot of people will come and support you, and you kind of get through it and go on to the new thing.
The thing that I think is harder to sort of manage your own psychology through is the sort of, like, fallout after. And I think if there’s—you know, people focus a lot about how to work in that one moment during the crisis, and the really valuable thing to learn is how you, like, pick up the pieces. There’s much less talk about that. I think there’s—I’ve never actually found something good to point founders to to go read about, you know, not how you deal with the real crisis on day zero or day one or day two, but on day 60 as you’re just trying to, like, rebuild after it. And that’s the area that I think you can practice and get better at.
Alfred Lin: Thank you, Sam. You’re officially still on paternity leave, I know. So thank you for coming in and speaking with us. Appreciate it.
Sam Altman: Thank you.
[applause]
News
People share the ‘totally unhinged’ things they’ve used ChatGPT for

Anxiety disorders affect nearly a fifth of the population in the United States alone. NAMI.org reports that more than 19 percent of Americans suffer from an anxiety disorder, which should be distinguished from the ordinary “adrenaline” nerves someone might get from public speaking or being stuck in traffic.
For those who know it, it can sometimes feel debilitating. As with many mental health diagnoses, there is a range of severity and causes. We are either “born with it” genetically, or a traumatic event may have occurred that triggers it. No matter why or “how bad” it gets, it can feel especially isolating—for those who endure it, and for those who want to help but don’t know what to say or do. Therapy can help, and, when necessary, medication. But understanding it, for everyone involved, can be complicated.
[YouTube clip on anxiety: https://www.youtube.com/watch?v=bvjkf8iurje — Psych Hub]
Anxiety isn’t like a cold you can catch and treat with an antibiotic. It’s hard to explain exactly how it feels to someone who doesn’t experience it. The best way I can describe it is that you’re always sitting in the uncomfortable pit of anticipation.
I don’t just mean existential angst like “Is there an afterlife?” or “Will I die alone?” I mean things like: “Will my car stall at a busy intersection? What if I need a root canal again someday? (I will.) Will they call? What if my dog walker forgets to come while I’m away camping? What if someone runs a red light? Did I say the right thing at the party? What’s my blood pressure?” Exhausted yet? Imagine questions big and small like these running continuously on a loop through a brain’s gray matter, dipping in and out of the logic in the frontal lobe and then click, click, click as it catches on a jagged edge and repeats… over and over and over again.
Though well-intentioned, there are solutions people often offer that, at least for me, make the tension worse. Many mental health therapists have weighed in on the phrases best avoided and offered more helpful alternatives.
1) At laureltherapy.net, they start with the old chestnut: “Just relax.”
When every synapse in your brain is on high alert, someone telling you to “just dial it down” only makes it worse. It is literally the opposite of what your brain chemistry is doing (and not by choice). It’s similar to “just calm down,” which for the same reason can feel dismissive and unhelpful.
They offer instead: “I’m here for you.” It acknowledges your discomfort and gives you a soft place to land.
2) Another sentence to avoid: “You’re too sensitive.”
That would be like telling someone with a physical disability that it’s their fault. Instead, they suggest: “Your feelings make sense.”
Sometimes you just want to feel seen and heard, especially by those closest to you. The last thing anyone needs is to feel bad about feeling bad.
3) At EverydayHealth.com, Michelle Pugle (in a piece reviewed by Seth Gillihan, PhD) quotes Helen Egger, MD, and gives this advice:
Don’t say “You’re overthinking it.”
She gives a few options to try instead, but my favorite is: “You’re safe.”
It may sound corny, but when I’m really spiraling, it’s good to know someone is by my side and doesn’t judge my mind for thinking differently than theirs.
4) Pugle also advises against saying “Worrying won’t change anything.”
I can’t tell you how often I’m told this, and while it may be true, it again implies there is nothing one can do in a moment of panic. She writes:
“Trying to calm someone’s anxiety by telling them their thoughts are not productive, worthwhile, or are a waste of time also invalidates their feelings and can even leave them feeling more distressed than before,” Egger explains.
Instead, try: “Do you want to do something to take your mind off things?”
This shows that someone is genuinely willing to help and engage, not just criticize.
5) “It’s all in your head.”
The late Carrie Fisher once wrote about how much she hated when people told her that, as if it were somehow comforting. Paraphrasing, her response was essentially: “I know it’s my head. Try getting it out of there!”
[YouTube clip: https://www.youtube.com/watch?v=A6YOGZ8PCE]
Laurel Therapy suggests trying: “Anxiety can be really tough.” Personally, I would prefer: “How can I help?”
While it can feel frustrating at times, the key when dealing with anxiety is being mindful that you aren’t shaming or condescending.
Here are a few more concepts that help me:
GRATITUDE
I saw a movie called About Time a few years ago, written by Richard Curtis, who has a propensity for corniness. But this quote is lovely: “I just try to live every day as if I’ve deliberately come back to this one day, to enjoy it, as if it was the full final day of my extraordinary, ordinary life.” I just love the idea of pretending we have time-traveled to each moment of our lives on purpose. And it helps the anxious especially, because if it’s true that we are always hurtling toward an unpredictable future instead of sitting where time wants us to be, it makes sense to act as if we had come back to a moment to show it respect. Seeing every day and every thought as a gift instead of a fear. Now that’s something.
BREATHE
I’m sure you’ve heard about the benefits of meditation. They’re real. I’ve seen the practice of minding your breath and sitting still make big differences in people close to me. I haven’t managed to make meditation part of my daily routine, but that doesn’t mean I can’t keep at it. (Try, try again.) I take part in yoga and find it slows my mind considerably.
KNOWING THAT YOU ARE NOT YOUR THOUGHTS
Our amygdalae (the part of the brain that, among other roles, triggers our response to threats, real or perceived) can play nasty tricks on us. We are not the sum total of every thought we have ever had. Rather, I believe we are what we do, not what we think. Our anxiety (or depression) doesn’t have to define us, especially when we know we are responding to many threats that don’t even exist. We can be of service to others. Volunteer when possible, or simply be kind to those around you every day. That is what makes us who we are. Personally, that idea calms me.
News
‘Empire of AI’ author on OpenAI’s cult of AGI and why Sam Altman tried to discredit her book

When OpenAI unleashed ChatGPT on the world in November 2022, it lit the fuse that ignited the generative AI era.
But Karen Hao, author of the new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, had already been covering OpenAI for years. The book comes out on May 20, and it reveals surprising new details about the company’s culture of secrecy and religious devotion to the promise of AGI, or artificial general intelligence.
Hao profiled the company for MIT Technology Review two years before ChatGPT launched, putting it on the map as a world-changing company. Now, she’s giving readers an inside look at pivotal moments in the history of artificial intelligence, including the moment when OpenAI’s board forced out CEO and cofounder Sam Altman. (He was later reinstated because of employee backlash.)
Empire of AI dispels any doubt that OpenAI’s belief in ushering in AGI to benefit all of humanity had messianic undertones. One of the many stories from Hao’s book involves Ilya Sutskever, cofounder and former chief scientist, burning an effigy on a team retreat. The wooden effigy “represented a good, aligned AGI that OpenAI had built, only to discover it was actually lying and deceitful. OpenAI’s duty, he said, was to destroy it.” Sutskever would later do this again at another company retreat, Hao wrote.
And in interviews with OpenAI employees about the potential of AGI, Hao details their “wide-eyed wonder” when “talking about how it would bring utopia. Someone said, ‘We’re going to reach AGI and then, game over, like, the world will be perfect.’ And then speaking to other people, when they were telling me that AGI could destroy humanity, their voices were quivering with that fear.”
Hao’s seven years of covering AI have culminated in Empire of AI, which details OpenAI’s rise to dominance, casting it as a modern-day empire. That Hao’s book reminded me of The Anarchy, the account of the OG corporate empire, The East India Company, is no coincidence. Hao reread William Dalrymple’s book while writing her own “to remind [herself] of the parallels of a company taking over the world.”
This is likely not a characterization that OpenAI wants. In fact, Altman went out of his way to discredit Hao’s book on X. “There are some books coming out about OpenAI and me. We only participated in two… No book will get everything right, especially when some people are so intent on twisting things, but these two authors are trying to.”
The two authors Altman named are Keach Hagey and Ashlee Vance, and they also have forthcoming books. The unnamed author was Hao, of course. She said OpenAI promised to cooperate with her for months, but never did.
We get into that drama in the interview below, plus OpenAI’s religious fervor for AGI, the harms AI has already inflicted on the Global South, and what else Hao would have included if she’d kept writing the book.
Mashable: I was particularly fascinated by this religious belief or faith that AGI could be achieved, but also without being able to define it. You wrote about Ilya [Sutskever] being seen as a kind of prophet and burning an effigy. Twice. I’d love to hear more of your thoughts on that.
Karen Hao: I’m really glad that you used religious belief to describe that, because I don’t remember if I explicitly used that word, but I was really trying to convey it through the description. This was a thing that honestly was most surprising to me while reporting the book. There is so much religious rhetoric around AGI, you know, ‘AI will kill us’ versus ‘AI will bring us to utopia.’ I thought it was just rhetoric.
When I first started reporting the book, the general narrative among more skeptical people is, ‘Oh, of course they’re going to say that AI can kill people, or AI will bring utopia, because it creates this image of AI being incredibly powerful, and that’s going to help them sell more products.’
What I was surprised by was, no, it’s not just that. Maybe there are some people who do just say this as rhetoric, but there are also people who genuinely believe these things.
I spoke to people with wide-eyed wonder when they were talking about how it would bring utopia. Someone said, ‘We’re going to reach AGI and then, game over, like, the world will be perfect.’ And then speaking to other people, when they were telling me that AGI could destroy humanity, their voices were quivering with that fear.

Ilya Sutskever (pictured here at a 2023 event in Tel Aviv with Sam Altman) burned a wooden effigy at a company retreat that represented AGI gone rogue.
Credit: Photo by Jack Guez / AFP / Getty Images
I was really shocked by that level of all-consuming belief that a lot of people within this space start to have, and I think part of it is because they’re doing something that is kind of historically unprecedented. The amount of power to influence the world is so profound that I think they start to need religion; some kind of belief system or value system to hold on to. Because you feel so inadequate otherwise, having all that responsibility.
Also, the community is so insular. Because I talked with some people over several years, I noticed that the language they use and how they think about what they’re doing fundamentally evolves. As you get more and more sucked into this world. You start using more and more religious language, and more and more of this perspective really gets to you.
It’s like Dune, where [Lady Jessica] tells a myth that she builds around Paul Atreides that she purposely kind of constructs to make it such that he becomes powerful, and they have this idea that this is the way to control people. To create a religion, you create a mythology around it. Not only do the people who hear it for the first time genuinely believe this because they don’t realize that it was a construct, but also Paul Atreides himself starts to believe it more and more, and it becomes a self-fulfilling prophecy. Honestly, when I was talking with people for the book, I was like, this is Dune.
Something I’ve been wondering lately is, what am I not seeing? What are they seeing that is making them believe this so fervently?
I think what’s happening here is twofold. First, we need to remember that when designing these systems, AI companies prioritize their own problems. They do this both implicitly—in the way that Silicon Valley has always done, creating apps for first-world problems like laundry and food delivery, because that’s what they know—and explicitly.
My book talks about how Altman has long pushed OpenAI to focus on AI models that can excel at code generation because he thinks they will ultimately help the company entrench its competitive advantage. As a result, these models are designed to best serve the people who develop them. And the farther away your life is from theirs in Silicon Valley, the more this technology begins to break down for you.
The second thing that’s happening is more meta. Code generation has become the main use case in which AI models most consistently deliver workers productivity gains, both for the reasons mentioned above and because code is particularly well suited to the strengths of AI models. Code is computable.
To people who don’t code or don’t exist in the Silicon Valley worldview, we view the leaps in code-generation capabilities as leaps in just one use case. But in the AI world, there is a deeply entrenched worldview that everything about the world is ultimately, with enough data, computable. So, to people who exist in that mind frame, the leaps in code generation represent something far more than just code generation. It’s emblematic of AI one day being able to master everything.
How did your decision to frame OpenAI as a modern-day empire come to fruition?
I originally did not plan to focus the book that much on OpenAI. I actually wanted to focus the book on this idea that the AI industry has become a modern-day empire. And this was based on work that I did at MIT Technology Review in 2020 and 2021 about AI colonialism.
It was exploring this idea that was starting to crop up a lot in academia and among research circles that there are lots of different patterns that we are starting to see where this pursuit of extremely resource-intensive AI technologies is leading to a consolidation of resources, wealth, power, and knowledge. And in a way, it’s no longer sufficient to kind of call them companies anymore.
To really understand the vastness and the scale of what’s happening, you really have to start thinking about it more as an empire-like phenomenon. At the time, I did a series of stories that was looking at communities around the world, especially in the Global South, that are experiencing this kind of AI revolution, but as vulnerable populations that were not in any way seeing the benefits of the technology, but were being exploited by either the creation of the technology or the deployment of it.
And that’s when ChatGPT came out… and all of a sudden we were recycling old narratives of ‘AI is going to transform everything, and it’s amazing for everyone.’ So I thought, now is the time to reintroduce everything but in this new context.
Then I realized that OpenAI was actually the vehicle to tell this story, because they were the company that completely accelerated the absolute colossal amount of resources that is going into this technology and the empire-esque nature of it all.

Sam Altman, under President Donald Trump’s administration, announced OpenAI’s $500 billion Stargate Project to build AI infrastructure in the U.S.
Credit: Jim Watson / AFP / Getty Images
Your decision to weave the stories of content moderators and the environmental impact of data centers from the perspective of the Global South was so compelling. What was behind your decision to include that?
As I started covering AI more and more, I developed this really strong feeling that the story of AI and society cannot be understood exclusively from its centers of power. Yes, we need reporting to understand Silicon Valley and its worldview. But also, if we only ever stay within that worldview, you won’t be able to fully understand the sheer extent of how AI then affects real people in the real world.
The world is not represented by Silicon Valley, and the global majority or the Global South are the true test cases for whether or not a technology is actually benefiting humanity, because the technology is usually not built with them in mind.
All technology revolutions leave some people behind. But the problem is that the people who are left behind are always the same, and the people who gain are always the same. So are we really getting progress from technology if we’re just exacerbating inequality more and more, globally?
That’s why I wanted to write the stories that were in places far and away from Silicon Valley. Most of the world lives that way without access to basic resources, without a guarantee of being able to put healthy food on the table for their kids or where the next paycheck is going to come from. And so unless we explore how AI actually affects these people, we’re never really going to understand what it’s going to mean ultimately for all of us.
Another really interesting part of your book was the closing off of the research community [as AI labs stopped openly sharing details about their models] and how that’s something that we totally take for granted now. Why was that so important to include in the book?
I was really lucky in that I started covering AI before all the companies started closing themselves off and obfuscating technical details. And so for me, it was an incredibly dramatic shift to see companies being incredibly open with publishing their data, publishing their model weights, publishing the analyses of how their models are performing, independent auditors getting access to models, things like that, and now this state where all we get is just PR. So that was part of it, just saying, it wasn’t actually like this before.
And it is yet another example of why empires are the way to think about this, because empires control knowledge production. How they perpetuate their existence is by continuously massaging the facts and massaging science to allow them to continue to persist.
But also, if it wasn’t like this before, I hope that it’ll give people a greater sense of hope themselves, that this can change. This is not some inevitable state of affairs. And we really need more transparency in how these technologies are developed.
They’re the most consequential technologies being developed today, and we literally can’t say basic things about them. We can’t say how much energy they use, how much carbon they produce, we can’t even say where the data centers are that are being built half the time. We can’t say how much discrimination is in these tools, and we’re giving them to children in classrooms and to doctors’ offices to start supporting medical decisions.
The levels of opacity are so glaring, and it’s shocking that we’ve kind of been lulled into this sense of normalcy. I hope that it’s a bit of a wake-up call that we shouldn’t accept this.
When you posted about the book, I knew that it was going to be a big thing. Then Sam Altman posted about the book. Have you seen a rise in interest, and does Sam Altman know about the Streisand Effect?

Sam Altman (pictured at a recent Senate hearing) alluded to ‘Empire of AI’ in an X post as a book OpenAI declined to participate in. Hao says she tried for six months to get their cooperation.
Credit: Nathan Howard / Bloomberg / Getty Images
Obviously, he’s a very strategic and tactical person and generally very aware of how things that he does will land with people, especially with the media. So, honestly, my first reaction was just… why? Is there some kind of 4D chess game? I just don’t get it. But, yeah, we did see a rise in interest from a lot of journalists being like, ‘Oh, now I really need to see what’s in the book.’
When I started the book, OpenAI said that they would cooperate with the book, and we had discussions for almost six months of them participating in the book. And then at the six-month mark, they suddenly reversed their position. I was really disheartened by that, because I felt like now I have a much harder task of trying to tell this story and trying to accurately reflect their perspective without really having them participate in the book.
But I think it ended up making the book a lot stronger, because I ended up being even more aggressive in my reporting… So in hindsight, I think it was a blessing.
Why do you think OpenAI reversed its decision to talk to you, but talked to other authors writing books about OpenAI? Do you have any theories?
When I approached them about the book, I was very upfront and said, ‘You know all the things that I’ve written. I’m going to come with a critical perspective, but obviously I want to be fair, and I want to give you every opportunity to challenge some of the criticisms that I might bring from my reporting.’ Initially, they were open to that, which is a credit to them.
I think what happened was it just kept dragging out, and I started wondering how sincere they actually were or whether they were offering this as a carrot to try and shape how many people I reached out to myself, because I was hesitant to reach out to people within the company while I was still negotiating for interviews with the communications team. But at some point, I realized I’m running out of time and I just need to go through with my reporting plan, so I just started reaching out to people within the company.
My theory is that it frustrated them that I emailed people directly, and because there were other book opportunities, they decided that they didn’t need to participate in every book. They could just participate in what they wanted to. So it became kind of a done decision that they would no longer participate in mine, and go with the others.
The book ends at the beginning of January 2025, and so much has happened since then. If you were going to keep writing this book, what would you focus on?
For sure the Stargate Project and DeepSeek. The Stargate Project is just such a perfect extension of what I talk about in the book, which is that the level of capital and resources, and now the level of power infrastructure and water infrastructure that is being influenced by these companies is hard to even grasp.
Once again, we are getting to a new age of empire. They’re literally land-grabbing and resource-grabbing. The Stargate Project was originally announced as a $500 billion spend over four years. The Apollo Program cost $380 billion over 13 years, adjusted to 2025 dollars. If Stargate actually goes through, it would be the largest amount of capital spent in history to build infrastructure for a technology whose track record is ultimately still middling.
We haven’t actually seen that much economic progress; it’s not broad-based at all. In fact, you could argue that the current uncertainty that everyone feels about the economy and jobs disappearing is actually the real scorecard of what the quest for AGI has brought us.
And then DeepSeek… the fundamental lesson of DeepSeek was that none of this is actually necessary. I know that there’s a lot of controversy around whether they distilled OpenAI’s models or actually spent the amount that they said they did. But OpenAI could have distilled their own models. Why didn’t they distill their models? None of this was necessary. They do not need to build $500 billion of infrastructure. They could have spent more time innovating on more efficient ways of reaching the same level of performance in their technologies. But they didn’t, because they haven’t had the pressure to do so with the sheer amount of resources that they can get access to through Altman’s once-in-a-generation fundraising capabilities.
What do you hope readers will take away from this book?
The story of the empire of AI is so deeply connected to what’s happening right now with the Trump Administration and DOGE and the complete collapse of democratic norms in the U.S., because this is what happens when you allow certain individuals to consolidate so much wealth, so much power, that they can basically just manipulate democracy.
AI is just the latest vehicle by which that is happening, and democracy is not inevitable. If we want to preserve our democracy, we need to fight like hell to protect it and recognize that the way Silicon Valley is currently talking about weaponizing AI as a sort of a narrative for the future is actually cloaking this massive acceleration of the erosion of democracy and reversal of democracy.
Empire of AI will be published by Penguin Random House on Tuesday, May 20. You can purchase the book through Penguin, Amazon, Bookshop.org, and other retailers.
Editor’s Note: This conversation has been edited for clarity and grammar.
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.