OpenAI Researcher Dan Roberts on What Physics Can Teach Us About AI

Dan Roberts: In the 40s, the physicists went to the Manhattan Project. Even if they were doing other things, that was the place to be. And so now AI is the same thing and, you know, basically OpenAI is that place. So maybe we don’t need a public sector organized Manhattan Project, but, you know, it can be OpenAI.

Sonya Huang: Joining us for this episode is Dan Roberts, a former Sequoia AI Fellow who recently joined OpenAI as a researcher. This episode was recorded on Dan’s second-to-last day at Sequoia, before he knew that he would go on to become a core contributor to o1, also known as Strawberry. 

Dan is a quantum physicist who did undergraduate research on invisibility cloaks before getting a PhD at MIT and doing his postdoc at the legendary Princeton IAS. Dan has researched and written extensively about the intersection of physics and AI. 

There are two main things we hope to learn from Dan in this episode. First, what can physics teach us about AI? What are the physical limits on an intelligence explosion and the limits of scaling laws, and how can we ever hope to understand neural nets? And second, what can AI teach us about physics and math and how the world works?

Thank you so much for joining us today, Dan.

The path to physics

Dan Roberts: Thanks. Delighted to be here on my, probably, second-to-last day at Sequoia. Depending on when this airs and how you’re going to talk about it.

Pat Grady: You will always be part of the Sequoia family, Dan. 

Dan Roberts: Thanks, I appreciate it.

Sonya Huang: Maybe just to get started, tell us a little bit about “Who is Dan?” You have a fascinating backstory. I think you worked on invisibility cloaks back in college. Like, what led you to become a theoretical physicist in the first place?

Dan Roberts: Yeah, I think–and this is, you know, my stock answer at this point, but I think it’s true–I was just an annoying three year old who never grew up. I asked “why” all the time. Curious. How does everything work? I have a 19-month-old at home right now, and I can see the way he followed the washing machine repairman around and had to look inside the washing machine. So like, I think I just kept that going. And when you’re more quantitatively oriented than not, rather than going to philosophy I think you sort of veer into physics. And that was sort of what interested me. How does the world work? What is all this other stuff that’s out there?

The question that you didn’t ask, but maybe I’ll just answer it ahead of time–the sort of inward facing stuff felt less quantitative and more in the realm of the humanities. Like, so “What is all this stuff?” That’s pretty physics-y. “What am I? Who am I? What does it mean to be me?” That felt not very science-y at all. But with AI it sort of seems like we can think about both “What does it mean to be intelligent,” and also “What is all this other stuff?” in some of the same frameworks. And so that’s been very exciting for me.

Sonya Huang: So we should be trying to recruit your 19-month-old right now is what you’re saying?


Dan Roberts: Oh yeah. Absolutely. He grew out of one of his Sequoia onesies, but he fits into a Sequoia toddler t-shirt now. And so he definitely is ready to be a future founder.

Sonya Huang: I guess, at what point did you know that you wanted to think about AI? At what point did that switch start to flip?

Dan Roberts: Yeah. So, I think like many people, when I discovered computers, I wanted to understand how they worked and how to program them. In undergrad, I took an AI class, and it was very much good old-fashioned AI. A lot of the ideas in that class are actually coming back to be relevant, but at the time it seemed not very practical. It was a lot of, you know, “If this happens, do that.” And there was also some game-playing there. That was sort of interesting. It was very algorithmic, but it didn’t seem related to what it means to be intelligent.

What does it mean to be intelligent?

Pat Grady: Can we just pick up on that real quick? Like, what do you think it means to be intelligent?

Dan Roberts: That’s a great question. 

Pat Grady: I can see wheels turning.

Dan Roberts: Yeah. Well, this is one of those questions where I don’t have a stock answer, but I think it’s important to not say nonsense. One of the things that’s exciting to me about AI is the ability to have systems that do what humans do, and to be able to understand them–from “What are the lines of Python that cause that system to do something?” to tracing that through and understanding what the outputs are, and how the system can, like, see and classify. You know, “What is a cat? What is not a cat?” Or can write poetry. And, you know, it’s just a few lines of code. Whereas, if you’re trying to study humans and ask, “What does it mean for humans to be intelligent?” you have to go from biology, through neuroscience, through psychology, up to other higher-level ways of approaching this question.

And so, I think maybe a nice answer for intelligence–at least the things that are interesting to me–is “the things that humans do.” And now, if you pull back the answer that I just gave a second ago, that’s how I connect it to AI. AI is taking pieces of what humans do and giving us an example that’s kind of simple and easy to study, that we might use to understand better what it is that humans do.

The path to AI

Sonya Huang: So Dan, you mentioned when you were, you know, studying AI in college, there was a lot of what sounds like kind of hard-coded logic and brute force type approaches. Was there a moment that kind of clicked for you, of like, “Oh, okay. This is different”? Was there a key result or a moment where it was like, “Okay, we’re going bigger places than kind of the ‘if this, then that’ logic of the past”?


Dan Roberts: Yeah, actually it didn’t click. I sort of wrote it off. And it would be great if there was a separation of, like, ten years between then and the next thing that I’m going to say, but actually the writing-off maybe lasted a year or two, because then I went to the UK for the first part of grad school–I spent a very long time in grad school–and I discovered machine learning and a more statistical approach to artificial intelligence, where you have a bunch of examples: large amounts of data. Or at the time, maybe we wouldn’t have said large amounts of data; we would have said you have data examples of a task that you want to perform. And there are different ways that machine learning works, but you write down a very flexible algorithm that can adapt to these examples and start to perform in a way similar to the examples that you have.

And that approach borrows a lot from physics. At the time–so I graduated college in 2009. Discovered machine learning in 2010, 2011. 2012 is the big year for deep learning. So, you know, there’s not a big separation here between write-off and rediscovery. But I think–and machine learning clearly existed in 2009, it just wasn’t related to the class that I took–this approach made a lot of sense to me. And, you know, I got lucky, and then it started to have real progress and seemed to fit in a framework that I understood scientifically. And I got very excited about it.

Sonya Huang: Why do you think there are so many people who come from a similar background or similar path to you? Like, a lot of ex-physicists working on AI. Like, is that a coincidence? Is that herd behavior, or do you think there’s something that makes physicists particularly well suited to understanding AI?

Dan Roberts: I think the answer is yes to all the ways you… 

Sonya Huang: All the above. 

Dan Roberts: You know, physicists infiltrate lots of different subjects, and we get parodied about the way that we go about trying to use our hammers to tackle these things that may or may not be nails. Throughout history, there are a lot of times that physicists have contributed to things that look like machine learning. I think in the near term, the path that physicists used to take when they didn’t remain in academia often was going into quantitative finance, then data science. And I think machine learning, and its realization in industry, was exciting because, again, it’s something that feels a lot like actual physics and is working towards a problem that’s very interesting and exciting for a lot of people.

You know, you’re doing this podcast because you’re excited about AI. Everyone’s excited about AI. And in many ways it’s a research problem that feels a lot like the physics that people were doing. But I think the methods of physics actually are different from the methods of traditional computer science and very well suited for studying large scale machine learning that we use to work on AI.

Traditionally, physics involves a lot of interplay between theory and experiment. You come up with some sort of model that you have some theoretical intuition about. Then you go and do a bunch of experiments, and you validate that model or not. And there’s this tight feedback loop between collecting data, coming up with theories, coming up with toy models. Understanding moves forward through that loop, and we get these really nice explanatory theories. And with the way big deep learning systems work, you have this same tight feedback loop, where you can do a number of experiments. The sort of math involved is very well suited to the math that a lot of physicists are familiar with. And so I think it’s very natural for a lot of physicists to work on this. And a number of those tools differ from the traditional–at least theoretical–computer science and theoretical machine learning tools for studying the theory of machine learning. There’s maybe also a differentiation between just being an awesome engineer and being a scientist, and there are tools from doing science that are helpful in studying these systems.

Pat Grady: Dan, you wrote this, what I thought was a beautiful article, Black Holes and the Intelligence Explosion, and in there you talk about this concept of, sort of, the microscopic point of view, and then the system level point of view, and how physics really equips people to think about the system level point of view, and that has a sort of complementary benefit to understanding these systems. Can you just take a minute and kind of explain sort of microscopic versus system level, and how the physics influence helps to understand the system level?

Dan Roberts: Sure. So, let me start with an analogy that I think, like, goes even further than an analogy. Going back–what year is it? Maybe like 200 years or so–around the time of the Industrial Revolution, there were steam engines and steam power and a lot of technology that resulted from this and ultimately powered industrialization. In the beginning, there was a lot of engineering of these steam engines, and there was this high-level theory of how this works called thermodynamics, where–and I imagine everyone’s seen this in high school–there’s the Ideal Gas Law that tells you that there’s some relationship between pressure and volume and temperature. And, you know, these are very macro-level things. Like, you can buy a thermometer. You can also measure the volume of your room, and you can buy a barometer as well. Maybe people don’t, or they look it up on the weather report. But, you know, these are measurements that we use and we talk about. But then underlying this–and it took us a bit longer to validate this and understand it–there’s the notion of atoms and molecules, the air molecules bouncing around. And somehow we now understand that those air molecules give rise to things like temperature and pressure and volume. I guess it is easier to understand that the gases, the molecules, are confined to a room.

But there’s a precise way in which you can start with a statistical understanding of those molecules and derive thermodynamics–like, derive the Ideal Gas Law from it. And you can go further than that, to see why it’s called “ideal”: it’s wrong. It’s just a toy model. But there are corrections to it, and you can sort of understand, from the microscopic perspective–which is the molecules, which we don’t really see or interact with on a day-to-day basis–how their statistical aggregate properties give rise to this behavior that we do see at the macro scale.
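For concreteness, here is the textbook kinetic-theory version of the derivation being sketched–standard physics, nothing specific to the conversation:

```latex
% Macro (thermodynamic) level: the Ideal Gas Law
PV = N k_B T
% Micro (statistical) level: pressure is the averaged momentum transfer
% of N molecules of mass m bouncing off the walls,
PV = \tfrac{1}{3} N m \langle v^2 \rangle ,
% and temperature is defined by the mean kinetic energy,
\tfrac{1}{2} m \langle v^2 \rangle = \tfrac{3}{2} k_B T .
% Eliminating \langle v^2 \rangle recovers PV = N k_B T; density-dependent
% corrections (e.g. van der Waals) are exactly the sense in which the
% "ideal" law is only a toy model.
```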

And to get to your question, I think there’s a similar thing going on with deep learning systems. I wrote a book with Sho Yaida and Boris Hanin on how to apply these sorts of ideas to deep learning–at least an initial framework that allows you to start doing this. The sort of micro perspective is that you have neurons and weights and biases, and we can talk about in detail how that works. When people think of the architecture–some people say circuits–there are specific ways in which these things are wired: there’s an input signal, which might be an image or text, and then there are many parameters. And, you know, it’s very simple to write down. It’s not that many lines of code, even taking into account the machine learning libraries–it’s, you know, a very simple set of equations. But there’s a lot of weights. There’s a lot of numbers that you have to know in order to get it to do something. And that’s the micro… that’s like the molecules perspective.
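As a minimal sketch of that point–a generic two-layer network in plain NumPy, not any particular production model–the architecture really is a few lines, while the behavior lives in hundreds of thousands of numbers:

```python
# The "architecture" is a handful of lines; the behavior lives in the weights.
import numpy as np

rng = np.random.default_rng(0)

# Two-layer MLP for, say, flattened 28x28 images and 10 classes
W1, b1 = rng.normal(size=(784, 512)) * 0.01, np.zeros(512)
W2, b2 = rng.normal(size=(512, 10)) * 0.01, np.zeros(10)

def forward(x):
    h = np.maximum(0.0, x @ W1 + b1)  # neurons: weighted sums + nonlinearity
    return h @ W2 + b2                # output logits

x = rng.normal(size=784)              # a stand-in input image
print(forward(x).shape)               # (10,)

# The micro-level "molecules": the numbers training has to set
n_params = W1.size + b1.size + W2.size + b2.size
print(n_params)                       # 407050
```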

And then there’s the macro perspective, which is, well, what did it do? Did it produce a poem? Did it solve a math problem? How do we go from those weights and biases to that macro perspective? For statistical physics to thermodynamics, we understand that completely. And you can imagine trying to do the same sort of thing–literally applying the same sorts of methods–to understand how the underlying micro, statistical behavior of these models leads to the sort of macro or, as you said, system-level perspective.

Sonya Huang: Dan, maybe speaking of scaling laws–and I think you were at our event, AI Ascent–Andrej Karpathy mentioned that current AI systems are like 5 or 6 orders of magnitude off in efficiency compared to biological neural nets. How do you think about that? Do you think scaling laws get us there? Just a combination of scaling laws plus hardware getting more efficient, or do you think that there are kind of big step-function leaps that need to come in research?

Dan Roberts: There’s maybe two things that could be meant here. One is that the way humans seem to work at a similar scale to AI systems is much more efficient. You know, we don’t need to see trillions of tokens before we speak. My toddler is already starting to speak in sentences, and he’s been exposed to far fewer tokens than a typical large language model. And so there’s some sort of disconnect between human efficiency at learning and what large language models do. Of course, they’re very different systems; the way in which they learn is right now very different. And so in some sense that’s to be expected. So, there’s this gap here that you could imagine bridging.
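A rough back-of-envelope comparison–with illustrative numbers that are my assumptions, not figures from the conversation–lands in the same 5-or-6-orders-of-magnitude range Karpathy cited:

```python
# Toddler language exposure vs. LLM pretraining tokens (all inputs assumed).
words_per_day = 10_000          # assumed daily language exposure for a young child
days = 19 * 30                  # a 19-month-old
toddler_tokens = words_per_day * days    # ~5.7 million

llm_tokens = 10**13             # assume ~10 trillion pretraining tokens

print(f"{llm_tokens / toddler_tokens:.1e}")  # ~1.8e+06, i.e. ~6 orders of magnitude
```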

There’s another thing that I think is not what you meant, but I think is sort of the thing to answer with respect to scaling laws–and I talked about this a bit in the article, but lots of people seem to talk about this–which is, “What is the final GPT?” You know, there’s GPT-4 right now, and it could be other companies as well, but since I’m going to join OpenAI, let me represent my new company, right? So is it going to be six? Is it going to be seven? At some point, assuming we have to keep scaling things up, there are things that are going to break. Whether they’re economic–we’re going to try to train a model that costs more than the world’s GDP, or GWP, however the “D” works for the world. Or we’re not going to be able to produce enough GPUs. Or the cluster is going to cover the surface of the Earth. A lot of these things are going to break down at some point, and probably the economic one happens first. So how many more iterations do we get before we run out of actually being able to scale practically? And where does that get us? And then, I think, to tie those two perspectives together, there’s scaling on its own. And of course, it’s impossible to disentangle this, because people are making things more efficient.
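To make “the economic one happens first” concrete, here is a toy extrapolation; every number in it is an assumption for illustration:

```python
# If frontier training cost grows ~10x per generation, how many generations
# fit under gross world product (~$100 trillion)?
cost = 1e8         # assume ~$100M for a current frontier training run
gwp = 1e14         # gross world product, roughly
generation = 4     # label the starting point "GPT-4"

while cost < gwp:
    cost *= 10     # assumed 10x cost growth per generation
    generation += 1

print(generation)  # 10 -- the ceiling arrives within single-digit generations
```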

But, you know, you could imagine taking literally what GPT-2 was, which was the initial big model, and just scaling it up. Is that going to get us to some, you know, super different, exciting, economically powerful–or however you want to define what the end state of AI research and AI startups and AI in industry is? Or do we need lots of new exciting ideas? And again, of course, you can’t really disentangle these, but I think the general scaling hypothesis is that it’s just the scaling, and it’s not the ideas that matter. Whereas the “How do we get to be efficient like humans?” I think requires, like, non-trivial ideas. And to answer your question, the reason I’m excited about joining OpenAI is that I think there is high leverage to be had in the ideas, you know, in going beyond scaling, and that we will need that in order to get to the next steps. I have no idea what I’ll be working on–but when this airs, I guess I will know what I’m working on–but, you know, that’s what’s really exciting to me.

The pendulum between scale and ideas

Pat Grady: Dan, is there almost like a pendulum that swings back and forth between scale and ideas in terms of how people apply their efforts in the world of AI? Like, transformers came out. Great idea. Since then, we’ve largely been in this race to scale. It feels like things are starting to asymptote for a bunch of practical reasons that you mentioned. Is the pendulum swinging back toward ideas as the currency? You know, it’s less now about who can, you know, have the biggest GPU cluster and more about finding new architectural breakthroughs, whether that’s, you know, reasoning or something else?

Dan Roberts: Yeah. That’s a really great question. There’s this article by Richard Sutton called The Bitter Lesson–not the bitter pill–and it basically gives the argument that ideas aren’t important, that scale is what you need, and that all the ideas are always trumped by scaling things up. It says a bunch of things, but maybe that’s a high-level takeaway. And, you know, there’s a sense of this where there are a lot of interesting ideas that came out in the 80s and 90s that people didn’t really have the scale to explore. And then, I remember after AlphaGo, when DeepMind was writing a lot of papers, people were rediscovering those papers and re-implementing them in deep learning systems. But this was sort of still before people realized, “No, the thing that you need to do is scale up.” And even now, with transformers, people are exploring other architectures, or even simpler architectures that we knew before, that seem to be able to do the same things–there’s a notion that maybe scaling laws don’t come from the architecture, as long as the architecture isn’t sick in some way.

They come from sort of the underlying data process and having large amounts of data, rather than from having a special idea. I think the real answer is that there’s a balance between the two. Scale is hugely important, and maybe it was just not understood how important. And we also didn’t have the resources to scale things up at various times. You know, the things that have to go into producing these GPU clusters that are producing these models–you guys know this as well–there are a lot of parts along the supply chain, or along the product chain, whatever you actually call it, in order to make those things happen and to deploy them. And even the way GPUs were originally, they’ve now co-evolved to be well suited for these models. And in some sense, the reason you can think of the transformer as a good idea is that it was designed to be well suited to train on the systems we had at the time. So sure, these other architectures could do it at an ideal scientific level, but at a practical level, it was important to get something that was able to reach that scale.

So I think, you know, if you broaden “ideas” to be the sort of thing that’s married with scale in some way, then I still think it matters. Like, someone came up with the idea of deep learning. That was an important idea. Pitts and McCulloch came up with the original idea for the neuron. Rosenblatt came up with the original perceptron. Going back 80 or so years, there are a lot of people who made important discoveries that were ideas that contributed. So, I think it’s both. But it’s easy to see how, if you’re bottlenecked, people think about ideas. And then if you unlock a new capacity of scale somehow, then you just see a huge set of results, and it seems like scale is super important. And I really think it’s more of a synergy between the two.

The Manhattan Project of AI

Sonya Huang: Maybe on the topic of the race to scale, Dan, you mentioned kind of just the economic constraints and realities, which I guess are more, like, practically a ceiling in the private sector. You also mentioned the Manhattan Project earlier in terms of things that physicists have been involved with. Like, do you think we need a Manhattan Project style thing for AI? Like at the nation state or at the international level?

Dan Roberts: Well, one thing I can say is that part of the process that led me to OpenAI is I was talking with your partner, Shaun Maguire, who brought me to Sequoia in the first place, trying to figure out: is there a startup that makes sense for me to work on, that has the right mix of, sort of, scientific questions, research questions, and also works as a business? And I think it was Shaun that said–and I don’t mean the analogy in terms of the negative impact you might associate with the Manhattan Project, but just in terms of the scale and the organization–he said, “You know, in the 40s, the physicists went to the Manhattan Project. Even if they were doing other things, that was the place to be.” And so now AI is the same thing, and he basically said OpenAI is that place. So maybe we don’t need a public sector organized Manhattan Project, but, you know, it can be OpenAI.

Sonya Huang: OpenAI as the Manhattan Project. I love that. 

Dan Roberts: Well, maybe that’s not a direct quote that we want to be taken out of context, but I think in terms of…

Sonya Huang: The metaphorical Manhattan Project.

Dan Roberts: Yeah. In terms of scale and ambition. I mean, I think a lot of physicists would love to work at OpenAI for a lot of the same reasons that they probably were excited to… Well, okay, there’s a number of different reasons. Maybe we just have to leave it as a nuanced thing rather than making broad claims.

AI’s black box

Sonya Huang: Can we talk a little bit about this… Like, can we ever understand AI, especially as we go to these deep neural nets, or do you think it’s a hopeless black box, so to speak?

Dan Roberts: Yeah, this is my answer to the “What are you a contrarian about?” question. Although maybe, you know, on the internet everyone takes every side of every position, so it’s hard to say you have a contrarian position. But within AI communities, I think my contrarian position is that we can really understand these systems. You know, physics systems are extremely complicated, and we have made a huge amount of progress in understanding them. I think these systems sit in the same framework. And another principle that Sho and I talk about in our book, and that’s a principle of physics, is that there’s often extreme simplicity at very large scales, basically due to statistical averaging–or, more technically, the central limit theorem. Things can simplify–and I’m not saying this is what happens exactly in large language models, of course not–but I do think that we can apply sort of the methods that we have, and also, you know, maybe hopefully have AI that can help us do this in the future. And by AI, I mean tools. Not, like, individual intelligences just running on their own and solving these problems. But I guess I feel, at the extreme end, that this is not going to be an art–that the science will catch up, and that we will be able to make extreme leaps in really understanding how these systems work and behave.
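As a toy illustration of that large-scale simplicity–a generic central limit theorem demo in NumPy, not a claim about trained language models–the preactivations of a wide random layer become Gaussian as the width grows:

```python
# z = (1/sqrt(n)) * sum_i W_i x_i with deliberately non-Gaussian (uniform)
# weights; statistical averaging pushes z toward a Gaussian as n grows.
import numpy as np

rng = np.random.default_rng(0)

def preactivations(width, n_samples=20_000):
    x = rng.uniform(-1, 1, size=width)                       # a fixed input
    W = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(n_samples, width))
    return (W @ x) / np.sqrt(width)

for width in (2, 10, 500):
    z = preactivations(width)
    # excess kurtosis is 0 for a Gaussian; watch it shrink with width
    excess_kurtosis = np.mean((z - z.mean())**4) / z.var()**2 - 3
    print(width, round(float(excess_kurtosis), 3))
```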

What can AI teach us about physics?

Pat Grady: So, Dan, we’ve talked a bunch about what physics can teach us about AI. Can we talk a bit about what AI can teach us about physics? Are you optimistic about domains like physics and math and how these emerging models can, you know, probe further into those domains?

Dan Roberts: Yes, I’m definitely optimistic. I guess my perspective is that math will be easier than physics, which maybe betrays the fact that I’m a physicist, not a mathematician. And I’ll say–I can explain why I think that in a second–but, you know, I still have a lot of friends that work in physics, and there’s a growing sense, maybe even approaching a dread, that maybe this is actually the answer to “Why do physicists work on AI?” Because if what you care about is the answer to your physics question and you want to make it happen as soon as possible, what is the highest-leverage thing you can do? Maybe it’s not working on the physics question you care about, but working on AI–because you think that the AI might end up solving those questions very rapidly anyway. And I don’t know to what extent anyone really takes it seriously, but within the theoretical physics community that I come from, this is sort of a thing that gets thrown around and discussed.

I think maybe to give a more object-level answer: what’s exciting about math–and maybe when you have Noam Brown on, if you have him on, he’ll talk about this, but this is something that he’s talked about for a while, before he joined OpenAI–is that we made a lot of progress in solving games by doing more than just looking up what strategy we should use to play them, but also being able to simulate forward. You know, the way that if I’m in a very hard position in a particular game, rather than just playing on intuition, I might sit and think about what I should do. Sometimes this goes under the name inference-time compute, rather than training-time compute or pre-training. And there’s a sense in which what it means to do reasoning is very related to this ability to sit and think. So we know how to do it for games, because there’s a very clear win and loss signal. You can simulate ahead and sort of figure out what it means to do well or not. And I think some parts of math–again, I’m not a mathematician, and I’m always scared about talking about math publicly and saying something wrong that will upset mathematicians–it seems like certain types of math problems are not as constrained as games, but are still constrained enough that there’s a notion of, like, finding a proof, right? There are definite problems in terms of search–in terms of how you figure out what the next, you know, move in the proof is–but the fact that we might call it a move suggests that there are things in math that feel a lot like games. And so we might think the fact that we can do well at games maybe means that we can do well at certain types of mathematical discovery.
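A minimal sketch of the “simulate forward” idea, using a toy game (Nim) chosen purely for illustration–nothing here is specific to any lab’s method:

```python
# Exhaustive game-tree search works because games give an unambiguous
# win/loss signal at the leaves -- the property proof search in math
# partially shares ("moves" toward a checkable proof).
from functools import lru_cache

@lru_cache(maxsize=None)
def value(stones, my_turn):
    """Nim: take 1 or 2 stones per move; a player who cannot move loses."""
    if stones == 0:
        return -1 if my_turn else +1   # whoever is to move has lost
    results = [value(stones - take, not my_turn)
               for take in (1, 2) if take <= stones]
    return max(results) if my_turn else min(results)

print(value(5, True))   # +1: five stones is a win for the player to move
```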

Can AI help solve physics’ biggest problems?

Pat Grady: Well, I was going to say since you mentioned Noam, he likes to use the example with test-time compute of whether it could help to prove the Riemann hypothesis. Is there a similar problem or hypothesis in the world of physics that you are optimistic AI can help to solve sometime in our lifetimes?

Dan Roberts: Yeah. So, I mean, there’s a Millennium Problem relating to physics–and if I try to remember exactly what it is, I’m sure I will butcher it and then no one will believe that I’m actually a physicist–but it’s a mathematical physics question related to the Yang-Mills mass gap. But what I wanted to say is that I think some of the flavor of what physicists care about in doing physics feels a little different–this is where I might get in trouble–a little different than the mathematical-proof-type things. Physicists are known to be more informal and, you know, hand-wavy. But on the other hand, maybe what saves them is the interplay between experiment and the sorts of models that physicists study: they have things that are informal and hand-wavy, but very explanatory. And then the mathematicians–it’s like we were saying earlier: the engineers discovered all the exciting industrial machines, the physicists cleaned up a bunch of the theory about how that works, and then the mathematicians come later and formalize everything and clean it up even more. So there are mathematicians, or mathematical physicists, who clean up a lot of it–who make it proper and try to understand in formal ways some of the stuff that physicists do.

But I think the key point there is that the sorts of questions that are interesting to physicists maybe don’t look like proofs, and maybe it’s not about, given a particular model, how we actually solve it. Once things are set up correctly, it’s often, you know, people trained in the field who are able to figure out how to analyze those systems. It’s more the other stuff. Like, what is the right model to study? Does it capture the right sort of problems? Does this relate to the thing that you care about? What are the insights you can draw from it? And so for AI to help there, I think it would look different than the way we’re trying to build AI systems for math. So rather than, “Here’s the word problem, go solve this high-school-level problem,” or “Prove the Riemann hypothesis.”

It’s like, you know, the questions in physics are, “What is quantum gravity? What happens when something goes into a black hole?” And that’s not, like, “Start generating tokens on that.” What does that even look like? If you go to a physics department, people hang out at the blackboard. They chat about things. Maybe they sketch mathematical things, but there’s a lot of other stuff that goes into this. And so maybe the sort of data that you need to collect looks more like that. Or maybe it looks like the random emails and conversations on Slack and the scratch work. And so, I mean, there are definitely tools that we can use, like, “Help me understand this new paper so I don’t have to spend two weeks trying to study it and understand it.” You know, maybe let me ask questions about it. I think there are problems with the way that’s currently implemented.

But, you know, I think there are a lot of tools that will help accelerate physicists, just like Mathematica, which is a software package that does integrals–and it does a lot more than that. Sorry, Stephen Wolfram. But I use it to do integrals, and sometimes it doesn’t know an integral and you can look it up in these integral tables. Anyway. This applies to other branches of science too. I think the ways in which the questions are asked, and what it means to do science in different fields, can look further and further from games, let’s just say. And to the extent that that’s true–I mean, it’s not even clear that we’ll need lots of ideas, or rather, we will need lots of ideas–but I think we’ll just have to approach them all differently. And maybe eventually we’ll have a universal thing that knows how to do all of it. But initially, at least to me, a lot of these things feel a little different from each other.

Sonya Huang: You’ll have a front row seat to it, in part because you’re also on the prize committee for the AI Math Olympiad, which is something I’m personally super interested in. 

Maybe to your last point on, kind of, maybe eventually this stuff generalizes: why do you think people are so focused on solving the hardest problems today, like physics and math? Those were the subjects that everyone was terrified of in school, right? Where it feels like there are a lot of other domains that are also unsolved for now. Do you think going for the hardest domains first kind of lets you get towards a generalized intelligence? Like, how does solving these different domains fit together in the grander puzzle?

Dan Roberts: Yeah. The first thing that comes to mind when you said that is to just push back and say, “Well, it’s not hard. These are the easy domains.” I mean, I’m bad at biology. It doesn’t make any sense to me at all. My girlfriend actually is in bioengineering and is in biotech. And so, like, what she does just makes no sense to me. I can’t understand any of it, whereas physics makes complete sense to me. I think maybe a better answer, or a less glib answer, is that, like I was trying to say about math, there are constraints. And, you know, in particular with math, a lot of it is unembodied. You don’t have to go and do experiments in the real world. These problems are sort of self-contained, and that’s close to generating text, the way language models work, or even the way some reinforcement learning systems work for games. And so I think the further that you go from that, the messier things become, the harder it probably is, and also the harder it is to get the right kind of data to train these systems.

If you want to build an AI–and people are trying to do this, but it seems difficult–if you want to build an AI system that solves biology, I guess you need to also make sure robotics works, you know, so that it can do those sorts of experiments, and it has to understand that sort of data. Or maybe it has humans do it. But for a self-sustaining AI biologist, it seems like there are a lot of things that are going to go into it. I mean, along the way, we’ll have things like AlphaFold 3, which just came out, and which I didn’t get a chance to read the details of, but I saw that they’re trying to use it for drug discovery. And so I think each of these fields will have things developed along the way. But the fewer constraints there are, and the messier and more embodied it is, the harder it will be to accomplish.

Sonya Huang: No, that makes sense. So, like, hard for a human doesn’t correlate to hard for a machine.

Dan Roberts: Yeah. Plus, maybe humans disagree about what’s hard or not.

Sonya Huang: Some of us think more like machines, I guess. And then I guess the second question was like, do you think it all coalesces into, like, one big model that understands everything? Because right now it seems like there’s a lot of domain-specific problem solving that’s happening.

Dan Roberts: Yeah. I mean, the way things are going, it seems like the answer should be yes. It’s really dangerous to speculate in this field, because everything you say is wrong. Usually much sooner than later.

Sonya Huang: Good thing we’re on record. We’ll hold you to it.

Dan Roberts: Yeah, exactly. But also, like, what does it mean to be different? There’s a trivial way to make the question meaningless: by, like, saying the model is the union of all those other models. And there’s a sense in which mixture of experts was originally meant to be that–it’s not that in practice at all. But, you know, there is a sliding scale here. It does seem like people, at least at the big labs, are going for the one big model and have a belief that that’s, well… I don’t know, but maybe in the future I will understand what the philosophy is there.

Lightning round

Pat Grady: Dan, we have a handful of more general questions to kind of close things out here. So I’ll start with the high level one. If we think kind of short-term, medium-term, long-term and call it, you know, five months, five years, five decades, what are you most excited or optimistic about in the world of AI?

Dan Roberts: Five years ago was after the transformer model came out, but it was around maybe when GPT-2 came out. So it seems like, you know, for the last five years we’ve been doing scaling. I imagine within the next five years, we’ll see that scaling will terminate. And maybe it will terminate in, you know, a utopia of some kind that people are excited about, where we’re all post-economic and so forth, and we’ll have to shut down all your funds and, you know, return Monopoly money because money won’t matter. Or we’ll see that we need lots of ideas. Maybe there will be another AI winter.

I imagine that–and again, it’s scary to really speculate–but I imagine something will be interestingly different within five years about AI. And it might just be that AI is over and we’re on to, you know, the next exciting investment opportunity, and everyone will shift elsewhere. And, you know, I’m not saying that–that’s not what’s motivating me about AI. But maybe five years is enough time to see that. And I think in one year–I mean, or there’s a five. I messed this up. Whatever. Maybe it was five months, I don’t remember. I’m sorry.

Pat Grady: It’s five months. It’s okay, it’s okay. These are approximations. I know you said physicists are very hand-wavy. Venture capitalists are very hand-wavy. These are approximations.

Dan Roberts: In physics there’s–I like to joke that there’s like three numbers. There’s zero, one and infinity. And, you know, those are the only numbers that matter. You know, things are either arbitrarily small, arbitrarily large, or about order one. 

So. Okay. Good. Thanks for reminding me. But yeah, for five months, I mean, I’m excited to, well, learn what’s exciting at the forefront of a huge research lab like OpenAI. And I think one thing that will be interesting will be the delta between the next generation of models, right? Because there are ways in which things are scaling up–it’s not really public, I guess, aside from Meta, but in terms of size of data, size of models. And we see scaling laws that relate to something like the loss, and it’s hard to translate that into actual capabilities. So what will it feel like to talk to the next generation model? What will it look like? Will it have a huge economic impact or not? In terms of estimating velocity, right, you need a few points. You can’t just have one point. And we’re starting to have that with GPT-3 to GPT-4. But, you know, I feel like with the next delta, we’ll get to really see what the velocity looks like, and what it feels like going from model to model to model. And maybe I’ll be able to make a better prediction five months from now–but then I guess I probably won’t be able to tell it to you guys.

Sonya Huang: Thanks, Dan. One thing that stood out to me is just that your writing is so accessible and light and funny, and that’s not what I’m used to when I read super technical stuff. Do you think all technical writing should be informal and funny? Like, is that deliberate?

Dan Roberts: It’s definitely deliberate. I think in some sense it’s inherited–I mean, I definitely am not a serious person, but I also think it’s inherited, sort of, from the style of the field that I came from. But I’ll tell you a story. When I was a postdoc at the Institute for Advanced Study in Princeton, I was having lunch and joking around with this professor, Nati Seiberg, who’s a professor at the institute. And we got to talking–I think someone asked a question about, like, “What is a good title?” And I was like, “Oh, the title has to be a joke.” And he was on board with that. And then I was explaining that, for me, the reason to write a paper is for the jokes. Like, you have a bunch of jokes in mind, and then you want people to read those jokes. And so you have to package them into the science product, and people want to read the science product, and they’re forced to suffer through the jokes. And Nati, who is this Israeli professor, he was like, “I don’t get it. Why? Why can’t you just do the science? The jokes are great too, but you should write for the science, not for the jokes.”

And I was adamant that I write for the jokes. But I think it’s what you said: at some point, you know, you learn about the scientific method and the formal ways of doing things, and you learn all these rules, and then you grow up a bit. I had a roommate who was a linguist–he’s now a professor of linguistics at UT Austin–and he would tell me which rules I could break, where the rules come from, and why they’re important or not. And you sort of realize that you can break these rules. The ultimate goal should be: is the reader going to read it, understand it, and enjoy it? So you don’t want to do things that compromise their ability to read and understand. You don’t want to obscure things. But if it’s more enjoyable, people are more likely to read it and take the point. It’s also more fun if you’re writing it. So I think that’s where that comes from.

Sonya Huang: Dan, thank you so much for joining us today. We learned a lot. We enjoyed your jokes, and I hope you have a wonderful second-to-last day at Sequoia. Thank you for spending part of it with us. We really appreciate it.

Dan Roberts: Thanks. I was absolutely delighted to be here chatting with you guys. This was wonderful.


Korl launches a platform that orchestrates AI agents from OpenAI, Gemini, and Anthropic to hyper-customize customer messaging



It’s a conundrum: customer teams have more data than they can begin to use–Salesforce notes, JIRA tickets, project dashboards, Google Docs–but they struggle to bring it all together when crafting customer messaging that actually resonates.

Existing tools often rely on generic templates or slides and can’t provide a complete picture of customer journeys, roadmaps, project goals, and business objectives.

Korl, a startup launching today, hopes to overcome these challenges with a new platform that works across multiple systems to help create highly personalized communications. The multimodal, multi-model tool uses a mix of models from OpenAI, Gemini, and Anthropic to retrieve data and put it in context.

“Engineers have powerful AI tools, but customer-facing teams are stuck with shallow, disconnected solutions,” Berit Hoffmann, Korl’s CEO and co-founder, told VentureBeat in an exclusive interview. “Korl’s core innovation lies in our advanced multi-agent pipelines, designed to build the customer and product context that generic presentation tools lack.”

Creating personalized customer materials through a multi-source view

Korl’s AI agents aggregate information from different systems–such as engineering documentation in JIRA, outlines in Google Docs, designs in Figma, and project data in Salesforce–to build a multi-source view.

For example, once a customer connects Korl to JIRA, its agent studies existing and planned product capabilities to work out how to map data and import new product capabilities, Hoffmann explained. The platform matches product data with customer information, such as usage history, business priorities, and lifecycle stage, filling in the gaps with AI.

“Korl’s data agents gather, enrich, and structure diverse datasets from internal sources and external public data,” Hoffmann said.

The platform then automatically generates personalized quarterly business reviews (QBRs), renewal pitches, tailored presentations, and other materials for use at important customer milestones.

Hoffmann said the company’s core differentiator is its ability to deliver “polished, customer-ready materials”–such as slides, narratives, and emails–“rather than just analytics or raw insights.”

“We believe this delivers a level of operational value that customer-facing teams need today, given the pressures to do more with less,” she said.

Switching between OpenAI, Gemini, and Anthropic based on performance

Korl orchestrates a “model ensemble” across OpenAI, Gemini, and Anthropic, selecting the best model for the job at hand based on speed, accuracy, and cost, Hoffmann explained. Korl needs to perform complex, diverse tasks (nuanced narratives, data computation, imagery), so each use case is matched with the best-performing model. The company has implemented “sophisticated fallback mechanisms” to mitigate failures; early on, they observed high failure rates when relying on a single provider, Hoffmann reported.

The startup developed proprietary auto-mapping to handle diverse enterprise data schemas across JIRA, Salesforce, and other systems. The platform automatically maps them to the relevant fields in Korl.

“Rather than just semantic or field-name matching, our approach evaluates additional factors, such as data sparsity, to derive and predict field matches,” Hoffmann said.

To speed up the process, Korl combines low-latency, high-performance models (such as GPT-4o for fast context-building responses) with deeper analytical models (Claude 3.7 for more complex, customer-facing communications).

“This ensures we optimize for the best end-user experience, making context-driven trade-offs between immediacy and precision,” Hoffmann explained.
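A hypothetical sketch of that routing-with-fallback pattern–the route table, function names, and policy below are illustrative assumptions, not Korl’s actual implementation:

```python
# Route each task to a preferred model; fall back to the next provider
# on failure (the "single provider" fragility Hoffmann describes).
FAST_MODEL = "gpt-4o"             # low-latency context building
DEEP_MODEL = "claude-3-7-sonnet"  # slower, for complex client-facing drafts

ROUTES = {
    "build_context": [FAST_MODEL, DEEP_MODEL],
    "draft_deck":    [DEEP_MODEL, FAST_MODEL],
}

def complete(model: str, prompt: str) -> str:
    """Placeholder for a real provider call (OpenAI/Anthropic/Gemini SDK)."""
    raise NotImplementedError

def run_task(task: str, prompt: str) -> str:
    last_err = None
    for model in ROUTES[task]:       # try the preferred model first...
        try:
            return complete(model, prompt)
        except Exception as err:     # ...fall back if the provider fails
            last_err = err
    raise RuntimeError(f"all models failed for {task!r}") from last_err
```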

Because “security is paramount,” Korl seeks enterprise-grade privacy guarantees from providers to ensure customer data is excluded from training datasets. Hoffmann noted that its multi-model, context-driven orchestration additionally limits inadvertent exposure and data leakage.

Dealing with data that’s ‘too messy’ or ‘incomplete’

Hoffmann noted that, early on, Korl heard from customers who worried their data was “too messy” or “incomplete” to be leveraged. In response, the company built pipelines to understand the relationships between business objects and fill in the gaps, such as how to position features externally or how to align values around desired outcomes.

“Our presentation agent is what leverages that data to generate customer slides and talk tracks [guide conversations with potential customers or leads] dynamically when needed,” Hoffmann said.

She also said Korl features “true multimodality.” The platform isn’t just pulling data from various sources; it’s interpreting different types of information, such as text, structured data, charts, and diagrams.

“The critical step is going beyond raw data to answer: What story does this chart tell? What are the deeper implications here, and will they really resonate with this specific customer?” she said. “We’ve built our process to perform that crucial due diligence, ensuring the output isn’t just aggregated data, but genuinely rich content delivered with meaningful context.”

Two of Korl’s close competitors are Gainsight and Clari; however, Hoffmann said Korl differentiates itself by incorporating deep product and roadmap context. Effective customer renewal and expansion strategies require a deep understanding of what a product does, and this must be combined with analysis of customer data and behavior.

Additionally, Hoffmann said Korl addresses two “fundamental shortcomings” of existing platforms: deep business context and brand accuracy. Korl’s agents gather business context from multiple systems. “Without this comprehensive data intelligence, automated decks lack strategic business value,” she said.

When it comes to branding, Korl’s proprietary technology extracts and replicates guidelines from existing materials.

Reducing deck prep time from ‘several hours to minutes’

Early indications suggest Korl can unlock at least a 1-point improvement in net revenue retention (NRR) for mid-market software companies, Hoffmann said. That’s because it uncovers previously unrealized product value and facilitates customer communication before customers churn or make renewal or expansion decisions.

The platform also improves efficiency, cutting deck prep time for each customer call from “several hours to minutes,” according to Hoffmann.

Early customers include skill-building platform DataCamp and gifting and direct mail company Sendoso.

“They address a critical and overlooked challenge: too often, product features are launched while go-to-market (GTM) teams are unprepared to sell, support, or communicate them effectively,” said Amir Younes, Sendoso’s chief customer officer. “With Korl’s AI, GTM enablement and asset creation could be just a click away, without adding overhead for R&D teams.”

Korl entered the market today with $5 million in seed funding, in a round co-led by MaC Venture Capital and Underscore VC, with participation from Perceptive Ventures and Diane Greene (founder of VMware and former CEO of Google Cloud).


The stakes for OpenAI’s Plan B


OpenAI’s decision to scale back its ambitious corporate reorganization has drawn plenty of scrutiny, including over what the plan means for artificial intelligence safety, potential gains for investors, and an ongoing fight with Elon Musk.

What’s emerging is that, in some ways, OpenAI isn’t changing much. But there are still plenty of questions about the future of the consequential AI developer.

The latest: OpenAI announced a smaller-scale change to its famously complex structure. Remember that it was founded as a nonprofit. But in 2019, it set up a for-profit subsidiary to begin raising money from investors to fund its expensive artificial intelligence research. Then, last year, the company moved to convert itself into a for-profit entity in which the nonprofit would keep a stake but have no control.

Now, OpenAI plans to convert its for-profit subsidiary into a public benefit corporation, which would still be controlled by the nonprofit, though the size of the nonprofit’s stake remains undetermined. (Got all that?) Sam Altman, its CEO, said on Monday that the revised plan still gives his company “a more understandable structure to do the things that a company like us has to do.”

What does this mean for OpenAI’s investors? Remember that they have collectively poured nearly $64 billion into the AI lab, valuing it at $300 billion. In some ways, little has changed, some are saying privately: they still hold a piece of the company and, in fact, will benefit from a planned removal of the caps on the profits they can earn from the business.

Altman also argued on Monday that the move essentially fulfills a requirement built into a recent deal with SoftBank: become a for-profit entity by the end of the year, or lose $10 billion of a planned $30 billion investment by the Japanese tech giant.

What about Microsoft? The tech giant remains perhaps the most crucial of OpenAI’s backers, as both a major investor and a technology partner that serves most of its computing needs. Bloomberg reports, citing sources, that Microsoft is currently the only investor OpenAI is negotiating its reorganization with, and that the software titan has yet to sign off.

Rising strains between the two sides aren’t helping.

Does the move satisfy critics of OpenAI’s earlier plan? They include AI experts who worried that turning the company into a profit-minded business would incentivize it to forgo safety for money.

They also included the attorneys general of California and Delaware, the states where OpenAI is located and legally incorporated, who worried that the company would no longer put the public interest first. They also worried about whether the nonprofit’s stake in the for-profit entity would be fairly valued.

The state attorneys general said on Monday that they are reviewing the new plan.

There is one critic who is clearly not happy: Elon Musk, who co-founded OpenAI but has since filed multiple lawsuits to stop the for-profit conversion. (The company has argued that Musk, who has since founded a rival, xAI, is trying to hobble a competitor.) A lawyer for Musk said OpenAI’s new plan was “a transparent dodge that fails to address the core issues.”

Altman brushed off concerns about Musk on Monday: “We’re all obsessed with our mission,” he said. “Everyone is obsessed with Elon.”

Ford dice que las tarifas del presidente Trump podrían costarle $ 1.5 mil millones este año. Pero la compañía, que fabrica la mayoría de sus vehículos en los EE. UU., Dijo que está menos afectado por los aranceles del 25 por ciento de Trump en autopartes que otros fabricantes de automóviles. Sus acciones han caído bruscamente en el comercio previo al mercado, junto con otros fabricantes de automóviles. General Motors dijo la semana pasada que los impuestos aumentarían los costos de hasta $ 5 mil millones este año. Ford se unió a los fabricantes de automóviles europeos, incluidos Mercedes-Benz y Stellantis para desechar su pronóstico, citando la incertidumbre sobre los aranceles.

El imperio criptográfico de Trump está complicando la nueva legislación amigable para el sector. La Ley Genius, que busca establecer directrices para las llamadas stablecoins, se ha encontrado con la oposición de los senadores demócratas que argumentan que podría beneficiar directamente al negocio de divisas digitales de la familia Trump, citando informes por los tiempos. En otros lugares, un grupo de comerciantes obtuvo una ganancia de casi $ 100 millones comprando una memoria vinculada a Melania Trump minutos antes de que se hiciera pública en enero, según el Financial Times.

The Trump administration escalates its feud with Harvard. Federal officials have disqualified the university from future research grants, another tactic apparently aimed at bringing the school to the negotiating table over additional oversight. Relatedly, officials in France and Brussels are trying to capitalize on Trump's clashes with academia by offering big financial incentives to lure American scientists to Europe to do their work.

Deliveroo shares jump on a $3.9 billion sale to DoorDash. The deal would let DoorDash, a giant in the U.S. food delivery industry, expand further into Europe and the Middle East. Separately, Wonder, the owner of Grubhub, has closed a funding round that values it at more than $7 billion, Bloomberg reports.

President Trump's threat to extend tariffs to Hollywood opens a new front in his global trade war.

Nearly all of his levies have focused on manufactured goods, from toys to steel. But the proposed 100 percent tariff on movies produced outside the United States targets services, which make up more than 70 percent of the country's GDP and are the main growth engine of the U.S. economy. The jumbo sector also enjoys a trade surplus.

Questions are swirling around Trump's idea. “Is it just movies shown in U.S. theaters, or does it include films streamed on Netflix/Disney+, or original films released on regular pay TV? Or could it refer to content-creation incentives?” Jeff Wlodarczak, the head of Pivotal Research, wrote in an email to DealBook.

Trump's legal authority also looks murky. He wrote on Truth Social that a tariff is necessary because Hollywood is being “devastated,” calling the situation a “national security threat.” Film, he said, is “messaging and propaganda!”

Trump plans to meet with industry leaders. “I want to make sure they're happy with it, because we're all about jobs,” he said on Monday in the Oval Office.

One thing is clear: it would hurt Hollywood's bottom line. Shooting in the United States is expensive. Union rules require relatively high-cost skilled labor, and film studios have taken advantage of tax breaks abroad. (Labor groups have complained about losing work to international crews.) American productions with budgets above $40 million fell 26 percent last year compared with two years earlier, according to data from the research group ProdPro cited in The Wall Street Journal.

In an effort to keep productions in the United States, 38 states have handed out more than $25 billion in tax incentives. Gov. Gavin Newsom of California recently proposed doubling the state's incentive program. Some have criticized these initiatives as money-losing deals for taxpayers.

The media industry's economics were already scrambled. Video apps, podcasts and the internet more broadly have pulled audiences away from traditional outlets. Streaming now dominates Hollywood, but its margins are thin compared with the fat profits the traditional pay-TV business enjoyed for decades. Production budgets have slimmed down.

Netflix could take a 20 percent hit to its earnings, Jason Bazinet, a Citigroup analyst, wrote in a research note, adding that in a worst-case scenario the tariffs could cost the streaming giant an additional $3 billion a year.

Production companies were already under pressure from ticket sales, which have fallen 22 percent since 2019, according to figures from eMarketer.

Who else could it hurt? Canada, Britain, Australia and New Zealand have become popular filming locations for Hollywood productions. Officials in Australia and New Zealand vowed to support their film industries in the face of Trump's latest tariff gambit.

That raises the possibility of retaliation. If Trump's tariffs cut off international growth in TV and film, could other countries strike back? In April, China limited the number of American movies allowed into the country after Trump announced his broader tariff plan.

Hollywood depends on foreign markets for more than three-quarters of its box office revenue.

Who was in the room? The actor Jon Voight, one of Trump's Hollywood advisers, discussed creating federal incentives to keep productions in the United States, according to The Journal.


– The European Central Bank, which published new research showing how President Trump's tariff threats have led E.U. consumers to turn away from American products, with potentially lasting consequences for U.S. companies. In a sign of that shift, Tesla sales continued to plunge in Europe last month.


Much of the public commentary on Monday coming out of the Milken Institute Global Conference in Los Angeles, an annual West Coast pilgrimage for Wall Street and Silicon Valley, focused on President Trump's trade war.

Here are some of the most notable statements from day one:

“Tariffs are designed to encourage companies like yours to invest directly in the United States.”

Treasury Secretary Scott Bessent told the CEOs and investors in attendance that Trump's economic agenda, including planned tax cuts and deregulation, would bolster long-term growth.

“We have done damage to the American brand: the brand for stability, predictability, regularity … I see us moving from what was hyper-exceptionalism to merely exceptional.”

Marc Rowan, the CEO of Apollo Global Management, said the fallout from the tariff fight had forced his firm to shift its investment focus away from “growthy and risky” companies toward more established ones.

“If it's 10 percent, most of the clients we talk to say, ‘Yeah, we can absorb that.’ If it's 25 percent, not so much.”

Jane Fraser, the CEO of Citigroup, said many of the lender's clients can withstand tariffs that aren't excessive. But she added that many said trade uncertainty had forced them to pause some investments and hiring.

“The right thing, in my opinion, is we pause on China. Let's give it a little more time. Maybe it's 180 days.”

Bill Ackman, the billionaire investor, called for a timeout in the trade war. He told Andrew that a six-month pause would repair damage to the U.S. economy, especially to small businesses, and improve the chances of the White House reaching a deal with Beijing.


DealBook wants to hear from you

We'd like to know how the tariffs are affecting your business. Have you changed suppliers? Negotiated lower prices? Paused investments or hiring? Made plans to move manufacturing to the United States? Or have the tariffs helped your business? Please let us know what you're doing.

Deals

Politics, policy and regulation

  • The Trump administration has taken the same position as the Biden White House in asking a federal judge to dismiss a lawsuit that seeks to restrict access to the abortion pill mifepristone. (NYT)

  • Defense Secretary Pete Hegseth reportedly used multiple Signal group chats to conduct official Pentagon business. He has also ordered a 20 percent cut to the military's senior ranks. (WSJ, NYT)

The best of the rest

We'd love your feedback! Email thoughts and suggestions to truthbook@nytimes.com.


Noticias

OpenAI Abandons Move to For-Profit Status After Backlash. Now What?

Published

on

After the backlash, OpenAI has abandoned plans to restructure in a way that would remove its nonprofit entity's control. ProMarket reviews the history of OpenAI's internal tensions between profit and its founding purpose, artificial intelligence for the benefit of humanity, and what questions remain after the company's retreat.


On Monday, OpenAI announced that it had abandoned controversial plans to shed the control of its nonprofit parent entity. It will still convert its for-profit arm into a Public Benefit Corporation (PBC) as planned. While a PBC signals that a company will pursue goals beyond profit maximization, how the company pursues those goals, and the metrics that determine whether it achieves them, are left to management's discretion.

OpenAI was originally founded as a nonprofit in 2015 with “the goal of building safe and beneficial artificial general intelligence for the benefit of humanity.” The company raised millions of dollars in donations, including from co-founder Elon Musk, and in credits and discounts from Amazon, Google and Microsoft. During its first years, OpenAI operated on donations.

OpenAI, which launched its popular chatbot ChatGPT in 2022, has spearheaded the development of generative artificial intelligence. Generative AI uses large datasets to learn patterns that it can then reproduce in text, images or video. In 2019, OpenAI CEO Sam Altman announced that the firm's ambitions had outgrown what donations could support and that OpenAI would create a for-profit arm to attract investment. The for-profit entity would be fully controlled by its nonprofit parent. Nor would it be obligated to turn a profit or make distributions to its investors. Any distributions it did make would be capped at 100 times the investment (so, for example, a $10 million investment could return at most $1 billion). According to OpenAI, in 2019, “the for-profit [entity] raised an initial round of over $100 [million], followed by $1 billion from Microsoft.”

Purpose vs. Profits

In a recent Capitalisn't episode and a companion piece in ProMarket, Rose Chan Loui discussed how this 2019 change to OpenAI's structure set off an ongoing internal struggle between the company's desire to commercialize and pursue profits to attract investment on the one hand, and safeguarding its founding nonprofit purpose of ensuring that AI development benefited humanity on the other. Chan Loui is the founding executive director of the Lowell Milken Center for Philanthropy and Nonprofits at the University of California, Los Angeles.

In 2020, OpenAI made its first product, GPT-3, a large language model, available to the public. In 2022 came the first iterations of ChatGPT and DALL-E, an image generator.

The launch of major generative AI products began to pay dividends for OpenAI in 2023. Several hedge funds invested around $300 million in the burgeoning company. Microsoft pledged $10 billion. However, Altman's drive to commercialize OpenAI's products and attract more investment worried some OpenAI employees, who felt that product rollouts were rushed and jeopardized the technology's safety.

Tensions between purpose and profits erupted in November 2023 when OpenAI's board of directors, after the resignations of several of its members, removed Altman from the company. In addition to ignoring safety concerns, the board claimed that Altman had not communicated adequately with it about product development and was abusive toward employees.

However, the board reinstated Altman days later under pressure from OpenAI's investors and employees, the latter threatening mass resignation. Those board members resigned.

Altman's return did not end the debate over OpenAI's direction. In the spring of 2024, Musk sued Altman and OpenAI for breach of contract, alleging that the firm had violated its purpose of putting public benefit above profits. Experts doubted that Musk's lawsuit would hold up in court, owing to questions of legal standing and the legal language of OpenAI's founding commitments, but not necessarily because he was wrong. Musk initially dropped the suit, only to revive it several months later.

The turmoil surrounding Altman's removal and Musk's lawsuit raised questions about whether a nonprofit can abandon its purpose. ProMarket's faculty director, Luigi Zingales, wrote in ProMarket after Musk's lawsuit that allowing this would let companies unfairly exploit tax laws, from which OpenAI benefited as a nonprofit, and would betray the expectations of donors, who gave OpenAI millions of dollars in early funding to prioritize AI's social benefits. Presciently, Harvard law professor Roberto Tallarita asked what corporate governance mechanisms can prevent a company from straying from its pronounced social purpose, and found that the answer is none.

In December 2024, OpenAI announced that it would restructure once more. The nonprofit arm would no longer have 100% control over the for-profit entity. Instead, it would hold a minority stake. Meanwhile, the for-profit entity would become a PBC. According to OpenAI, the restructuring would open it up to billions of dollars in investment to build a more sustainable company.

Reaction and Reversal

According to Chan Loui, the question hanging over the proposed restructuring was whether OpenAI's purpose of developing generative AI “for the benefit of all humanity […] can survive the enormous incentives and pressure to let the company, now valued at $300 billion, maximize profits without restriction, enriching its employees and investors, while casting aside the control of its nonprofit parent.”

In an open letter published in April, a group of former OpenAI employees, civic leaders and academics, including Zingales, answered: “No.” The letter admonished OpenAI's plans to restructure and was addressed to the attorneys general of California, where OpenAI is headquartered, and Delaware, where it is legally incorporated. It warned the attorneys general that OpenAI's restructuring plan threatened to violate its founding purpose, that OpenAI had not explained how the restructuring would advance that purpose, and that the proposed restructuring would remove the corporate safeguards ensuring that OpenAI would pursue its original prosocial purpose.

On May 5, OpenAI announced that it would abandon the plans to restructure in a way that removed the nonprofit entity's controlling status. Instead, the for-profit entity will still become a PBC but will remain under the nonprofit's control. In its explanation, OpenAI wrote: “We made the decision for the nonprofit to retain control of OpenAI after hearing from civic leaders and engaging in constructive dialogue with the offices of the Attorney General of Delaware and the Attorney General of California.”

Now What?

Questions remain about what OpenAI's about-face will entail. The issue of attracting investment carries as much weight as before. According to The New York Times, a recent SoftBank investment includes an option for the Japanese investment holding company to cut its $40 billion investment to $20 billion if OpenAI does not complete its planned restructuring by the end of the year. Altman told reporters that SoftBank remains committed to its full investment under the revised restructuring proposal.

However, Bloomberg reported that Microsoft, OpenAI's most important backer, has yet to approve the new restructuring plan. The two companies are in negotiations.

“We will have to know more about the structure and the implementation of governance to determine whether this new proposal really preserves the nonprofit's purpose,” Chan Loui told ProMarket after OpenAI announced its U-turn. “Given that it appears the nonprofit will have a large (but not majority) equity stake in the Delaware Public Benefit Corporation (PBC), it will need outsized voting rights over issues that involve the company's plans for, and implementation of, AI development.”

Chan Loui further explains that “we will also want to hear more about the governance process, such as: Who will elect the board members of the nonprofit and of the PBC? How will the initial board members be replaced?”

For those who criticized OpenAI's proposed restructuring last December, the company's reversal is a victory for the integrity of corporate governance and tax law. However, the devil is in the details, and Altman has proven determined to commercialize OpenAI.

Author disclosure: The author reports no conflicts of interest. You can read our disclosure policy here.

Articles represent the views of their writers, not necessarily those of the University of Chicago, the Booth School of Business or its faculty.

