On Nov. 30, 2022, traffic to OpenAI’s website peaked at a number a little north of zero. It was a startup so small and sleepy that the owners didn’t bother tracking their web traffic. It was a quiet day, the last the company would ever know. Within two months, OpenAI was being pounded by more than 100 million visitors trying, and freaking out about, ChatGPT. Nothing has been the same for anyone since, particularly Sam Altman. In his most wide-ranging interview as chief executive officer, Altman explains his infamous four-day firing, how he actually runs OpenAI, his plans for the Trump-Musk presidency and his relentless pursuit of artificial general intelligence—the still-theoretical next phase of AI, in which machines will be capable of performing any intellectual task a human can do. Edited for clarity and length.
Your team suggested this would be a good moment to review the past two years, reflect on some events and decisions, to clarify a few things. But before we do that, can you tell the story of OpenAI’s founding dinner again? Because it seems like the historic value of that event increases by the day.
Everyone wants a neat story where there’s one moment when a thing happened. Conservatively, I would say there were 20 founding dinners that year [2015], and then one ends up being entered into the canon, and everyone talks about that. The most important one to me personally was Ilya 1 and I at the Counter in Mountain View [California]. Just the two of us.
1 Ilya Sutskever is an OpenAI co-founder and one of the leading researchers in the field of artificial intelligence. As a board member he participated in Altman’s November 2023 firing, only to express public regret over his decision a few days later. He departed OpenAI in May 2024.
And to rewind even back from that, I was always really interested in AI. I had studied it as an undergrad. I got distracted for a while, and then 2012 comes along. Ilya and others do AlexNet. 2 I keep watching the progress, and I’m like, “Man, deep learning seems real. Also, it seems like it scales. That’s a big, big deal. Someone should do something.”
2 AlexNet, created by Alex Krizhevsky, Sutskever and Geoffrey Hinton, used a deep convolutional neural network (CNN), a type of AI model especially suited to analyzing images, to recognize images far more accurately than any prior system, kick-starting major progress in AI.
So I started meeting a bunch of people, asking who would be good to do this with. It’s impossible to overstate how nonmainstream AGI was in 2014. People were afraid to talk to me, because I was saying I wanted to start an AGI effort. It was, like, cancelable. It could ruin your career. But a lot of people said there’s one person you really gotta talk to, and that was Ilya. So I stalked Ilya at a conference, got him in the hallway, and we talked. I was like, “This is a smart guy.” I kind of told him what I was thinking, and we agreed we’d meet up for a dinner. At our first dinner, he articulated—not in the same words he’d use now—but basically articulated our strategy for how to build AGI.
What from the spirit of that dinner remains in the company today?
Kind of all of it. There’s additional things on top of it, but this idea that we believed in deep learning, we believed in a particular technical approach to get there and a way to do research and engineering together—it’s incredible to me how well that’s worked. Usually when you have these ideas, they don’t quite work, and there were clearly some things about our original conception that didn’t work at all. Structure. 3 All of that. But [believing] AGI was possible, that this was the approach to bet on, and if it were possible it would be a big deal to society? That’s been remarkably true.
3 OpenAI was founded in 2015 as a nonprofit with the mission to ensure that AGI benefits all of humanity. This would become, er, problematic. We’ll get to it.
One of the strengths of that original OpenAI group was recruiting. Somehow you managed to corner the market on a ton of the top AI research talent, often with much less money to offer than your competitors. What was the pitch?
The pitch was just come build AGI. And the reason it worked—I cannot overstate how heretical it was at the time to say we’re gonna build AGI. So you filter out 99% of the world, and you only get the really talented, original thinkers. And that’s really powerful. If you’re doing the same thing everybody else is doing, if you’re building, like, the 10,000th photo-sharing app? Really hard to recruit talent. Convince me no one else is doing it, and appeal to a small, really talented set? You can get them all. And they all wanna work together. So we had what at the time sounded like an audacious or maybe outlandish pitch, and it pushed away all of the senior experts in the field, and we got the ragtag, young, talented people who are good to start with.
How quickly did you guys settle into roles?
Most people were working on it full time. I had a job, 4 so at the beginning I was doing very little, and then over time I fell more and more in love with it. And then, by 2018, I had drunk the Kool-Aid. But it was like a Band of Brothers approach for a while. Ilya and Greg 5 were kind of running it, but everybody was doing their thing.
4 In 2014, Altman became the CEO of Y Combinator, the startup accelerator that helped launch Airbnb, Dropbox and Stripe, among others.
5 Greg Brockman is a co-founder of OpenAI and its current president.
It seems like you’ve got a romantic view of those first couple of years.
Well, those are the most fun times of OpenAI history for sure. I mean, it’s fun now, too, but to have been in the room for what I think will turn out to be one of the greatest periods of scientific discovery, relative to the impact it has on the world, of all time? That’s a once-in-a-lifetime experience. If you’re very lucky. If you’re extremely lucky.
In 2019 you took over as CEO. How did that come about?
I was trying to do OpenAI and [Y Combinator] at the same time, which was really hard. I just got transfixed by this idea that we were actually going to build AGI. Funnily enough, I remember thinking to myself back then that we would do it in 2025, but it was a totally random number based off of 10 years from when we started. People used to joke in those days that the only thing I would do was walk into a meeting and say, “Scale it up!” Which is not true, but that was kind of the thrust of that time period.
The official release date of ChatGPT is Nov. 30, 2022. Does that feel like a million years ago or a week ago?
[Laughs] I turn 40 next year. On my 30th birthday, I wrote this blog post, and the title of it was “The days are long but the decades are short.” Somebody this morning emailed me and said, “This is my favorite blog post, I read it every year. When you turn 40, will you write an update?” I’m laughing because I’m definitely not gonna write an update. I have no time. But if I did, the title would be “The days are long, and the decades are also f—ing very long.” So it has felt like a very long time.
As that first cascade of users started showing up, and it was clear this was going to be a colossal thing, did you have a “holy s—” moment?
So, OK, a couple of things. No. 1, I thought it was gonna do pretty well! The rest of the company was like, “Why are you making us launch this? It’s a bad decision. It’s not ready.” I don’t make a lot of “we’re gonna do this thing” decisions, but this was one of them.
YC has this famous graph that PG 6 used to draw, where you have the squiggles of potential, and then the wearing off of novelty, and then this long dip, and then the squiggles of real product market fit. And then eventually it takes off. It’s a piece of YC lore. In the first few days, as [ChatGPT] was doing its thing, it’d be more usage during the day and less at night. The team was like, “Ha ha ha, it’s falling off.” But I had learned one thing during YC, which is, if every time there’s a new trough it’s above the previous peak, there’s something very different going on. It looked like that in the first five days, and I was like, “I think we have something on our hands that we do not appreciate here.”
6 Paul Graham, the co-founder of Y Combinator and a philosopher-king type on the subject of startups and technology.
And that started off a mad scramble to get a lot of compute 7—which we did not have at the time—because we had launched this with no business model or thoughts for a business model. I remember a meeting that December where I sort of said, “I’ll consider any idea for how we’re going to pay for this, but we can’t go on.” And there were some truly horrible ideas—and no good ones. So we just said, “Fine, we’re just gonna try a subscription, and we’ll figure it out later.” That just stuck. We launched with GPT-3.5, and we knew we had GPT-4 [coming], so we knew that it was going to be better. And as I started talking to people who were using it about the things they were using it for, I was like, “I know we can make these better, too.” We kept improving it pretty rapidly, and that led to this global media consciousness [moment], whatever you want to call it.
7 In AI, “compute” is commonly used as a noun, referring to the processing power and resources—such as central processing units (CPUs), graphics processing units (GPUs) and tensor processing units (TPUs)—required to train, run or develop machine-learning models. Want to know how Nvidia Corp.’s Jensen Huang got rich? Compute.
Are you a person who enjoys success? Were you able to take it in, or were you already worried about the next phase of scaling?
A very strange thing about me, or my career: The normal arc is you run a big, successful company, and then in your 50s or 60s you get tired of working that hard, and you become a [venture capitalist]. It’s very unusual to have been a VC first and have had a pretty long VC career and then run a company. And there are all these ways in which I think it’s bad, but one way in which it has been very good for me is you have the weird benefit of knowing what’s gonna happen to you, because you’ve watched and advised a bunch of other people through it. And I knew I was both overwhelmed with gratitude and, like, “F—, I’m gonna get strapped to a rocket ship, and my life is gonna be totally different and not that fun.” I had a lot of gallows humor about it. My husband 8 tells funny stories from that period of how I would come home, and he’d be like, “This is so great!” And I was like, “This is just really bad. It’s bad for you, too. You just don’t realize it yet, but it’s really bad.” [Laughs]
8 Altman married longtime partner Oliver Mulherin, an Australian software engineer, in early 2024. They’re expecting a child in March 2025.
You’ve been Silicon Valley famous for a long time, but one consequence of GPT’s arrival is that you became world famous with the kind of speed that’s usually associated with, like, Sabrina Carpenter or Timothée Chalamet. Did that complicate your ability to manage a workforce?
It complicated my ability to live my life. But in the company, you can be a well-known CEO or not, people are just like, “Where’s my f—ing GPUs?”
I feel that distance in all the rest of my life, and it’s a really strange thing. I feel that when I’m with old friends, new friends—anyone but the people very closest to me. I guess I do feel it at work if I’m with people I don’t normally interact with. If I have to go to one meeting with a group that I almost never meet with, I can kind of tell it’s there. But I spend most of my time with the researchers, and man, I promise you, come with me to the research meeting right after this, and you will see nothing but disrespect. Which is great.
Do you remember the first moment you had an inkling that a for-profit company with billions in outside investment reporting up to a nonprofit board might be a problem?
There must have been a lot of moments. But that year was such an insane blur, from November of 2022 to November of 2023, I barely remember it. It literally felt like we built out an entire company from almost scratch in 12 months, and we did it in crazy public. One of my learnings, looking back, is everybody says they’re not going to screw up the relative ranking of important versus urgent, 9 and everybody gets tricked by urgent. So I would say the first moment when I was coldly staring reality in the face—that this was not going to work—was about 12:05 p.m. on whatever that Friday afternoon was. 10
9 Dwight Eisenhower apparently said “What is important is seldom urgent, and what is urgent is seldom important” so often that it gave birth to the Eisenhower Matrix, a time management tool that splits tasks into four quadrants:
Urgent and important: Tasks to be done immediately.
Important but not urgent: Tasks to be scheduled for later.
Urgent but not important: Tasks to be delegated.
Not urgent and not important: Tasks to be eliminated.
Understanding the wisdom of the Eisenhower Matrix—then ignoring it when things get hectic—is a startup tradition.
10 On Nov. 17, 2023, at approximately noon California time, OpenAI’s board informed Altman of his immediate removal as CEO. He was notified of his firing roughly 5 to 10 minutes before the public announcement, during a Google Meet session, while he was watching the Las Vegas Grand Prix.
When the news emerged that the board had fired you as CEO, it was shocking. But you seem like a person with a strong EQ. Did you detect any signs of tension before that? And did you know that you were the tension?
I don’t think I’m a person with a strong EQ at all, but even for me this was over the line of where I could detect that there was tension. You know, we kind of had this ongoing thing about safety versus capability and the role of a board and how to balance all this stuff. So I knew there was tension, and I’m not a high-EQ person, so there was probably even more than I detected.
A lot of annoying things happened that first weekend. My memory of the time—and I may get the details wrong—so they fired me at noon on a Friday. A bunch of other people quit Friday night. By late Friday night I was like, “We’re just going to go start a new AGI effort.” Later Friday night, some of the executive team was like, “Um, we think we might get this undone. Chill out, just wait.”
Saturday morning, two of the board members called and wanted to talk about me coming back. I was initially just supermad and said no. And then I was like, “OK, fine.” I really care about [OpenAI]. But I was like, “Only if the whole board quits.” I wish I had taken a different tack than that, but at the time it felt like a just thing to ask for. Then we really disagreed over the board for a while. We were trying to negotiate a new board. They had some ideas I thought were ridiculous. I had some ideas they thought were ridiculous. But I thought we were [generally] agreeing. And then—when I got the most mad in the whole period—it went on all day Sunday. Saturday into Sunday they kept saying, “It’s almost done. We’re just waiting for legal advice, but board consents are being drafted.” I kept saying, “I’m keeping the company together. You have all the power. Are you sure you’re telling me the truth here?” “Yeah, you’re coming back. You’re coming back.”
And then Sunday night they shock-announce that Emmett Shear was the new CEO. And I was like, “All right, now I’m f—ing really done,” because that was real deception. Monday morning rolls around, all these people threaten to quit, and then they’re like, “OK, we need to reverse course here.”
The board says there was an internal investigation that concluded you weren’t “consistently candid” in your communications with them. That’s a statement that’s specific—they think you were lying or withholding some information—but also vague, because it doesn’t say what specifically you weren’t being candid about. Do you now know what they were referring to?
I’ve heard different versions. There was this whole thing of, like, “Sam didn’t even tell the board that he was gonna launch ChatGPT.” And I have a different memory and interpretation of that. But what is true is I definitely was not like, “We’re gonna launch this thing that is gonna be a huge deal.” And I think there’s been an unfair characterization of a number of things like that. The one thing I’m more aware of is, I had had issues with various board members on what I viewed as conflicts or otherwise problematic behavior, and they were not happy with the way that I tried to get them off the board. Lesson learned on that.
You recognized at some point that the structure of [OpenAI] was going to smother the company, that it might kill it in the crib. Because a mission-driven nonprofit could never compete for the computing power or make the rapid pivots necessary for OpenAI to thrive. The board was made up of originalists who put purity over survival. So you started making decisions to set up OpenAI to compete, which required being a little sneaky, which the board—
I don’t think I was doing things that were sneaky. I think the most I would say is, in the spirit of moving really fast, the board did not understand the full picture. There was something that came up about “Sam owning the startup fund, and he didn’t tell us about this.” And what happened there is because we have this complicated structure: OpenAI itself could not own it, nor could someone who owned equity in OpenAI. And I happened to be the person who didn’t own equity in OpenAI. So I was temporarily the owner or GP 11 of it until we got a structure set up to transfer it. I have a different opinion about whether the board should have known about that or not. But should there be extra clarity to communicate things like that, where there’s even the appearance of doing stuff? Yeah, I’ll take that feedback. But that’s not sneaky. It’s a crazy year, right? It’s a company that’s moving a million miles an hour in a lot of different ways. I would encourage you to talk to any current board member 12 and ask if they feel like I’ve ever done anything sneaky, because I make it a point not to do that.
11 General partner. According to a Securities and Exchange Commission filing on March 29, 2024, the new general partner of OpenAI’s startup fund is Ian Hathaway. The fund has roughly $175 million available to invest in AI-focused startups.
12 OpenAI’s current board is made up of Altman and:
I think the previous board was genuine in their level of conviction and concern about AGI going wrong. There’s a thing that one of those board members said to the team here during that weekend that people kind of make fun of her for, 13 which is it could be consistent with the mission of the nonprofit board to destroy the company. And I view that—that’s what courage of convictions actually looks like. I think she meant that genuinely. And although I totally disagree with all specific conclusions and actions, I respect conviction like that, and I think the old board was acting out of misplaced but genuine conviction in what they believed was right. And maybe also that, like, AGI was right around the corner and we weren’t being responsible with it. So I can hold respect for that while totally disagreeing with the details of everything else.
13 Former OpenAI board member Helen Toner is reported to have said there are circumstances in which destroying the company “would actually be consistent with the mission” of the board. Altman had previously confronted Toner—the director of strategy at Georgetown University’s Center for Security and Emerging Technology—about a paper she wrote criticizing OpenAI for releasing ChatGPT too quickly. She also complimented one of its competitors, Anthropic, for not “stoking the flames of AI hype” by waiting to release its chatbot.
You obviously won, because you’re sitting here. But just practicing a bit of empathy, were you not traumatized by all of this?
I totally was. The hardest part of it was not going through it, because you can do a lot on a four-day adrenaline rush. And it was very heartwarming to see the company and kind of my broader community support me. But then very quickly it was over, and I had a complete mess on my hands. And it got worse every day. It was like another government investigation, another old board member leaking fake news to the press. And all those people that I feel like really f—ed me and f—ed the company were gone, and now I had to clean up their mess. It was about this time of year [December], actually, so it gets dark at like 4:45 p.m., and it’s cold and rainy, and I would be walking through my house alone at night just, like, f—ing depressed and tired. And it felt so unfair. It was just a crazy thing to have to go through and then have no time to recover, because the house was on fire.
When you got back to the company, were you self-conscious about big decisions or announcements because you worried about how your character may be perceived? Actually, let me put that more simply. Did you feel like some people may think you were bad, and you needed to convince them that you’re good?
It was worse than that. Once everything was cleared up, it was all fine, but in the first few days no one knew anything. And so I’d be walking down the hall, and [people] would avert their eyes. It was like I had a terminal cancer diagnosis. There was sympathy, empathy, but [no one] was sure what to say. That was really tough. But I was like, “We got a complicated job to do. I’m gonna keep doing this.”
Can you describe how you actually run the company? How do you spend your days? Like, do you talk to individual engineers? Do you get walking-around time?
Let me just call up my calendar. So we do a three-hour executive team meeting on Mondays, and then, OK, yesterday and today, six one-on-ones with engineers. I’m going to the research meeting right after this. Tomorrow is a day where there’s a couple of big partnership meetings and a lot of compute meetings. There’s five meetings on building up compute. I have three product brainstorm meetings tomorrow, and I’ve got a big dinner with a major hardware partner after. That’s kind of what it looks like. A few things that are weekly rhythms, and then it’s mostly whatever comes up.
How much time do you spend communicating, internally and externally?
Way more internal. I’m not a big inspirational email writer, but lots of one-on-one, small-group meetings and then a lot of stuff over Slack.
Oh, man. God bless you. You get into the muck?
I’m a big Slack user. You can get a lot of data in the muck. I mean, there’s nothing that’s as good as being in a meeting with a small research team for depth. But for breadth, man, you can get a lot that way.
You’ve previously discussed stepping in with a very strong point of view about how ChatGPT should look and what the user experience should be. Are there places where you feel your competency requires you to be more of a player than a coach?
At this scale? Not really. I had dinner with the Sora 14 team last night, and I had pages of written, fairly detailed suggestions of things. But that’s unusual. Or the meeting after this, I have a very specific pitch to the research team of what I think they should do over the next three months and quite a lot of granular detail, but that’s also unusual.
14 Sora is OpenAI’s advanced visual AI generator, released to the public on Dec. 9, 2024.
We’ve talked a little about how scientific research can sometimes be in conflict with a corporate structure. You’ve put research in a different building from the rest of the company, a couple of miles away. Is there some symbolic intent behind that?
Uh, no, that’s just logistical, space planning. We will get to a big campus all at once at some point. Research will still have its own area. Protecting the core of research is really critical to what we do.
The normal way a Silicon Valley company goes is you start up as a product company. You get really good at that. You build up to this massive scale. And as you build up this massive scale, revenue growth naturally slows down as a percentage, usually. And at some point the CEO gets the idea that he or she is going to start a research lab to come up with a bunch of new ideas and drive further growth. And that has worked a couple of times in history. Famously for Bell Labs and Xerox PARC. Usually it doesn’t. Usually you get a very good product company and a very bad research lab. We’re very fortunate that the little product company we bolted on is the fastest-growing tech company maybe ever—certainly in a long time. But that could easily subsume the magic of research, and I do not intend to let that happen.
We are here to build AGI and superintelligence and all the things that come beyond that. There are many wonderful things that are going to happen to us along the way, any of which could very reasonably distract us from the grand prize. I think it’s really important not to get distracted.
As a company, you’ve sort of stopped publicly speaking about AGI. You started talking about AI and levels, and yet individually you talk about AGI.
I think “AGI” has become a very sloppy term. If you look at our levels, our five levels, you can find people that would call each of those AGI, right? And the hope of the levels is to have some more specific grounding on where we are and kind of like how progress is going, rather than is it AGI, or is it not AGI?
What’s the threshold where you’re going to say, “OK, we’ve achieved AGI now”?
The very rough way I try to think about it is when an AI system can do what very skilled humans in important jobs can do—I’d call that AGI. There’s then a bunch of follow-on questions like, well, is it the full job or only part of it? Can it start as a computer program and decide it wants to become a doctor? Can it do what the best people in the field can do or the 98th percentile? How autonomous is it? I don’t have deep, precise answers there yet, but if you could hire an AI as a remote employee to be a great software engineer, I think a lot of people would say, “OK, that’s AGI-ish.”
Now we’re going to move the goalposts, always, which is why this is hard, but I’ll stick with that as an answer. And then when I think about superintelligence, the key thing to me is, can this system rapidly increase the rate of scientific discovery that happens on planet Earth?
You now have more than 300 million users. What are you learning from their behavior that’s changed your understanding of ChatGPT?
Talking to people about what they use ChatGPT for, and what they don’t, has been very informative in our product planning. A thing that used to come up all the time is it was clear people were trying to use ChatGPT for search a lot, and that actually wasn’t something that we had in mind when we first launched it. And it was terrible for that. But that became very clearly an important thing to build. And honestly, since we’ve launched search in ChatGPT, I almost don’t use Google anymore. And I don’t think it would have been obvious to me that ChatGPT was going to replace my use of Google before we launched it, when we just had an internal prototype.
Another thing we learned from users: how much people are relying on it for medical advice. Many people who work at OpenAI get really heartwarming emails when people are like, “I was sick for years, no doctor told me what I had. I finally put all my symptoms and test results into ChatGPT—it said I had this rare disease. I went to a doctor, and they gave me this thing, and I’m totally cured.” That’s an extreme example, but things like that happen a lot, and that has taught us that people want this and we should build more of it.
Your products have had a lot of prices, from $0 to $20 to $200—Bloomberg reported on the possibility of a $2,000 tier. How do you price technology that’s never existed before? Is it market research? A finger in the wind?
We launched ChatGPT for free, and then people started using it a lot, and we had to have some way to pay for it. I believe we tested two prices, $20 and $42. People thought $42 was a little too much. They were happy to pay $20. We picked $20. Probably it was late December of 2022 or early January. It was not a rigorous “hire someone and do a pricing study” thing.
There’s other directions that we think about. A lot of customers are telling us they want usage-based pricing. You know, “Some months I might need to spend $1,000 on compute, some months I want to spend very little.” I am old enough that I remember when we had dial-up internet, and AOL gave you 10 hours a month or five hours a month or whatever your package was. And I hated that. I hated being on the clock, so I don’t want that kind of a vibe. But there’s other ones I can imagine that still make sense, that are somehow usage-based.
What does your safety committee look like now? How has it changed in the past year or 18 months?
One thing that’s a little confusing—also to us internally—is we have many different safety things. So we have an internal-only safety advisory group [SAG] that does technical studies of systems and presents a view. We have an SSC [safety and security committee], which is part of the board. We have the DSP 15 with Microsoft. And so you have an internal thing, a board thing and a Microsoft joint board. We are trying to figure out how to streamline that.
15 The Deployment Safety Board, with members from OpenAI and Microsoft, approves any model deployment over a certain capability threshold.
And are you on all three?
That’s a good question. So the SAG sends their reports to me, but I don’t think I’m actually formally on it. But the procedure is: They make one, they send it to me. I sort of say, “OK, I agree with this” or not, send it to the board. The SSC, I am not on. The DSP, I am on. Now that we have a better picture of what our safety process looks like, I expect to find a way to streamline that.
Has your sense of what the dangers actually might be evolved?
I still have roughly the same short-, medium- and long-term risk profiles. I still expect that on cybersecurity and bio stuff, 16 we’ll see serious, or potentially serious, short-term issues that need mitigation. Long term, as you think about a system that really just has incredible capability, there’s risks that are probably hard to precisely imagine and model. But I can simultaneously think that these risks are real and also believe that the only way to appropriately address them is to ship product and learn.
16 In September 2024, OpenAI acknowledged that its latest AI models have increased the risk of misuse in creating bioweapons. In May 2023, Altman joined hundreds of other signatories to a statement highlighting the existential risks posed by AI.
When it comes to the immediate future, the industry seems to have coalesced around three potential roadblocks to progress: scaling the models, chip scarcity and energy scarcity. I know they commingle, but can you rank those in terms of your concern?
We have a plan that I feel pretty good about on each category. Scaling the models, we continue to make technical progress, capability progress, safety progress, all together. I think 2025 will be an incredible year. Do you know this thing called the ARC-AGI challenge? Five years ago this group put together this prize as a North Star toward AGI. They wanted to make a benchmark that they thought would be really hard. The model we’re announcing on Friday 17 passed this benchmark. For five years it sat there, unsolved. It consists of problems like this. 18 They said if you can score 85% on this, we’re going to consider that a “pass.” And our system—with no custom work, just out of the box—got an 87.5%. 19 And we have very promising research and better models to come.
17 OpenAI introduced its o3 model on Dec. 20. It should be available to users in early 2025. The previous model was o1, but The Information reported that OpenAI skipped over o2 to avoid a potential conflict with British telecommunications provider O2.
18 On my laptop, Altman called up the ARC-AGI website, which displayed a series of bewildering abstract grids. The abstraction is the point; to “solve” the grids and achieve AGI, an AI model must rely more on reason than its training data.
19 According to ARC-AGI: “OpenAI’s new o3 system—trained on the ARC-AGI-1 Public Training set—has scored a breakthrough 75.7% on the Semi-Private Evaluation set at our stated public leaderboard $10k compute limit. A high-compute (172x) o3 configuration scored 87.5%.”
We have been hard at work on the whole [chip] supply chain, all the partners. We have people to build data centers and make chips for us. We have our own chip effort here. We have a wonderful partnership with Nvidia, just an absolutely incredible company. And we’ll talk more about this next year, but now is the time for us to scale chips.
Fusion is going to work. Um. On what time frame?
Soon. Well, soon there will be a demonstration of net-gain fusion. You then have to build a system that doesn’t break. You have to scale it up. You have to figure out how to build a factory—build a lot of them—and you have to get regulatory approval. And that will take, you know, years altogether? But I would expect [Helion 20] will show you that fusion works soon.
In the short term, is there any way to sustain AI’s growth without going backward on climate goals?
Yes, but none that is as good, in my opinion, as quickly permitting fusion reactors. I think our particular kind of fusion is such a beautiful approach that we should just race toward that and be done.
A lot of what you just said interacts with the government. We have a new president coming. You made a personal $1 million donation to the inaugural fund. Why?
He’s the president of the United States. I support any president.
I understand why it makes sense for OpenAI to be seen supporting a president who’s famous for keeping score of who’s supporting him, but this was a personal donation. Donald Trump opposes many of the things you’ve previously supported. Am I wrong to think the donation is less an act of patriotic conviction and more an act of fealty?
I don’t support everything that Trump does or says or thinks. I don’t support everything that Biden says or does or thinks. But I do support the United States of America, and I will work to the degree I’m able to with any president for the good of the country. And particularly for the good of what I think is this huge moment that has got to transcend any political issues. I think AGI will probably get developed during this president’s term, and getting that right seems really important. Supporting the inauguration, I think that’s a relatively small thing. I don’t view that as a big decision either way. But I do think we all should wish for the president’s success.
He’s said he hates the Chips Act. You supported the Chips Act.
I actually don’t. I think the Chips Act was better than doing nothing but not the thing that we should have done. And I think there’s a real opportunity to do something much better as a follow-on. I don’t think the Chips Act has been as effective as any of us hoped.
Elon 21 is clearly going to be playing some role in this administration. He’s suing you. He’s competing with you. I saw your comments at DealBook that you think he’s above using his position to engage in any funny business as it relates to AI.
21 C’mon, how many Elons do you know?
But if I may: In the past few years he bought Twitter, then sued to get out of buying Twitter. He replatformed Alex Jones. He challenged Zuckerberg to a cage match. That’s just kind of the tip of the funny-business iceberg. So do you really believe that he’s going to—
Oh, I think he’ll do all sorts of bad s—. I think he’ll continue to sue us and drop lawsuits and make new lawsuits and whatever else. He hasn’t challenged me to a cage match yet, but I don’t think he was that serious about it with Zuck, either, it turned out. As you pointed out, he says a lot of things, starts them, undoes them, gets sued, sues, gets in fights with the government, gets investigated by the government. That’s just Elon being Elon. The question was, will he abuse his political power of being co-president, or whatever he calls himself now, to mess with a business competitor? I don’t think he’ll do that. I genuinely don’t. May turn out to be proven wrong.
When the two of you were working together at your best, how would you describe what you each brought to the relationship?
Maybe like a complementary spirit. We don’t know exactly what this is going to be or what we’re going to do or how this is going to go, but we have a shared conviction that this is important, and this is the rough direction to push and how to course-correct.
I’m curious what the actual working relationship was like.
I don’t remember any big blowups with Elon until the fallout that led to the departure. But until then, for all of the stories—people talk about how he berates people and blows up and whatever, I hadn’t experienced that.
Are you surprised by how much capital he’s been able to raise, specifically from the Middle East, for xAI?
No. No. They have a lot of capital. It’s the industry people want. Elon is Elon.
Let’s presume you’re right and there’s positive intent from Elon and the administration. What’s the most helpful thing the Trump administration can do for AI in 2025?
US-built infrastructure and lots of it. The thing I really deeply agree with the president on is, it is wild how difficult it has become to build things in the United States. Power plants, data centers, any of that kind of stuff. I understand how bureaucratic cruft builds up, but it’s not helpful to the country in general. It’s particularly not helpful when you think about what needs to happen for the US to lead AI. And the US really needs to lead AI.
OpenAI has always been excellent at grabbing attention in the news. Its announcements often come with big, bold claims. For example, it announced GPT-2 but said it was too dangerous to release. Or its "12 Days of Christmas" campaign, where it showed off a new product every day for 12 days.
Now, Sam Altman has shared his thoughts on the past year, focusing on the dramatic boardroom soap opera around his firing and return. He also made a bold prediction:
"We now know how to build AGI as it is commonly understood. In 2025, we believe AI agents will join the workforce and change the way companies work."
AGI (Artificial General Intelligence) means creating an AI that is as intelligent and general as a human. Unlike narrow AI, which is designed for specific tasks such as translating languages, playing chess, or recognizing faces, AGI can handle any intellectual task and adapt across different areas. While I don't believe "AGI is near," I do think AI will join the workforce, though perhaps not in the way Altman imagines.
Is AGI near? No, at least not the AGI that we (or Sam) imagine
The arrival of AGI in 2025 seems very unlikely. Today's AI, such as ChatGPT, works by recognizing patterns and making predictions, not by truly understanding. For example, completing the phrase "Life is like a box of..." with "chocolates" is based on probabilities, not reasoning.
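The sentence-completion point can be made concrete with a toy sketch. This is an illustration only, not how a real language model works internally; the probability table is invented for the example, whereas a real model learns such statistics from vast amounts of text.

```python
# Toy illustration of next-token prediction: the "model" simply picks the
# statistically most likely continuation. The probabilities below are
# invented for this example.
next_token_probs = {
    "chocolates": 0.62,
    "surprises": 0.21,
    "mysteries": 0.09,
    "lemons": 0.08,
}

def complete(prompt, probs):
    """Return the highest-probability continuation, with no understanding."""
    return max(probs, key=probs.get)

print(complete("Life is like a box of ...", next_token_probs))  # -> chocolates
```

The point of the sketch: "chocolates" wins because it is frequent in the training data, not because anything here reasons about what life is like.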
I don't think AGI will arrive in 2025, and many experts agree. Demis Hassabis, whom I worked with at Google, predicts AGI could arrive around 2035. Ray Kurzweil estimates 2032, and Jürgen Schmidhuber, director of IDSIA, suggests closer to 2050. The skeptics are many, and the timeline remains uncertain.
Does it matter when? AI is already powerful.
Perhaps it doesn't matter exactly when AGI arrives. Even Sam Altman recently downplayed the "G" in AGI, saying:
"I think we will hit AGI sooner than most people think, and it will matter much less."
I agree with this to a point. AI already has impressive capabilities. For example, Netflix's AI knows your movie preferences better than your partner does. TikTok's algorithms have even been joked about for recognizing someone's sexual orientation before they do. AI excels at pattern recognition and, in many cases, is better at it than humans.
Sam Altman sees AI "joining the workforce"
The most important point in Sam's memo is his belief that AI will "join the workforce." I fully agree this is going to happen. As I wrote in my AI agent update, for AI to succeed in the workplace it needs two key things: (1) access to tools and (2) access to data. These are the pillars that make AI truly effective in business settings. However, although Sam often ties this idea to AGI, OpenAI may not end up leading the charge in providing these AI workforce solutions.
Microsoft's pole position: access to users
Who has the workplace tools? Microsoft. Microsoft. Microsoft. They're in pole position. Most people already use Microsoft products, whether they like them or not, and AI is being woven deeply into these tools, with copilots popping up everywhere.
In 2023 and 2024, many startups launched impressive AI services for office work, only to be quickly overshadowed by giants such as Microsoft and Google, which have direct access to customers. Take Jasper.ai, a once-celebrated AI copywriting tool. As I noted in this LinkedIn post, similar features are now built directly into Google's and Microsoft's products, making it increasingly hard for smaller players to compete.
The power of data access
AI needs data to be truly effective. If you're looking for answers about a company's internal processes or insights from documents, general-purpose tools like ChatGPT won't be enough. What we need are tools that can read and summarize company documents, designed specifically for enterprise use. As I said before, 2025 will be the year of SEARCH, especially enterprise search. Tools that can answer questions, summarize content, and help users navigate complex information will be game changers.
Who has access to this kind of data? Microsoft is a big player, but it's not alone. Salesforce, for example, holds an enormous amount of valuable data: customer interactions, discussions, process documents, marketing strategies, and more. Does Salesforce want AI agents to help unlock this potential? Absolutely.
It's no surprise that Salesforce CEO Marc Benioff recently took shots at Microsoft. He called its AI assistant, Copilot, "disappointing," saying: "It just doesn't work, and it doesn't deliver any level of accuracy." He even called it "Clippy 2.0," the funniest insult I've heard in a long time, before launching Salesforce's own AI offering, Agentforce.
Is OpenAI "just" the smartest tool?
OpenAI doesn't have the same level of data access or consumer reach as Microsoft, nor does it have Salesforce's trove of business data. So what's its angle? It claims to be the smartest tool on the market, and it probably is, although I personally find Anthropic's Claude 3.5 currently better than OpenAI's GPT-4.
OpenAI is betting on its ability to out-build everyone else with superior technology. That's why Sam Altman confidently claims we'll see AGI. What's behind that bold claim? Reasoning, or, as OpenAI styles it, Reasoning.
OpenAI and reasoning
OpenAI recently released o1, a model designed to show advanced reasoning capabilities through an iterative, self-calling process:
Iteration and reflection: the model generates an output, evaluates or critiques it, and refines it in a new round of reasoning.
Feedback loop: this creates a feedback loop in which the model reviews its outputs, critiques them, and improves them further.
In essence, GPT with o1 doesn't just provide answers: it plans, critiques the plan, and continually improves it.
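The loop described above can be sketched in a few lines. To be clear, OpenAI has not published o1's internals; this is a generic generate-critique-refine pattern, and `generate`, `critique`, and the stopping rule are hypothetical stand-ins for model calls.

```python
# Hedged sketch of a generate-critique-refine loop (the general pattern the
# text describes, not o1's actual implementation).
def generate(prompt, feedback=None):
    # Placeholder: in a real system this would be a model call that
    # incorporates the critic's feedback into a revised draft.
    draft = f"answer to: {prompt}"
    return draft if feedback is None else draft + " (revised)"

def critique(draft):
    # Placeholder critic: returns feedback, or None when satisfied.
    return None if draft.endswith("(revised)") else "needs more detail"

def refine(prompt, max_rounds=3):
    feedback = None
    for _ in range(max_rounds):
        draft = generate(prompt, feedback)
        feedback = critique(draft)
        if feedback is None:   # critic is satisfied: stop iterating
            return draft
    return draft               # budget exhausted: return best effort
```

The design choice worth noting is the compute budget (`max_rounds`): "thinking more" at inference time trades latency and cost for answer quality.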
What is especially noteworthy is the paradigm shift this represents. Instead of simply releasing a bigger model like GPT-5, the next generation of AI models focuses on "thinking more" during inference. This ability to process iteratively may be what Sam Altman is referring to when he says, "We now know how to build AGI."
Is reasoning reason enough?
But does "reasoning" alone put OpenAI in the game? OpenAI still needs data access and a strong user presence, similar to Salesforce or Microsoft. To address this, OpenAI released the ChatGPT desktop app for macOS. The app can now read code directly from developer-focused tools such as VS Code, Xcode, TextEdit, Terminal, and iTerm2. This means developers no longer need to copy and paste their code into ChatGPT, a common workaround until now. It's a genuinely useful tool and a smart move to integrate more deeply into developers' workflows.
Chatting with large language models costs money
Every call to a large language model (LLM) costs money. For heavy ChatGPT users, the $20 subscription may not even cover the cost of their usage. OpenAI recently raised $6.6 billion in a Series E funding round, a much-needed boost to sustain its operations. While Agentforce generates solid revenue from its customers and Microsoft enjoys an enormous financial war chest, OpenAI is still in the early stages of getting companies and users to pay enough to offset the steep costs of cutting-edge AI development.
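A back-of-the-envelope calculation shows how a heavy user can exceed a flat $20 subscription. Every number below is an illustrative assumption, not OpenAI's actual pricing or cost structure.

```python
# Illustrative cost estimate for a heavy chat user. All figures are
# assumptions made up for this sketch, not real OpenAI prices.
price_per_1k_tokens = 0.01   # assumed blended cost in dollars
tokens_per_message = 2_000   # prompt plus a long response
messages_per_day = 100       # a heavy user
days = 30

cost_per_message = price_per_1k_tokens * tokens_per_message / 1000
monthly_cost = cost_per_message * messages_per_day * days
print(f"${monthly_cost:.2f}")  # -> $60.00, triple a $20 subscription
```

Under these made-up assumptions, the provider loses money on such a user, which is one reason flat-rate tiers get paired with usage caps or higher-priced plans.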
Its $200-per-month premium tier, which includes the extended version of o1, is a step in this direction. But is it worth the price? Perhaps that's why AGI remains part of the conversation: it helps justify the premium positioning. Still, the race to build superior models is far from over. Even o1 could soon be overtaken by open-source alternatives, as we've seen before with Meta's Llama.
Speaking of Meta, I'm sure we'll see its attempts to monetize AI models in 2025. Ultimately, the biggest challenge for these players remains clear: justifying enormous costs without securing a steady, reliable revenue stream.
Sam is right: AI agents will be in the workforce
In 2025, we will see more AI agents enter the workforce, transforming workflows by simplifying, improving, and automating tasks across industries. These won't be all-encompassing AGI models but smaller, specialized models designed for dedicated workflows. AI will extend and improve processes step by step, combining traditional AI, context retrieval, and solid user design to address challenges such as security, hallucinations, and user control.
Success will depend on delivering value through well-integrated, user-friendly, ethically designed solutions, as described in my framework for building enterprise-ready AI tools. For Sam Altman, the key strategic question will not be achieving AGI but how to price OpenAI's base models for enterprise customers such as Microsoft or Salesforce, especially if OpenAI ends up competing with them directly.
But how will we work with those new AI colleagues?
Companies will emerge as winners in the race for better models, better data, and better integrations. Their main goal should be training employees and customers to work effectively with their new AI colleagues. In my eCornell certificate course on AI solutions, I saw firsthand how productivity soared once students learned to communicate with an AI copilot. Initially, many struggled to get results, but step-by-step guidance on how to interact with the AI made a significant difference.
Why? Because even with reasoning and planning capabilities, AI is still not truly "general," however much hype Sam Altman generates. Students had to learn when to trust the AI and when to apply human judgment. I believe 2025 will be the year companies recognize this need and invest heavily in AI education.
Alphabet's consumer team is preparing to upgrade televisions running its Google TV operating system by integrating Gemini AI into its Google Assistant voice-control system, Bloomberg has reported.
The update aims to improve user interaction with more natural voice commands and enhanced content-search capabilities, including deeper YouTube integration.
The Gemini update, expected to roll out later in 2025, will let users hold conversations with third-party TVs without needing the trigger phrase "Hey Google" for every command.
Google demonstrated the feature at the CES technology conference.
Google also showed the ability to retrieve content more naturally, such as requesting videos from a recent trip saved in a user's Google Photos account.
The update is said to be the first time Google has brought Gemini to third-party TVs running its operating system, including those from Sony Group, Hisense Home Appliances Group, and TCL Technology Group, following its debut on Google's own streaming box last year.
Google TV competes with other TV operating systems, including those from Samsung Electronics, Amazon.com, and Roku.
The company also introduced a new "always-on" mode for TVs, which uses sensors to detect the user's presence and display personalized information such as news and weather forecasts.
TCL will be the first manufacturer to offer this always-on mode later this year, followed by Hisense in 2026.
The feature aims to give users relevant information when they are near their TV, further improving the user experience.
In December 2024, Google announced plans to integrate Gemini AI into its extended reality (XR) platform, Android XR, via Samsung's Project Moohan XR headset.
Winter can be a tough time to stay motivated and keep a positive mindset. The shorter days and freezing temperatures are especially hard for me because I love sunshine and being outdoors. Although I still try to get outside and go for a run when the weather allows, I often feel down and tend to think negatively.
While ChatGPT is no match for a professional therapist, in a pinch I often use ChatGPT to explore strategies for building mental strength while challenging negative thoughts during the winter months.
I appreciate ChatGPT's Advanced Voice Mode because users can have a human-like conversation about anything, even discouraged and unmotivated thoughts. Here's what happened when I shared my thoughts with ChatGPT, and the suggestions it gave me.
1. Consider simple winter joys
Prompt: "What small pleasures or cozy activities can you suggest to bring warmth and joy during the winter season?"
Goal: Identify simple joys that improve daily satisfaction and counteract the winter blues.
I prefer the ChatGPT Advanced Voice "Sol" because the AI is relaxed and calm. Chatting with this AI feels like talking to a good friend who is full of ideas and able to stay calm no matter what situation I bring up.
When I mentioned I was struggling with being stuck indoors despite the beautiful snow falling outside, it suggested practicing mindfulness and reflecting on my ideal winter day.
Mindfulness meditation involves focusing on the present moment, acknowledging thoughts and feelings without judgment. I asked ChatGPT for guidance on starting a mindfulness practice.
It offered a step-by-step approach, including setting aside dedicated time, finding a quiet space, and focusing on breathing. It even told me mindfulness could be as simple as a quick stretch or a dance break during the day to lift my spirits.
2. Reframe negative thoughts
Prompt: "Here's what's bothering me. Can you give me a pep talk and explain how I can stay positive?"
Goal: Inspire planning enjoyable activities that combat feelings of stagnation and give you something to look forward to.
Negative thought patterns can be pervasive during the winter months. I asked ChatGPT for advice on what I can do to shift my thinking away from my current negative, pessimistic mindset toward something more positive.
Besides sharing ways to reframe my thoughts, the AI also added that I was "doing great" and acted as something of a hype machine for me. It felt good to get a midday pep talk, especially because I wasn't expecting one. ChatGPT's Advanced Voice Mode is excellent when you need a more positive perspective. Simply hearing "You can do this!", even from a chatbot, is remarkably uplifting.
Applying ChatGPT's technique let me shift my thinking from "I can't handle this" to "I can learn to handle this challenge," fostering resilience and a more positive outlook.
3. Get physical activity
Prompt: "In the summer I enjoy [list your favorite activities]. Can you suggest similar activities I could enjoy during the winter months?"
Goal: Explore ways to improve your appreciation of the winter season and build a list of mood-boosting options.
Physical exercise is known to improve mood and reduce anxiety. I've always enjoyed running and exercise in general, but even doing the things we love can be hard when we're overwhelmed by negative thoughts.
So I asked ChatGPT for indoor exercise suggestions suited to winter. I have a treadmill, but I was looking for activities that would help calm me down.
It recommended activities such as yoga, bodyweight exercises, and dance routines that can be done at home. From HIIT (high-intensity interval training) to Pilates and strength training, ChatGPT offered a variety of ways to work these exercises into my routine.
This not only improved my physical health but also significantly lifted my mood, effectively combating the winter blues.
4. Establish a gratitude practice
Prompt: "Can you help me list ten things to be grateful for right now? For each one, explain why it is important or meaningful."
Goal: Shift focus from negativity to positivity, improving overall mood through a gratitude practice.
Having experienced the joy of keeping a gratitude journal, I know that noting the things I'm grateful for each day can make a real difference. Instead of dwelling on things I can't change, emphasizing all the good I already have works wonders for my mood.
However, ChatGPT took this practice a step further by suggesting I pair gratitude with another habit. This was a game changer. It suggested pairing gratitude with a habit I already have, such as making coffee. For example: "While the coffee brews, I'll think of something I'm grateful for."
This practice helped me appreciate the positive elements of my life, reducing the impact of negative thoughts and fostering a more optimistic outlook during the winter season.
5. Set realistic goals
Prompt: "Here are three goals I want to reach by the end of this winter. Can you outline the steps I could take to achieve them?"
Goal: Help set achievable goals that provide direction and a sense of purpose during the winter months.
Setting and achieving small, realistic goals can provide a sense of accomplishment and purpose. I've used ChatGPT to help set resolutions and goals on my five-year bucket list, but consulting ChatGPT for day-to-day goal-setting strategies helped me stay focused and organized.
Achieving small goals each day boosted my self-esteem and kept me motivated despite the cold, dark days. By using ChatGPT as a consultant for goal-setting strategies, I was able to formulate SMART goals and set achievable objectives.
Whether it was making time for a craft I'd neglected or trying to cook a new dish, ChatGPT's daily support has been a game changer.
6. Plan a self-care day
Prompt: "Can you suggest ideas for a nourishing winter self-care routine, including activities that will help me feel better?"
Goal: Help establish a routine that prioritizes well-being and combats seasonal challenges.
After the holidays, it seems everyone goes back to their routine, which makes it harder to recover from the stress and anxiety of travel, financial burdens, or awkward social events that happened over the holiday season.
Social isolation can exacerbate negative thoughts, so I asked ChatGPT for ideas on maintaining social connections during winter. It suggested virtual get-togethers, joining online communities, and scheduling regular visits with friends and family.
While I already do these things, it gave me a new idea: checking in more often. By doing so, I'm more likely to feel comfortable reaching out to acquaintances, people I could turn into closer friends by summer.
ChatGPT also helped me understand the importance of engaging in activities that help me feel connected, which is itself a form of self-care. For that reason, I've been connecting more often with our neighbor, who is also a mom.
7. Try something new
Prompt: "This is my idea of a perfect winter day, from start to finish. Based on my resources and current situation, can you help me plan something similar?"
Goal: Inspire a vision of happiness and encourage incorporating elements of an ideal day into real life.
ChatGPT suggested that one of the best ways to chase away the blues is to try something new. Whether it's a trip to the trampoline park or a snowy hike through scenic woods, trying new activities can help break the monotony of a long winter.
This suggestion felt very personal. I know I can fall into routines that sometimes put me on autopilot, especially on weekdays. I realized that shaking things up and trying something new can be one of the best ways to break winter's predictability. I asked ChatGPT for ideas and gave it my ZIP code so it could suggest options near me.
Did ChatGPT help?
Using ChatGPT as a resource offered practical strategies for building mental strength and challenging negative thoughts during the winter months.
By implementing mindfulness meditation, reframing negative thoughts, getting physical activity, establishing a gratitude practice, setting realistic goals, maintaining social connections, and seeking professional support, I noticed a marked improvement in my mental well-being.
While AI can provide valuable guidance, it's essential to tailor these strategies to individual needs and consult professionals when necessary. Adopting these practices can lead to a more resilient mindset, allowing you to face winter's challenges with greater ease and positivity.
Note: The strategies mentioned are based on general advice and personal experience. For personalized mental health support, consulting a licensed professional is recommended.