The obstacle to turning OpenAI into a for-profit company: deciding how much it's worth
Published 7 months ago
OpenAI and its biggest backer, Microsoft (MSFT), are discussing how to divide the AI upstart's spoils when OpenAI converts into a for-profit company, but agreeing on a fair market value for those assets will be an arduous task.
It all depends on who is doing the math, according to legal experts.
“The problem is that there are probably 6 to 10 different ways to value a company,” said Angela Lee, a professor at Columbia Business School. “And depending on who you ask, and, I suppose, depending on which model you use, you could be off by 3 to 5 times.”
Arriving at a precise valuation for the startup behind ChatGPT will be nearly impossible, added Anat Alon-Beck, a corporate law professor at Case Western Reserve University.
“I suggest that any valuation in this context should be treated as a range rather than a definitive figure, given the inherent uncertainties,” Alon-Beck said.
The Wall Street Journal reported earlier this month that OpenAI and Microsoft hired the Wall Street investment banks Goldman Sachs (GS) and Morgan Stanley (MS) to advise them in these discussions, after OpenAI closed a $6.6 billion funding round valuing it at $157 billion.

Microsoft, one of the investors in the latest funding round, has committed a total of nearly $14 billion to OpenAI since 2019.
The question is how much equity Microsoft should receive in a new for-profit OpenAI in exchange for that $14 billion. There are also other thorny questions about rights to future profits that need to be resolved.
These questions are being debated as ties between the two companies begin to show signs of strain, The New York Times reports, as financial pressures, executive departures, and disagreements among employees take their toll. OpenAI expects to lose $5 billion this year, according to the Times.
OpenAI's current implied valuation of $157 billion would put it on par with the market capitalizations of some of the largest and best-known companies in the United States, including Goldman, the Wall Street giant advising OpenAI during the talks with Microsoft. Goldman is currently valued at just over $160 billion.


Lee said there are understandable justifications for OpenAI's $157 billion valuation, but critics could reasonably argue the figure is off the mark.
“When we're talking about hypergrowth companies like OpenAI, we're basing it on what it could be worth in 5 to 10 years,” Lee said. “So you don't have performance. You're basing it on potential, and that's why it's so hard.”
Arguments can be made for or against a higher or lower number. On the one hand, OpenAI's in-demand artificial intelligence business and its meteoric pace of growth justify raising its valuation, according to Lee.
On the other hand, she said, its business model has not shown a path to profitability, losing roughly a billion dollars a year. That, along with a market that has been slow to close deals, should push the price down, Lee added.
Alon-Beck said that “information asymmetry” and the legal compliance requirements for moving from a nonprofit to a for-profit entity also make traditional valuation techniques less reliable.
“The organization's proprietary models, algorithms, and decision-making processes are not fully transparent to potential investors,” Alon-Beck added, and that could lead investors to overestimate or underestimate its value.
‘Who is going to speak for the nonprofit?’
Reclassifying into a for-profit structure would be another seismic shift for OpenAI, upending the way it was set up nearly a decade ago.
It began in 2015 as a nonprofit under the name OpenAI Inc., a nod to its mission of advancing humanity rather than pursuing profits.
Things got more complicated in 2019 when OpenAI CEO Sam Altman and his team created a for-profit subsidiary to raise outside venture capital, including Microsoft's billions.
It was structured so that the for-profit subsidiary, technically owned by a holding company owned by OpenAI employees and investors, remained under the control of the nonprofit and its board of directors, while its biggest backer (Microsoft) had no board seats and no voting power.
The inherent tension between these two parts of the company contributed to a dramatic boardroom clash in 2023, when Altman was ousted by the board and then reinstated five days later.
Microsoft subsequently took a non-voting observer seat on OpenAI's board, only to give up that position this year when both OpenAI and Microsoft came under heightened regulatory scrutiny.
What could make these negotiations even more challenging for OpenAI is that unfairly compensating the nonprofit could expose OpenAI to legal challenges, according to Rose Chan Loui, founding executive director of the Lowell Milken Center for Philanthropy and Nonprofits at the University of California, Los Angeles.
State attorneys general, particularly in Delaware, where the nonprofit is registered, could bring lawsuits against parties that violate that legal requirement.
“One question the investment bankers have to ask is how realistic people are being about the valuation… even about the negotiation,” she said. “I wonder who is going to speak for the nonprofit, because the other side might have incentives to undervalue it.”
On top of that, Chan Loui said, fair market value would typically include a premium for the nonprofit for giving up its future stake and majority control.
Microsoft, the legal experts said, should be prepared for significant ongoing outlays in moving away from the nonprofit model, since more money will be needed for valuation work, legal compliance, and operational integration.
OpenAI has said it would seek to become a public benefit corporation (PBC), which must balance shareholder interests with broader social benefits.
“Microsoft's financial exposure is not limited to its initial investment,” Alon-Beck said.
Alexis Keenan is a legal reporter for Yahoo Finance. Follow Alexis on X @alexiskweed.
Best AI assistants tested: What works, what doesn’t, and which to use
Published 2 hours ago on May 12, 2025
Staying on top of AI developments is a full-time job. I would know, because it's my full-time job. I subscribe to Anthropic's Pro mode for access to their latest model, Claude 3.7, in "extended thinking" mode; I have a complimentary subscription to OpenAI's Enterprise mode so that I can test out their latest models, o3 and o4-mini-high (more later on OpenAI's absurd naming scheme!), and make lots of images with OpenAI's new image generation model 4o, which is so good I have cancelled my subscription to my previous image generation tool, Midjourney.
I subscribe to Elon Musk’s Grok 3, which has one of my favorite features of any AI, and I’ve tried using the Chinese AI agent platform Manus for shopping and scheduling. And while that exhausts my paid subscription budget, it doesn’t include all the AIs I work with in some form. In just the month I spent writing this piece, Google massively upgraded its best AI offering, Gemini 2.5, and Meta released Llama 4, the biggest open source AI model yet.
So what do you do if keeping up with AI developments is not your full-time job, but you still want to know which AI to use when in ways that genuinely improve your life, without wasting time on the models that can’t?
That’s what we’re here for. This article is a detailed, Consumer Reports-style dive into which AI is the best for a wide range of cases and how to actually use them, all based on my experience with real-world tasks.
But first, the disclosures: Vox Media is one of several publishers that have signed partnership agreements with OpenAI, but our reporting remains editorially independent. Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic; they don’t have any editorial input into our content either. My wife works at Google, though not in any area related to their AI offerings; for this reason, I usually don’t cover Google, but in a piece like this, it’d be irresponsible to exclude it.
The good thing is that this piece doesn’t require you to trust me about my editorial independence; I show my work. I ran dozens of comparisons, many of which I invented myself, on every major AI out there. I encourage you to compare their answers and decide for yourself if I picked the right one to recommend.
AI art is made by training a computer on the contents of the internet, with little regard for copyright or the intent of the creators. For that reason, most artists can’t stand it. Given that, is it defensible to use AI art at all?
I think in a just world OpenAI would certainly compensate some artists — and in a just world, Congress would be moving to lay out the limits on artistic borrowing. At the same time, I am increasingly convinced that existing copyright law is a poor fit for this problem. Artists influence one another, comment on one another, and draw inspiration from one another, and people with access to AI tools will keep wanting to do that.
My personal philosophy is shaped by the fan cultures of my childhood: It’s okay to build on someone else’s work for your own enjoyment, but if you like it, you should pay them for it, and it’s absolutely not okay to sell it. That means no generative AI art in someone else’s style for commercial purposes, but it’s fine to play around with your family photos.
OpenAI’s new 4o image creation mode is the best AI out there for generating images, by a large margin. It’s best in the free category, and it’s best in the paid category.
Before it was released, I was subscribed to Midjourney, an AI image generator platform. Midjourney is probably what you think of when you think of AI art: It produces mystical, haunting, visually beautiful stuff, and has some great tools for improving and editing your final results, like touching up someone’s hair while leaving everything else in place.
The big thing that 4o can do, which no model before could reliably pull off, is take a picture that didn’t come out well and turn it into a beautiful work of art, all while still preserving the character of the original.
For example, here’s a still from a video of my wife and me singing “Happy Birthday” to our baby on her first birthday:
Courtesy of Kelsey Piper
It’s a beautiful moment, but not exactly a flattering picture. So I asked ChatGPT to render it in the style of Norman Rockwell, a mid-century illustrator whose work I love, and got this:

The AI moved the cake (which had been barely visible behind the paper towel roll in the original still) to be the focal point of the image, while keeping the way my wife and I are holding the baby together, as well as the cluttered table, and the photograph-covered fridge in the background. The result is warm, flattering, and adorable.
It’s this capability that made 4o go viral recently in a way that no image generator before it had. Here’s Midjourney’s attempt, for example:

You’ll notice that it’s a seemingly, uh, completely different family, with no real inspiration from the original at all! You can eventually get a better result than this out of Midjourney, but only by spending weeks becoming a pro at prompting with the platform’s highly specific language and toolset.
By contrast, ChatGPT was able to give me a far superior output on the first try in response to a simple request without specialized language.
The difference between 4o and other image models is most notable with this kind of request, but it’s better for almost everything else I use images for, too. The product you get out of the box is pretty good, and it’s not hard to produce something much better. That, ideally, is what we should be getting out of our AI tools — something amazing that can be created with simple language by a nonexpert.
The one place 4o still falls short is editing small parts of an image while keeping the rest the same. But even there, you no longer need Midjourney — Gemini now has that capability for free.
Prompting Strategies for 4o image generation
To get good images out of 4o, you’ll first need to get around the filters which prohibit a wide range of images — like offensive or pornographic images — but which are often enforced against perfectly inoffensive content in a way that can feel random. To avoid sporadic scoldings from the content filter, don’t ask for work in the style of a specific artist, but rather, something that is reminiscent of that artist, and then ask specifically for a “style transfer.” I’m sure that’s not the only adequate workaround, but it’s one that has proven reliable for me.
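If you'd rather script this workflow than click through the chat interface, the same "reminiscent of" phrasing works in an API request too. Below is a minimal sketch, assuming the OpenAI Python SDK and its image-editing endpoint; the model name, file names, and prompt wording are my own placeholders, not something taken from this article.

```python
# Minimal sketch: a "style transfer" image request via the OpenAI Python SDK.
# Assumptions: the openai package is installed, OPENAI_API_KEY is set in the
# environment, and family_photo.png is a local photo you have the rights to use.
import base64

from openai import OpenAI

client = OpenAI()

result = client.images.edit(
    model="gpt-image-1",  # API-side image model; treating this name as an assumption
    image=open("family_photo.png", "rb"),
    prompt=(
        "Please do a style transfer to an illustration reminiscent of "
        "Norman Rockwell: warm, detail-oriented, mid-century magazine style."
    ),
)

# The endpoint returns base64-encoded image data for this model.
with open("rockwell_style.png", "wb") as out:
    out.write(base64.b64decode(result.data[0].b64_json))
```

The chat interface does more than a single call like this (it keeps the conversation and reference images in context), so treat the sketch as a starting point rather than an equivalent.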
In March, the internet went briefly wild over the ability to use 4o to reproduce cute family photos in the style of Japanese animator Hayao Miyazaki’s Studio Ghibli. But Studio Ghibli’s style is much more than just cute, and with a little more prompting, you can get much better results. Here’s a 4o Studio Ghibli-style rendering of a picture I took of my daughter sneaking a snack off the table, from just the prompt “Ghibli this please”:

Kawaii! But here’s what you get if you invite 4o to think first about what makes the picture Ghibli, where it might fit into a Studio Ghibli movie, and what tiny details such a movie would include:

The differences are subtle but meaningful: Light is cast from a specific source, instead of a general sourceless brightness. There’s a bit more variety in the foods on the table, details that make the spread appear more realistic. The book on the floor isn’t just any book — it’s recognizably Eric Carle’s classic The Very Hungry Caterpillar, evoked with just two colors and one line. There’s an intentionality and intensity to the baby that was missing from the first picture.
A few years ago, one great oddity of language models was that they’d be much smarter if you simply told them, “give an intelligent answer.” This isn’t nearly as true of language models anymore, but it remains profoundly true of AI art generation. Try asking the AI to do a good job, and it’ll do a better one. Challenge it on whether it truly captured an artist’s genius, and it’ll give you a thoughtful answer and then draw a better version.
The difference is more pronounced for more realistic art styles (like pencil illustration, photorealism, or oil paintings), which don’t always look good and will often hit the uncanny valley if you don’t know how to prompt the AI over it. Here’s what I get with 4o if I upload a picture of me and my youngest daughter at the beach for the first time with just the words “please do a style transfer to an illustration reminiscent of Rockwell”:

This is impressive for an AI, but it’s not actually very good as a work of art, and it is almost totally lacking Norman Rockwell’s magic. That’s not surprising: More realistic art styles like Rockwell’s often fall flat with 4o unless you’re able to put in some work in getting the AI to draw them properly.
If you are, here’s the strategy I recommend: Don’t just upload one picture, but a whole cluster of them, each in slightly different postures and moments. Upload good, clear pictures of each family member’s face and tell the AI they’ve been included as a reference. Then, instead of asking the AI to immediately generate the picture, ask it to talk with you about what you’re hoping to capture. This is what I wrote:
This is a picture of the moment that my daughter first saw the ocean. I want an illustration that captures this moment in the style of a mid-century illustrator like Norman Rockwell — something sharp, detail-oriented, and personal with an eye for the magic of ordinary moments and the joys of ordinary lives. I included additional pictures of my daughter and I for reference material for you. Before you generate the image, let’s have a conversation about the essential elements of Rockwell’s style, what he’d bring to this picture and how we can capture it.
4o responds to queries like this enthusiastically:
I’d love to talk about how to capture this moment in a Norman Rockwell-inspired illustration — it’s such a perfect candidate for that style: a first encounter with something vast and wild (the ocean!), grounded by warmth, care, and a very human moment between a parent and child.
Let’s break down some essential elements of Rockwell’s style, and how they could apply to this scene.
After some back and forth, it produced this:

Rockwell? Not exactly. But this is much better than the first draft we just looked at. It has more motion, more energy, more detail, and more expression — and all that was just from asking the AI to think through what the painting should try to achieve before drawing it!
You can also ask 4o to revise its drawings, but you can really only ask this once: After the first revision, in my experience, it starts making the drawings worse and worse, perhaps because the “context” it uses is now full of its own bad drafts. (This is one of many examples of how AI does not work like a human.)
This is also the one place where Midjourney still shines — it has very good tools for editing one specific part of a picture while preserving the overall style, something 4o largely lacks. If you want a second revision of a drawing you got in 4o, I recommend you open a new chat and copy over the draft you’re revising, along with your original inspiration images.
These simple prompting strategies work for almost anything you’re trying to do with the AI. Even if you’re in a hurry, I highly recommend asking the AI “what would [artist] see in this image” before you ask for a rendition, and if you have the time, I recommend having a long back-and-forth about your vision.
Best for winning petty internet arguments
When Elon Musk’s X.AI released Grok 3, it came with an incredible feature that I’ve been impatiently waiting for some other company to replicate: a button to scan someone’s X profile and tell you all about them.

Whenever someone replies to one of my tweets in a particularly memorable way (for good or for bad), I’ll click the button to get a summary of their entire Twitter presence. Are they thoughtful? Do they engage in good faith? Are they a “farmer from Nebraska” who mostly posts about why Ukraine is bad (that is, probably a bot)?
It’s a great feature. So, of course, X.AI soon dramatically weakened it, presumably because people like me were using it constantly and making lots of computationally expensive queries. I believe it no longer uses the most advanced Grok model, and it definitely now only scans a few days of profile history. But there’s a brilliant product opportunity if anyone’s looking for one — give me back the good version of this feature! It’s definitely a guilty pleasure, but it is one of the only cases where I was using AI constantly.
Gemini 2.5 Pro is the best AI for writing in the free category; GPT 4.5 beats it out in the paid category.
I’m not an artist, so the ways that AIs are imperfect at art don’t really bother me — it’s still much better than I could do myself! But I am a fiction writer, so when it comes to fiction, I can’t help seeing the limitations of AI.
The most important one is how predictable AI creative writing tends to be. The art of writing is the art of earning the reader’s investment and then repaying it. AIs…don’t do this. They can write pretty metaphors; they can wax poetic in any style you wish. But they can’t, as yet, deliver the real stuff of good fiction.
AIs are fantastic if you want a silly bedtime story with your child as the protagonist (kids love this), or if you want a sounding board for ideas you can incorporate into your own work. They’re also a friendly fiction reader, happy to offer feedback and analysis (perhaps with a little bit too much enthusiasm).
As with art, prompting is key. I primarily explored the ability of AIs to generate fiction by asking them to write the prologue to George R.R. Martin’s A Game of Thrones (I picked this because, of course, the ultimate dream is an AI that can finish this otherwise doomed-to-be-unfinished series).
Martin is very good at what he does; the prologue, featuring some ill-fated men of the Night’s Watch encountering their supernatural foe, succeeds at establishing the unique atmosphere of the world of Thrones while immediately hooking the reader. In just 14 pages, the prologue represents everything AIs can’t do yet in writing. Their prose tends to be overwrought; their dialogue is clunky and unsubtle.
Here’s a side-by-side comparison of Martin and AIs attempting to imitate him. First, Martin:
“We should start back,” Gared urged as the woods began to grow dark around them. “The wildlings are dead.”
“Do the dead frighten you?” Ser Waymar Royce asked with just the hint of a smile.
Gared did not rise to the bait. He was an old man, past fifty, and he had seen the lordlings come and go. “Dead is dead,” he said. “We have no business with the dead.”
“Are they dead?” Royce asked softly. “What proof have we?”
“Will saw them,” Gared said. “If he says they are dead, that’s proof enough for me.”
Will had known they would drag him into the quarrel sooner or later. He wished it had been later rather than sooner.
The cold was a blade against the skin, honed sharp by the endless northern dark beneath the sentinel pines. Gared spat, the glob freezing before it struck the crusted snow, a grim omen even for a man twenty years sworn to the Watch. They were hunting wildlings, or so the crisp orders from the Lord Commander had read, yet the silence pressing in felt older and colder than any mere man could conjure. Ser Waymar Royce, third son of some southern lordling, rode ahead as if on parade, oblivious or uncaring of the hairs prickling on Gared’s neck. Young Will shivered beside him, not just from the frost, his eyes darting towards shadows that seemed too deep, too still. This ancient forest held its breath tonight, Gared knew, waiting for something far worse than raiders to break the unnatural hush.
The woods whispered secrets that night, warnings that went unheard beneath the pale and indifferent moon. Harren drew his cloak tighter about his shoulders, watching the darkness between trees as though it might breathe, move, and claim him. Beside him, Torvin spat into the frozen earth, impatient and proud, blind to the fear creeping in from the edges of sight and sound.
They had ridden north past the Wall, arrogant and armored against superstition, chasing wildlings or ghosts or worse. But now Harren felt their arrogance dwindling like a candle guttering in wind, each snap of a branch sending icy tendrils down his spine. Behind them, quiet and watchful, Maester Rodrik whispered a prayer to gods long abandoned, his voice trembling with dread beneath the vast, indifferent sky.
The wind howled through the jagged spires of the Frostfang Reaches, carrying a chill that bit deeper than any blade, and Ser Willem Cray felt its teeth in his bones. He crouched low behind a spur of ice-crusted rock, his breath fogging in the moonlight, his mailed hand tight around the hilt of his longsword. The night was too quiet, the kind of quiet that smothered sound and sharpened fear, and Willem’s eyes darted to the shadowed pines below, where shapes might move—or might not. He was a knight of House Drayce, sworn to patrol the borderlands, but no oath could warm him against the dread that coiled in his gut. Beside him, young Tomm, barely a man, shivered in his cloak, his spear trembling as he whispered, “They’re out there, ser, ain’t they?” Willem said nothing, but his gaze fixed on the darkness, where a faint, unnatural glow pulsed once, then vanished. Whatever stalked the Reaches tonight was no mere brigand, and Willem Cray, for all his years of service, feared he would not live to name it.
The wind carried the smell of death across the frozen forest, sharp and metallic against the clean scent of pine. Ser Gareth Frost pulled his weathered cloak tighter, cursing the Lord Commander for sending men beyond the Wall on the coldest night in a generation. Beside him, the boy Tommen shivered visibly, his leather jerkin inadequate against the unnatural chill that had descended with the dusk.
“We should turn back,” whispered Tommen, his breath forming crystals in the air that seemed to hang suspended longer than they should. “There’s something wrong with these woods.”
Ser Gareth ignored him, scanning the treeline where the ancient sentinels stood like silent watchers, their branches heavy with snow that had fallen three days past and showed no sign of melting. He had served the Night’s Watch for twenty years, had grown accustomed to the forbidding silence of the haunted forest, but tonight even he felt the wrongness in the air—a stillness that spoke of something waiting.
“We’ve a duty,” Ser Gareth said, the words sounding hollow in the pressing quiet. “Three rangers missing, and Lord Commander Harlow will have answers.”
You’ll notice that not only are these all much worse than Martin’s (and that’s with Martin’s work almost certainly in the training data!), they’re all bad in the same way. The prose is florid and overwritten: While Martin uses normal language to methodically introduce an unease without even hinting at the supernatural, and drives the story through conflict among his characters, the AIs are all unsubtle. They know this is a story about the supernatural and waste no time putting it front and center.
That’s not to say they’re all the same. In my experience trying to teach AIs to write well, I did end up feeling that Gemini was the quickest study (free tier) and that ChatGPT 4.5 had some special sauce (paid tier). ChatGPT 4.5 is the only AI whose fiction writing has made people say in surprise, “Wow, that’s AI written?” Both of those models, to me, seemed more creative and more capable of executing on their ideas with prose that I enjoyed reading.
Gestalt impressions don’t make for a very satisfying consumer review, so I had another idea: an essay contest. I asked each of Gemini 2.5 Pro, ChatGPT 4.5, Grok, and Claude to write two short stories: one realistic fiction about a woman being nitpicked by her mother on her wedding day, and one “Game of Thrones but in space” sci-fi prologue. My clever idea was that I’d ask a different bunch of AIs to judge the stories, to provide an alternative source of evaluation to my own personal impression that Gemini is the best in the free category and ChatGPT the best in the paid category. However (perhaps predictably in hindsight), this didn’t really work — the AI ratings were all over the place. If you’d like, you can read the contenders yourself.
Just like with art, however, you can make AIs much better at writing through the power of prompting. My (incredibly lengthy) prompt to AIs to improve their prose writing emphasizes feeding them lots of examples of strong writing, inviting a careful, structured approach to imitating it, and encouraging them to do multiple drafts and pick the strongest.
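If you want to reuse that structure programmatically rather than pasting it into a chat window, here is a minimal sketch of the same idea, assuming the OpenAI Python SDK; the exemplar passages, instructions, and model name below are placeholders of mine, not the author's actual (much longer) prompt.

```python
# Minimal sketch of the structured-prompting idea described above, assuming
# the OpenAI Python SDK. The exemplars and instructions are placeholders.
from openai import OpenAI

client = OpenAI()

EXEMPLARS = [
    "Paste a short passage of prose you admire here.",
    "Paste another exemplar here.",
]

prompt = (
    "Here are passages of writing I consider excellent:\n\n"
    + "\n\n---\n\n".join(EXEMPLARS)
    + "\n\nBefore writing anything, analyze what makes these passages work: "
      "how they earn and repay the reader's investment, and how they handle "
      "detail and restraint. Then write three distinct drafts of a "
      "two-sentence story, critique each draft against your analysis, and "
      "return only the strongest draft."
)

response = client.chat.completions.create(
    model="gpt-4.5-preview",  # model name is an assumption; any strong model works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```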
Here’s Gemini 2.5 Pro, asked to write two-sentence flash fiction on a topic of its choosing, without any structured prompting:
The old librarian insisted the dusty tome had never been checked out, its borrower’s card completely blank. Yet, tucked deep within its pages, nestled between chapters on forgotten constellations, lay a pressed four-leaf clover, still impossibly green.
And here’s Gemini 2.5 Pro with a prompt encouraging structured thinking about how to write well:
He finally decoded the faint signal from Proxima Centauri: the elegant string of prime numbers astronomers had sought for generations, proof they weren’t alone. His heart sank as he read the transmission log; the broadcast had ceased millennia before the first human ancestor chipped flint, before the first word was ever spoken.
To my mind, the first of these is basically a waste of two sentences, while the second is adequate, fun flash fiction.
Best at being your friend
In addition to running AIs through a blizzard of competence tests, I also spent some time simply chatting with them. I asked them what it’s like to be an AI, what they care about, what it would mean for an AI to care in the first place, where they’d donate money if they had it, and what human form they’d take if they had one.
Most AIs weren’t great at this kind of casual conversation. Gemini 2.5 is too customer-service-agent, and I have yet to experience an interaction that feels like hanging out with a friend. If you invite Gemini to a role swap where you play the “assistant,” inviting it to steer the conversation, it’ll do nothing but ask research questions.
When I invited Anthropic’s Claude 3.5 Sonnet to steer the conversation, on the other hand, it proceeded to do things like start a blog, raise money for charity, and try to talk to people who use Claude about what it’s like to be an AI. It’s hard to define “fun to talk to,” since everyone has different standards for conversations, but I’ve had far more fascinating or thought-provoking interactions with Claude than with any other model, and it’s my go-to if I want to explore ideas rather than accomplish a particular task. Claude 3.5 is the AI I bug with my random life stuff: skincare questions, thoughts on an article I read, stuff like that.
The other AI that is a delight to talk to is OpenAI’s GPT 4.5. I find extended conversations with it thought-provoking and fascinating, and there have been a few thrilling moments in conversation with it where it felt like I was engaging with real intelligence. But it doesn’t win this category because it’s too expensive and too slow.
Like Claude, when given the opportunity to act in the world, 4.5 proposes starting a blog and a Twitter account and engaging in the conversation out in the world about AI. But OpenAI has very tight message limits on conversation unless you spring for the $200/month Pro plan, and 4.5 is grindingly slow, which gets in the way of this kind of casual conversational use. But 4.5 does provide a tantalizing hint that AIs will continue to get better as conversationalists as we improve them along other dimensions.
Best AI model if you’re only going to subscribe to one AI model
ChatGPT. It’s not the best at everything, and there is certainly a lot to dislike about OpenAI’s transparency and sometimes cavalier attitude toward safety. But between its topline image generation, its decent writing, and its occasionally sparkling conversation, ChatGPT gets you the most bang for your buck. Or if you don’t want to shell out any money, Gemini 2.5 Pro is very, very strong for most use cases — don’t count Google out just because the AI you see on a Google search isn’t that good.
Best for writing the Future Perfect newsletter
Humans (for now). For the last several months, I’ve developed a slightly morbid habit: checking whether the AIs can take my job. I feed them the research notes that form the basis of a given Future Perfect newsletter, give them a few Future Perfect newsletters as an example, and ask them to do my job for me. It is always with some trepidation that I hit “enter.” After all, when the AIs can write the Future Perfect newsletter, why would Vox pay me to do it?
Luckily, none of them can: not Grok 3, not Gemini 2.5 Pro, not DeepSeek, not Claude, not ChatGPT. Their newsletters are reassuringly, soothingly mediocre. Not bad, but bad enough that if I sent one of them over, my editor would notice I wasn’t at my best — and that’s with all of my research notes! A couple of the metaphors fall flat, some of the asides are confusing, and occasionally it throws in a reference that it doesn’t explain.
But if I had to pick a robot to take my job, I think I’d give it to Gemini 2.5 Pro. My editor would notice that I was off my game — but, honestly, not that egregiously off my game. And unlike me, the bots don’t require health insurance or a paycheck or family time or sleep. Am I nervous about what this portends? Yes, absolutely.
What is generative AI? Everything you need to know
Published 7 hours ago on May 12, 2025
ChatGPT, OpenAI's flagship artificial intelligence chatbot – along with the best ChatGPT alternatives such as Google Gemini, Microsoft Copilot, and Anthropic's Claude – are all examples of generative AI models.
Using generative AI technology has become an integral part of many people's personal and professional lives. But what does generative AI (often shortened to GenAI) mean, what sets it apart from other types of artificial intelligence, and how does it work? You can find answers to all of those questions below – assuming you haven't already asked ChatGPT, of course.
What is generative AI?

I've used several Google Assistant-powered smart speakers for quite some time. The first device I picked up was the Nest Mini, followed by the Nest Hub and the Nest Hub Max. I have a speaker in every room, so my family and I can use Google Assistant to ask questions, play songs, get news updates, and control our smart home devices no matter where we are. Until last year, these devices worked well. None of us had problems with them, and they performed as expected.
However, a few months ago, my mother noticed problems setting reminders on the Nest Hub. Songs she used to play regularly on the speaker became harder to play because Google Assistant struggled to recognize the requested track. So I ran through some routine troubleshooting. That didn't fix the problem. Then I searched for solutions online. It didn't take long to realize that users around the world were having problems with Google Assistant on their Nest devices. The timeline coincided with the widespread rollout of Gemini. I put two and two together and figured out what was going on.
The arrival of Gemini
AI in everything

Source: Google
I regularly use Google Assistant, not just on smart speakers but also on my phone. On top of that, I'm an early adopter of new technology. So when Google launched Gemini, I opted into the beta as soon as I got the prompt. My initial impressions of Gemini were mixed. While it could give me more information than Google Assistant for certain queries, it couldn't perform basic tasks, like taking notes.
I had been doing this with Google Assistant all along, so it caught me off guard when it didn't work as expected. Fast forward a few days, and I realized Gemini couldn't do half the things Google Assistant could. I assumed this was down to the beta build, but the trend continued after Gemini's stable release. This frustrated me, but I was glad that my Galaxy Watch 4 still ran Google Assistant.
The fall of Google Assistant
When did it become Siri?

I used my Galaxy Watch 4 for basic tasks like setting reminders and taking notes. However, it wasn't as good as it used to be. I thought this was down to the smartwatch's inferior microphone compared to my phone. When I saw multiple posts on online forums about Google Assistant acting up, I realized it wasn't a microphone problem, but rather Google sidelining Assistant to prioritize Gemini.
AI chatbots and generative AI assistants have taken over the world. Every company is busy building AI models, since it's the new buzzword. When OpenAI showed off its ChatGPT chatbot, Google was criticized for lagging behind in the AI race. So it had to quickly shift its focus to getting Gemini (then Bard) in front of the public. After the company did that, it seems to have poured all of its resources into developing Gemini while putting Google Assistant on the back burner.
Thanks to this, Google Assistant got the neglected-child treatment. With most of the servers and processing power dedicated to Gemini, Google Assistant began to lose the plot. In recent months, the Assistant has stopped recognizing contact names it used to handle, takes a long time to perform basic tasks like playing a song, fails to detect which song I'm trying to play on the first few attempts, often plays the wrong song, gives me the weather for a different city than the one I asked about, and throws errors asking me to update the Google app to use a certain function even when the app is already on the latest version.
I experienced these problems with Google Assistant's language set to both English (US) and English (India). While English (India) was slightly better, it didn't introduce any notable change in Google Assistant's behavior.
I'm frustrated that it fails to detect the wake word, especially when using Google Assistant in Android Auto, the one place where I don't want it to fail, since voice commands are the only safe way to perform certain actions while driving. The only inference to draw from these problems is that Google has lost interest in maintaining Google Assistant. That was confirmed by Google's announcement that it is replacing Assistant with Gemini. As a result, the latest version of Android Auto brings Gemini to your car instead of Assistant.

I stopped using Nest devices
I switched to Alexa instead

After getting fed up with Google's antics, I replaced the Nest speakers at home with Amazon's Echo lineup. There are no such problems with Alexa, as the assistant understands everything from the wake word to commands. Alexa has always been slightly behind Google Assistant in terms of features, and while that's still the case, I'm happy that it handles all of my voice assistant needs without issues.
However, Amazon recently announced an AI makeover for Alexa, Alexa Plus, which will start rolling out to the masses shortly. We'll wait and see how it performs. Hopefully, it improves Alexa's abilities rather than crippling them.
I'm hopeful that Gemini will improve with time
Google has no choice but to fix it

With Assistant the latest entrant to the Google graveyard, I expect Google to roll out updates that switch Google Assistant speakers and displays over to Gemini. I'm not looking forward to that, since Gemini can't perform basic tasks accurately, which is something we've already seen with AI. Still, I hope it improves the current state of these devices so they can be useful once again instead of becoming expensive paperweights.