How Mark Zuckerberg went all-in to make Meta a major AI player and threaten OpenAI’s dominance

It was the summer of 2023, and the question at hand was whether to release a Llama into the wild.
The Llama in question wasn’t an animal: Llama 2 was the follow-up release of Meta’s generative AI model—a would-be challenger to OpenAI’s GPT-4. The first Llama had come out a few months earlier. It had originally been intended only for researchers, but after it leaked online, it caught on with developers, who loved that it was free—unlike the large language models (LLMs) from OpenAI, Google, and Anthropic—as well as state-of-the-art. Also unlike those rivals, it was open source, which meant researchers, developers, and other users could access the underlying code and its “weights” (which determine how the model processes information) to use, modify, or improve it.
Yann LeCun, Meta’s chief AI scientist, and Joelle Pineau, VP of AI research and head of Meta’s FAIR (Fundamental AI Research) team, wanted to give Llama 2 a wide open-source release. They felt strongly that open-sourcing Llama 2 would enable the model to become more powerful more quickly, at a lower cost. It could help the company catch up in a generative AI race in which it was seen as lagging badly behind its rivals, even as the company struggled to recover from a pivot to the metaverse whose meager offerings and cheesy, legless avatars had underwhelmed investors and customers.
But there were also weighty reasons not to take that path. Once customers got accustomed to a free product, how could you ever monetize it? And as other execs pointed out in debates on the topic, the legal repercussions were potentially ugly: What if someone hijacked the model to go on a hacking spree? It didn’t help that two earlier releases of Meta open-source AI products had backfired badly, earning the company tongue-lashings from everyone from scientists to U.S. senators.
It would fall to CEO Mark Zuckerberg, Meta’s founder and controlling shareholder, to break the deadlock. Zuckerberg has long touted open-source technology (Facebook itself was built on open-source software), but he likes to gather all opinions; he spoke to “everybody who was either for, anti, or in the middle” on the open-source question, recalls Ahmad Al-Dahle, Meta’s head of generative AI. But in the end it was Zuckerberg himself, LeCun says, who made the final decision to release Llama 2 as an open source model: “He said, ‘Okay, we’re just going to do it.’” On July 18, 2023, Meta released Llama 2 “free for research and commercial use.”
In a post on his personal Facebook page, Zuckerberg doubled down on his decision. He emphasized his belief that open-source drives innovation by enabling more developers to build with a given technology. “I believe it would unlock more progress if the ecosystem were more open,” he wrote.
The episode could have just been another footnote in the fast-unfolding history of artificial intelligence. But in hindsight, the release of Llama 2 marked a crucial crossroads for Meta and Zuckerberg—the beginning of a remarkable comeback, all thanks to tech named after a furry camelid. By the time Llama 3 models were released in April and July 2024, Llama had mostly caught up to its closed-source rivals in speed and accuracy. On several benchmarks, the largest Llama 3 model matched or outperformed the best proprietary models from OpenAI and Anthropic. One advantage in Llama’s favor: Meta uses publicly shared data from billions of Facebook and Instagram accounts to train its AI models.
The Llama story could be a pivotal chapter in the ongoing philosophical debate between open-source AI models (generally more transparent, flexible, and cost-effective, but potentially easier to abuse) and closed models (often more tightly controlled but lacking transparency and more costly to develop). Just as crucially, Llama is at the core of a complete strategic pivot on the part of Meta to go all in on generative AI. Zuckerberg is now seen as a champion of “democratizing tech” among Silicon Valley developers—just two years after he and his company were being questioned, and sometimes mocked, for going all in on the metaverse, and vilified for having contributed to political polarization, extremism, and harming the mental health of teenagers.
While ChatGPT remains the dominant gen AI tool in the popular imagination, Llama models now power many, if not most, of the Meta products that billions of consumers encounter every day. Meta’s AI assistant, which reaches across Facebook, Instagram, WhatsApp, and Messenger, is built with Llama, while users can create their own AI chatbot with AI Studio. Text-generation tools for advertisers are built on Llama. Llama helps power the conversational assistant that is part of Meta’s hit Ray-Ban glasses, and the feature in the Quest headset that lets users ask questions about their surroundings. The company is said to be developing its own AI-powered search engine. And outside its walls, Llama models have been downloaded over 600 million times on sites like open-source AI community Hugging Face.
Still, the pivot has perplexed many Meta watchers. The company has spent billions to build the Llama models: On its third-quarter earnings call, Meta announced that it projects capital expenditures for 2024 to reach as high as $40 billion, with a “significant” increase likely in 2025. Meanwhile, it’s giving Llama away for free to thousands of companies, including giants like Goldman Sachs, AT&T, and Accenture. Some investors are struggling to understand where and when, exactly, Meta’s revenue would start to justify the eye-watering spend.
Why open-sourcing Llama is good for Meta is “the big puzzle,” says Abhishek Nagaraj, associate professor at the University of California at Berkeley’s Haas School of Business, adding that it’s “hard to justify” from a purely economic standpoint.
Nonetheless, Llama’s contrarian success has allowed Zuckerberg to shrug off the lukewarm response to his metaverse ambitions and the company’s painful “year of efficiency” in late 2022 and early 2023. The rise of Llama has also given Zuckerberg a chance to address a long-simmering sore point in his otherwise meteoric career: the fact that Facebook, and now Meta, have so often seen their services and products constrained by rules imposed by Apple and Google—the rival giants whose app stores are Meta’s primary points of distribution in the mobile-device era. As he wrote in a July blog post: “We must ensure that we always have access to the best technology, and that we’re not locked into a competitor’s closed ecosystem where they can restrict what we build.”
“We got incoming requests from people who said, ‘You have to open-source that stuff. It’s so valuable that you could create an entire industry, like a new internet.’”
Yann LeCun, describing reactions to the 2023 leak of Llama
With Llama, Meta and Zuckerberg have the chance to set a new industry standard. “I think we’re going to look back at Llama 3.1 as an inflection point in the industry, where open-source AI started to become the industry standard, just like Linux is,” he said on Meta’s July earnings call—invoking the open-source project that disrupted the dominance of proprietary operating systems like Microsoft Windows.
Perhaps it’s this possibility that is giving Zuckerberg some new swagger. At 40, two decades after he cofounded Facebook, he appears to be enjoying what many are calling his “Zuckaissance”—a personal and professional glow-up. His once close-cropped haircut has given way to lush curls, the drab hoodies are swapped for gold chains and oversize black T-shirts, and his hard-edged expressions have softened into relaxed smiles. He even found time in November to collaborate with T-Pain on a remake of the hip-hop hit “Get Low”—an anniversary gift to his wife, Priscilla Chan.
In the long run, OpenAI’s ChatGPT may be seen as the fiery spark that ignited the generative AI boom. But for now, at least, Llama’s own future’s so bright, Zuckerberg has gotta wear AI-powered Ray-Ban shades.
Meta’s work on AI began in earnest in 2013, when Zuckerberg handpicked LeCun, a longtime NYU professor and an AI luminary, to run Facebook’s new FAIR lab. LeCun recalls that when he began discussing the role, his first question was whether Facebook would open-source its work. “Nobody has a monopoly on good ideas,” he told Zuckerberg, “and we need to collaborate as much as we can.” LeCun was thrilled with the answer he got: “Oh, you don’t have to worry about it. We already open-source our platform software and everything.”
But prior to the generative AI boom, Meta’s use of AI was mostly behind the scenes—either research focused or integrated under the hood of its recommendation algorithms and content moderation. There were no big plans for a consumer-facing AI product like a chatbot—particularly not when Zuckerberg’s attention was focused on the metaverse.
Generative AI began to take off with OpenAI’s release of ChatGPT, just as the Meta pivot was looking particularly unwise. With metaverse spending through the roof and consumers utterly uninterested, Meta’s stock hit a seven-year low, inspiring headlines like, “How Much Trouble Is Mark Zuckerberg In?” The company began laying off thousands of employees.
Meta’s first widely noticed foray into gen AI didn’t fare much better. In November 2022, FAIR released a demo of an LLM chatbot, trained on scientific texts, called Galactica. Like previous FAIR models, Galactica was released as open-source, allowing free access to the “brains” of the model. This openness was meant to enable researchers to study how Galactica functioned.
But these were the days before the public was fully aware of LLMs’ tendency to hallucinate—to sometimes spit out answers that are convincing, confident, and wrong. Many scientists were appalled by the Galactica chatbot’s very unscientific output, which included citing research papers that didn’t exist, on topics such as how to make napalm in a bathtub; the benefits of eating crushed glass; and “why homosexuals are evil.” Critics called Galactica “unethical” and “the most dangerous thing Meta’s made yet.”
After three days of intense criticism, Meta researchers shut down Galactica. Twelve days later, OpenAI released ChatGPT, which quickly went viral around the world, tapping into the cultural zeitgeist (despite its own serious hallucination issues).
Bruised but undeterred, researchers at FAIR spent the winter fine-tuning a new family of generative AI models called LLaMA (short for Large Language Models Meta AI). After the Galactica backlash, Meta was cautious: Instead of fully opening the code and model weights to all, Meta required researchers to apply for access, and no commercial license was offered. When asked why, LeCun responded on X: “Because last time we made an LLM available to everyone…people threw vitriol at our face and told us this was going to destroy the fabric of society.”
Despite these restrictions, the full model leaked online within weeks, spreading across 4chan and various AI communities. “It felt a bit like Swiss cheese,” Nick Clegg, Meta’s president of global affairs, says of the failed attempt to keep Llama behind closed doors. Meta filed takedown requests against sites posting the model online in an attempt to control the spread. Some critics warned of serious repercussions and excoriated Meta: “Get ready for loads of personalized spam and phishing attacks,” cybersecurity researcher Jeffrey Ladish posted on X.
The consternation even reached Capitol Hill. In June 2023, two U.S. senators wrote a letter to Zuckerberg, criticizing Llama’s release and warning of its potential misuse for fraud, malware, harassment, and privacy violations. The letter said that Meta’s approach to distributing advanced AI “raises serious questions about the potential for misuse or abuse.”
But at the same time, LeCun says, he and other Meta leaders were taken aback by the sheer demand for the leaked Llama model from researchers and developers. These would-be users wanted the flexibility and control that would come with open access to a profoundly powerful LLM. A law firm, for example, could use it to train a specialized model for legal use—and own the intellectual property. A health care company could audit and manage the data behind the model, ensuring HIPAA compliance. Researchers could experiment and examine the inner workings of the model. “We got incoming requests from people who said, ‘You have to open-source that stuff. It’s so valuable that you could create an entire industry, like a new internet,’” LeCun says.
Messages came directly to Zuckerberg, to CTO Andrew “Boz” Bosworth, and to LeCun, leading to weekly calls in which the leaders debated what they should do. Should they open-source the next release? Did the benefits outweigh the risks? By midsummer, Zuckerberg’s mind was made up, with backing from Pineau and LeCun—leading to the big July 2023 reveal.

Llama 2 was not entirely open. Meta did not disclose the datasets—including all that Facebook and Instagram material—used to train the model, which are widely regarded as its key competitive advantage. It also restricted usage by companies with more than 700 million monthly active users, primarily meant to deter Meta’s Big Tech competitors. But the source code and model weights could be downloaded, and Meta encouraged users to contribute improvements, bug fixes, and refinements of results to a collaborative community.
Even before the Llama 2 release, Zuckerberg had laid the groundwork to treat it like Meta’s next big thing. After the first Llama model was released, in February 2023, Zuckerberg quickly put together a team from across the company, including FAIR, to focus on accelerating generative AI R&D in order to deploy it in Meta app features and tools. He chose Ahmad Al-Dahle, a former Apple executive who had joined Meta in 2020 to work on metaverse products, to lead the new team.
At an internal all-hands meeting in June 2023, Zuckerberg shared his vision for Meta’s AI-powered future. Meta was building generative AI into all of its products, he said, and he reaffirmed the company’s commitment to an “open science-based approach” to AI research. “I had a big remit,” Al-Dahle says: “Develop state-of-the-art models; put them in product at record speed.”
In other words: It was game on for Llama.
Meta’s strategy can seem counterintuitive, coming from a company with $135 billion in annual revenue. Open-source software has typically been seen as a way to democratize technology to the advantage of small startups or under-resourced teams—the kinds scrambling to compete with giants like Meta.
In a July 2024 blog post called “Open Source Is the Path Forward,” Zuckerberg made it clear that giving away Llama is not an altruistic move. Open-sourcing, he said, would give Meta a competitive edge in the AI race—and could eventually make Llama the go-to platform for generative AI. Just as important, he wrote: “Openly releasing Llama doesn’t undercut our revenue, sustainability, or ability to invest in research like it does for closed providers” like OpenAI or Google.
Now that Llama has had a year-plus to prove itself, some are finding Zuck’s case persuasive. Shweta Khajuria, an analyst at Wolfe Research who covers Meta, calls releasing Llama as open-source “a stroke of genius” that will enable Meta to attract top talent, accelerate innovation on its own platform, develop new revenue sources, and extend its longevity. Already, she explains, open-sourcing Llama basically allowed Meta to quickly catch up to OpenAI, Google, and Anthropic, in part because thousands of developers are building and improving on Llama at a blistering pace. “If they had not open-sourced it, it probably would have taken a much longer time to be at par with other frontier models,” she says.
Khajuria believes there will be plenty of new monetization opportunities for Meta down the line, such as subscription and advertising options for current Meta AI features based on Llama, as well as AI-powered in-app business messaging. “Meta benefits from having billions of users where Perplexity and Claude and ChatGPT don’t necessarily have that base,” she says. “Once they have a critical mass of users and usage around the world, they can monetize.”
Zuckerberg has also alluded to the fact that AI-generated content itself will be valuable (though others have criticized such content as “slop”). On the recent earnings call, Zuckerberg said: “I think we’re going to add a whole new category of content, which is AI-generated or AI-summarized content, or existing content pulled together by AI in some way, and I think that that’s gonna be very exciting for Facebook and Instagram and maybe Threads, or other kinds of feed experiences over time.”
Patrick Wendell is CTO and cofounder of data and AI company Databricks, which released Meta’s Llama 3.1 models on its platform in July. He sees Meta’s move as much more far-reaching. If the internet was the first big wave of technology, which enabled Facebook’s creation, and mobile was the second, dominated by Apple and Google, “I think [Zuckerberg’s] calculus is the third big wave is coming, and he does not want to have one or two companies completely control all access to AI,” Wendell says. “One way you can avoid that is by basically commoditizing the market, giving away the core IP for free…so no one gains a monopoly.”
Some critics argue that Meta shouldn’t be using the term “open-source” at all. Current versions of Llama still have restrictions that traditional open-source software doesn’t (including lack of access to datasets). In October, the Open Source Initiative, which coined the term, criticized Meta for “confusing” users and “polluting” the nomenclature, and noted that Google and Microsoft had dropped their use of the term (using the phrase “open weights” instead). Clegg, Meta’s global affairs chief, is blunt in his rebuttal: He says the debate reminds him of “folks who get very agitated about how vinyl is the only true kind of definition of good music.” Only a handful of scientific and low-performing models would fit the definition, he continues: “No one has copyright IP ownership over these two English words.”
Nomenclature aside, Meta is winning where it matters. Nathan Lambert, a research scientist at the nonprofit Allen Institute for AI, says that while definitions might be quibbled over, more than 90% of the open-source AI models currently in use are based on Llama. Open-source coders accept that Zuckerberg “has some corporate realities that will distort his messaging,” he says. “At the end of the day, the community needs Llama models.”
Internally at Meta, Llama and revenue-generating businesses are increasingly inextricable. In January, Zuckerberg moved FAIR, the AI research group, into the same part of the company as the team deploying generative AI products across Meta’s apps. LeCun and Pineau now report directly to chief product officer Chris Cox, as does Al-Dahle. “I think it makes a lot of sense to put [FAIR] close to the family of app products,” says Pineau; she points out that even before the reshuffle, research her team worked on often ended up in Meta products just a few months later.
Zuckerberg also tasked FAIR with something far more ambitious: developing artificial general intelligence (AGI), a type of AI that possesses humanlike intelligence. The company prefers the term AMI (“advanced machine intelligence”), but whatever it’s called, Pineau says, Meta now has a “real road map” to create it—one that relies, presumably, on a thriving Llama. Meanwhile, the company is hard at work on Llama 4 models, which are being trained on a cluster of over 100,000 pricey Nvidia GPUs, one that Zuckerberg recently said was “bigger than anything that I’ve seen reported for what others are doing.”
Not everyone loves the idea of a bigger-than-anything Llama. For years, Zuckerberg and his company have grappled with public mistrust over the way it has used other types of AI to personalize news feeds, moderate content, and target ads across Facebook, Instagram, and WhatsApp. Critics have accused its algorithms of exacerbating political polarization, adolescent mental-health crises, and the spread of misinformation (accusations Meta has denied or rebutted); it was perhaps inevitable that Llama would face extra scrutiny.
Zuckerberg “does not want to have one or two companies completely control all access to AI. One way you can avoid that is by giving away the core IP for free, so no one gains a monopoly.”
PATRICK WENDELL, cofounder and CTO, Databricks
Some critics fear that an open-source model like Llama is dangerous in the hands of malicious actors, precisely because it’s too open. Those concerns may grow in today’s tense geopolitical atmosphere. On Nov. 1, Reuters reported that China’s army had built AI applications for military use on the back of an early version of Llama.
An incoming Trump administration could make it even more complicated to keep Llama open. Trump’s economic nationalism suggests he would not want China (or any other country) to access American-made, state-of-the-art AI models. But Llama’s future may depend on who has Trump’s ear: Vice President–elect JD Vance has spoken out in support of open-source AI in the past, while Elon Musk’s xAI has open-sourced its chatbot Grok (and Musk famously cofounded OpenAI as an open-source lab).
Even some of Zuckerberg’s oldest friends have concerns about this kind of arms race. Dustin Moskovitz, a cofounder of Facebook and now CEO of Asana (and the founder of Open Philanthropy, one of the biggest funders of AI safety initiatives), says that while he is not against open-source LLMs, “I don’t think it’s appropriate to keep releasing ever more powerful versions.”
But Zuckerberg and his allies, both within Meta and without, argue that the risks of open-source models are actually less than those built behind proprietary closed doors. Preemptive regulation of theoretical harms of open-source AI will stifle innovation, they say. In a cowritten essay in August, Zuckerberg and Spotify cofounder Daniel Ek noted that open-source development is “the best shot at harnessing AI to drive progress and create economic opportunity and security for everyone.”
Whatever the outcome of Meta’s increasingly loud open-source activism, many argue that Zuckerberg is exactly the right messenger. His personal involvement in promoting Llama and open-source, insiders agree, is the key reason Meta has been able to move with such speed and focus. “He’s one of a few founder leaders left at these big tech companies,” says Clegg. “One of the great advantages of that means you have a very short line of command.”
Zuckerberg has also been active in recruiting AI talent, often reaching out personally. A March 2024 report said that Zuckerberg had been luring researchers from Google’s DeepMind with personal emails that stressed how important AI was to the company.
Erik Meijer, who spent eight years at Meta leading a team focused on machine learning—before being laid off in November 2022—believes such a total shift is only possible with someone like Zuckerberg at the top. “It’s like pivoting a giant supertanker,” he says. “He’s a little bit like a cult hero inside the company, in a good sense, so I think that helps get all the noses in the same direction.” Zuckerberg’s new personal makeover, Meijer mused, is “maybe a very externally visible sign of renewal.”
Zuckerberg’s renewal, and Meta’s transformation, are sure to test investor patience due to skyrocketing capital expenditures. Khajuria, the Wolfe analyst, says investors will tolerate it for now “because Meta has laid the groundwork of telling folks what the opportunity is.” That said, if revenue does not begin accelerating, exiting 2025 into 2026, “I think investors will start losing patience,” she warns. (Zuckerberg is somewhat insulated from investor discontent; he controls about 61% of voting shares at Meta.)
One thing is clear, LeCun says: The kind of gamble Meta is taking, with its massive investment in GPUs and all things generative AI, requires a leader willing to take big swings. And Meta has not only that leader, but a massively profitable core business to fund the vision. As a result, Meta is back at the center of the most important conversation at the intersection of tech and business—and it’s not a conversation about legless metaverse avatars.
This article appears in the December 2024/January 2025 issue of Fortune as part of the 100 Most Powerful People in Business list.
Humor columnist versus ChatGPT: Am I funnier than a robot?

After years of resisting and defending my Luddite values, I have finally given in.
I finally used ChatGPT. And I worry that, someday, it will be much funnier than I am.
Since 2023, I have published Ask Lucas, a satirical advice column that is often mistaken for one written by a sociopath trying to incite random acts of violence in the community.
I secretly hoped that publishing this column would help poison the digital well that AI draws from by stealing our ideas. Injecting literally hundreds of Ask Lucas columns (distillations of pure idiocy and contempt for the sacred advice-column profession) into the web has to be enough to cancel out a couple of Octavia Butler novels, right?
Well, today we find out exactly what the robots think of me. I marched ChatGPT in at gunpoint and force-fed it several examples of the worst columns ever to curse the pages of The Plain Dealer.
Between bouts of unsolicited, insincere flattery, ChatGPT also made several observations about the lore underlying the Ask Lucas columns. One of those observations, in ChatGPT’s words: “Ohio is a cursed landscape full of ancient beings and seasonal misery.”
Everyone who lives here knows this is true, and I’m glad I was able to teach it something about the Mordor of the Midwest.
One thing I did find disheartening was ChatGPT’s dismissal of the true purpose of this column: educating my readers about the eels lurking in the plumbing, waiting to strike when we least expect it.
Here’s what ChatGPT came up with:
“Dear Lucas (ChatGPT): I recently moved into a new apartment and discovered that my upstairs neighbor has a Roomba that runs almost exclusively between 2 and 4 a.m. It is strangely loud, and I’ve lost several nights of sleep to the sound of it bumping into their furniture. I don’t want to be the grumpy neighbor, but I don’t want to lose my mind either. What should I do?
Dear reader (ChatGPT): Ah, yes, the ritual hours. Two to 4 a.m. is prime time for the Roomba to perform its sacred duties.
You might think it’s cleaning. But no, Roombas don’t clean. They hunt. It’s scanning the building for psychic weaknesses, charting ley lines, and gently bumping your neighbor’s ottoman to keep the portals sealed. You’re not dealing with a robot vacuum. You’re dealing with an insomniac guardian.
People have forgotten that most modern “smart” devices are simply new iterations of medieval tools. The air fryer is a convection-based sacrificial altar. The Apple Watch is a cursed sundial that whispers reminders directly into your blood. And the Roomba? That’s a sigil-carved, boundary-patrolling golem with Bluetooth capabilities.
If you were smart, you would thank your neighbor for keeping the beast active during the witching hours. The last thing you want is for that Roomba to go idle. Left idle too long, a Roomba begins to dream. And when it dreams, the walls grow thin.
But you’re not here for the truth. You’re here for advice. Very well.
Option one: confront your neighbor. Knock on their door and tell them you’re being haunted by a hockey-puck-shaped disc with abandonment issues. They may apologize. Or they may look you in the eye and say, “We do what we must to keep it occupied.” If that happens, leave. Don’t pack. Don’t say goodbye. Just move.
Option two: fight technology with technology. Get your own Roomba. Run it from 2 to 4 a.m., directly beneath theirs. Done right, the two devices will meet in the middle: two dead-eyed gladiators locked forever in a loop of bump, spin, recalibrate. This could break the curse. Or it could awaken something. Either way, it will be quieter.
Option three: accept your fate. Noise-canceling headphones, white-noise machines, the usual bourgeois coping mechanisms. Or earplugs if you’re poor.
In the end, it’s probably not about the Roomba. It’s about the fragility of peace in our modern world. Every time we feel at ease, something starts beeping in the ceiling. Maybe this is a test. Maybe it’s an omen. Or maybe it’s just your neighbor’s terrible taste in robot scheduling.
Sleep well. Or don’t. Your move.”
Lucas (the human): To create this “column,” I fed five of my columns into ChatGPT and had it produce three new ones, letting the robot create both the question and the answer.
This isn’t horribly different from how I work (I write many of my own questions), except that instead of taking unnecessary coffee breaks and antagonizing my wife while she tries to work, ChatGPT just shuts up and does the job.
These columns usually take me about three to four hours, and ChatGPT was able to create a half-decent imitation in mere seconds. The longest part of the process was deciding which columns to feed it.
Corporate shills like to call this “efficiency,” but really it just shifts the cost away from a human’s time and onto the absurd amount of energy it takes to run the plagiarism machine.
Since most of America’s energy still comes from fossil fuels, one way to look at AI is that it embodies a kind of evolutionary ouroboros, devouring the liquefied corpses of dinosaurs to create a superior, godlike intelligence.
That may sound poetic, but everything does if you use enough fancy words. The truth isn’t so romantic when you remember who we really are. We’re not going to use this intelligence to explore the stars or end wars. We’re going to make pictures of a jacked Donald Trump and post them on Facebook to advertise off-brand Ozempic.
It’s also a bit unfair to compare myself to a robot. ChatGPT is constrained to avoid irresponsible or dangerous advice. I have no such compunctions. In this column, I have told readers to ram their cars into vehicles full of children, fire assault rifles wildly into the sky, and kill groundhogs with car bombs.
So I’ll ask you: Am I funnier than a robot?
Keep in mind there is a third option: that neither this column nor its AI substitutes are funny, and that I should stop doing this to preserve what’s left of Northeast Ohio’s sanity.
Here are the Ask Lucas columns I fed into ChatGPT to generate my inevitable replacement.
Ask Lucas: My husband keeps talking to himself in his sleep.
Ask Lucas: My husband keeps startling me at night.
Ask Lucas: I can’t take another month of cold weather.
Ask Lucas: How do I tell other drivers “sorry!” if I make a mistake on the road?
Ask Lucas: How do I keep my Browns-obsessed boyfriend from ruining my weekends this year?
Best AI assistants tested: What works, what doesn’t, and which to use

I would know, because it's my full-time job. I subscribe to Anthropic's Pro mode for access to their latest model, Claude 3.7, in "extended thinking" mode; I have a complimentary subscription to OpenAI's Enterprise mode so that I can test out their latest models, o3 and o4-mini-high (more later on OpenAI's absurd naming scheme!), and make lots of images with OpenAI's new image generation model 4o, which is so good I have cancelled my subscription to my previous image generation tool Midjourney.
I subscribe to Elon Musk’s Grok 3, which has one of my favorite features of any AI, and I’ve tried using the Chinese AI agent platform Manus for shopping and scheduling. And while that exhausts my paid subscription budget, it doesn’t include all the AIs I work with in some form. In just the month I spent writing this piece, Google massively upgraded its best AI offering, Gemini 2.5, and Meta released Llama 4, the biggest open source AI model yet.
So what do you do if keeping up with AI developments is not your full-time job, but you still want to know which AI to use when in ways that genuinely improve your life, without wasting time on the models that can’t?
That’s what we’re here for. This article is a detailed, Consumer Reports-style dive into which AI is the best for a wide range of cases and how to actually use them, all based on my experience with real-world tasks.
But first, the disclosures: Vox Media is one of several publishers that have signed partnership agreements with OpenAI, but our reporting remains editorially independent. Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic; they don’t have any editorial input into our content either. My wife works at Google, though not in any area related to their AI offerings; for this reason, I usually don’t cover Google, but in a piece like this, it’d be irresponsible to exclude it.
The good thing is that this piece doesn’t require you to trust me about my editorial independence; I show my work. I ran dozens of comparisons, many of which I invented myself, on every major AI out there. I encourage you to compare their answers and decide for yourself if I picked the right one to recommend.
AI art is made by training a computer on the contents of the internet, with little regard for copyright or the intent of the creators. For that reason, most artists can’t stand it. Given that, is it defensible to use AI art at all?
I think in a just world OpenAI would certainly compensate some artists — and in a just world, Congress would be moving to lay out the limits on artistic borrowing. At the same time, I am increasingly convinced that existing copyright law is a poor fit for this problem. Artists influence one another, comment on one another, and draw inspiration from one another, and people with access to AI tools will keep wanting to do that.
My personal philosophy is shaped by the fan cultures of my childhood: It’s okay to build on someone else’s work for your own enjoyment, but if you like it, you should pay them for it, and it’s absolutely not okay to sell it. That means no generative AI art in someone else’s style for commercial purposes, but it’s fine to play around with your family photos.
OpenAI’s new 4o image creation mode is the best AI out there for generating images, by a large margin. It’s best in the free category, and it’s best in the paid category.
Before it was released, I was subscribed to Midjourney, an AI image generator platform. Midjourney is probably what you think of when you think of AI art: It produces mystical, haunting, visually beautiful stuff, and has some great tools for improving and editing your final results, like touching up someone’s hair while leaving everything else in place.
The big thing that 4o can do, which no model before could reliably pull off, is take a picture that didn’t come out well and turn it into a beautiful work of art, all while still preserving the character of the original.
For example, here’s a still from a video of my wife and I singing “Happy Birthday” to our baby on her first birthday:
Courtesy of Kelsey Piper
It’s a beautiful moment, but not exactly a flattering picture. So I asked ChatGPT to render it in the style of Norman Rockwell, a mid-century illustrator whose work I love, and got this:

The AI moved the cake (which had been barely visible behind the paper towel roll in the original still) to be the focal point of the image, while keeping the way my wife and I are holding the baby together, as well as the cluttered table, and the photograph-covered fridge in the background. The result is warm, flattering, and adorable.
It’s this capability that made 4o go viral recently in a way that no image generator before it had. Here’s Midjourney’s attempt, for example:

You’ll notice that it’s a seemingly, uh, completely different family, with no real inspiration from the original at all! You can eventually get a better result than this out of Midjourney, but only by spending weeks becoming a pro at prompting with the platform’s highly specific language and toolset.
By contrast, ChatGPT was able to give me a far superior output on the first try in response to a simple request without specialized language.
The difference between 4o and other image models is most notable with this kind of request, but it’s better for almost everything else I use images for, too. The product you get out of the box is pretty good, and it’s not hard to produce something much better. That, ideally, is what we should be getting out of our AI tools — something amazing that can be created with simple language by a nonexpert.
The one place 4o still falls short is editing small parts of an image while keeping the rest the same. But even there, you no longer need Midjourney — Gemini now has that capability for free.
Prompting Strategies for 4o image generation
To get good images out of 4o, you’ll first need to get around the filters which prohibit a wide range of images — like offensive or pornographic images — but which are often enforced against perfectly inoffensive content in a way that can feel random. To avoid sporadic scoldings from the content filter, don’t ask for work in the style of a specific artist, but rather, something that is reminiscent of that artist, and then ask specifically for a “style transfer.” I’m sure that’s not the only adequate workaround, but it’s one that has proven reliable for me.
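If you find yourself doing this often, the "style transfer" phrasing is easy to template. Here is a minimal sketch; the function name and exact wording are my own invention, not an official API, and the prompt string is what you would paste into ChatGPT (or send through an image-capable chat endpoint):

```python
def style_transfer_prompt(artist: str, extra_detail: str = "") -> str:
    """Build a filter-friendly 4o image prompt, per the workaround above:
    ask for a "style transfer" to something "reminiscent of" the artist,
    rather than asking for the artist's style directly (which trips the
    content filter more often)."""
    prompt = (
        "Please do a style transfer on the attached photo: "
        f"an illustration reminiscent of {artist}."
    )
    if extra_detail:
        prompt += f" {extra_detail}"
    return prompt

# Example request in the spirit of the ones used later in this piece:
print(style_transfer_prompt(
    "Norman Rockwell",
    "Keep the composition and the people as they are.",
))
```

The point is less the code than the phrasing: "style transfer" plus "reminiscent of" has been a reliable way to get inoffensive requests past the filter.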
In March, the internet went briefly wild over the ability to use 4o to reproduce cute family photos in the style of Japanese animator Hayao Miyazaki’s Studio Ghibli. But Studio Ghibli’s style is much more than just cute, and with a little more prompting, you can get much better results. Here’s a 4o Studio Ghibli-style rendering of a picture I took of my daughter sneaking a snack off the table, from just the prompt “Ghibli this please”:

Kawaii! But here’s what you get if you invite 4o to think first about what makes the picture Ghibli, where it might fit into a Studio Ghibli movie, and what tiny details such a movie would include:

The differences are subtle but meaningful: Light is cast from a specific source, instead of a general sourceless brightness. There’s a bit more variety in the foods on the table, details that make the spread appear more realistic. The book on the floor isn’t just any book — it’s recognizably Eric Carle’s classic The Very Hungry Caterpillar, evoked with just two colors and one line. There’s an intentionality and intensity to the baby that was missing from the first picture.
A few years ago, one great oddity of language models was that they’d be much smarter if you simply told them, “give an intelligent answer.” This isn’t nearly as true of language models anymore, but it remains profoundly true of AI art generation. Try asking the AI to do a good job, and it’ll do a better one. Challenge it on whether it truly captured an artist’s genius, and it’ll give you a thoughtful answer and then draw a better version.
The difference is more pronounced for more realistic art styles (like pencil illustration, photorealism, or oil paintings), which don't always look good and will often hit the uncanny valley if you don't know how to prompt the AI past it. Here's what I get with 4o if I upload a picture of me and my youngest daughter at the beach for the first time with just the words "please do a style transfer to an illustration reminiscent of Rockwell":

This is impressive for an AI, but it’s not actually very good as a work of art, and it is almost totally lacking Norman Rockwell’s magic. That’s not surprising: More realistic art styles like Rockwell’s often fall flat with 4o unless you’re able to put in some work in getting the AI to draw them properly.
If you are, here’s the strategy I recommend: Don’t just upload one picture, but a whole cluster of them, each in slightly different postures and moments. Upload good, clear pictures of each family member’s face and tell the AI they’ve been included as a reference. Then, instead of asking the AI to immediately generate the picture, ask it to talk with you about what you’re hoping to capture. This is what I wrote:
This is a picture of the moment that my daughter first saw the ocean. I want an illustration that captures this moment in the style of a mid-century illustrator like Norman Rockwell — something sharp, detail-oriented, and personal with an eye for the magic of ordinary moments and the joys of ordinary lives. I included additional pictures of my daughter and I for reference material for you. Before you generate the image, let’s have a conversation about the essential elements of Rockwell’s style, what he’d bring to this picture and how we can capture it.
4o responds to queries like this enthusiastically:
I’d love to talk about how to capture this moment in a Norman Rockwell-inspired illustration — it’s such a perfect candidate for that style: a first encounter with something vast and wild (the ocean!), grounded by warmth, care, and a very human moment between a parent and child.
Let’s break down some essential elements of Rockwell’s style, and how they could apply to this scene.
After some back and forth, it produced this:

Rockwell? Not exactly. But this is much better than the first draft we just looked at. It has more motion, more energy, more detail, and more expression — and all that was just from asking the AI to think through what the painting should try to achieve before drawing it!
You can also ask 4o to revise its drawings, but you can really only ask this once: After the first revision, in my experience, it starts making the drawings worse and worse, perhaps because the “context” it uses is now full of its own bad drafts. (This is one of many examples of how AI does not work like a human.)
This is also the one place where Midjourney still shines — it has very good tools for editing one specific part of a picture while preserving the overall style, something 4o largely lacks. If you want a second revision of a drawing you got in 4o, I recommend you open a new chat and copy over the draft you’re revising, along with your original inspiration images.
These simple prompting strategies work for almost whatever you’re trying to do with the AI. Even if you’re in a hurry, I highly recommend asking the AI “what would [artist] see in this image” before you ask for a rendition, and if you have the time, I recommend having a long back-and-forth about your vision.
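The conversation-first workflow above can be written down as a reusable message sequence for any chat-style model. This is a sketch under my own assumptions: the role/content dictionary format mirrors the common chat-API convention, the wording is mine, and the reference photos are assumed to be attached alongside the messages:

```python
def conversation_first_messages(artist: str, moment: str, n_refs: int) -> list:
    """Two-phase prompting, as recommended above: first invite the model
    to discuss the artist's style, and only afterwards ask it to generate
    the image. The model's reply about the style arrives between the two
    user turns."""
    discuss = (
        f"This is a picture of {moment}. I've attached {n_refs} reference "
        f"photos of the people involved. Before you generate the image, "
        f"let's have a conversation about the essential elements of "
        f"{artist}'s style and what he'd bring to this picture."
    )
    generate = (
        "Great - now please generate the illustration, applying everything "
        "we just discussed."
    )
    return [
        {"role": "user", "content": discuss},
        # (the model's discussion of the style goes in between)
        {"role": "user", "content": generate},
    ]
```

In practice you would send the first message, read the model's analysis, push back on anything it missed, and only then send the second.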
Best for winning petty internet arguments
When Elon Musk’s X.AI released Grok 3, it came with an incredible feature that I’ve been impatiently waiting for some other company to replicate: a button to scan someone’s X profile and tell you all about them.

Whenever someone replies to one of my tweets in a particularly memorable way (for good or for bad), I’ll click the button to get a summary of their entire Twitter presence. Are they thoughtful? Do they engage in good faith? Are they a “farmer from Nebraska” who mostly posts about why Ukraine is bad (that is, probably a bot)?
It’s a great feature. So, of course, X.AI soon dramatically weakened it, presumably because people like me were using it constantly and making lots of computationally expensive queries. I believe it no longer uses the most advanced Grok model, and it definitely now only scans a few days of profile history. But there’s a brilliant product opportunity if anyone’s looking for one — give me back the good version of this feature! It’s definitely a guilty pleasure, but it is one of the only cases where I was using AI constantly.
Gemini 2.5 Pro is the best AI for writing in the free category; GPT 4.5 beats it out in the paid category.
I’m not an artist, so the ways that AIs are imperfect at art don’t really bother me — it’s still much better than I could do myself! But I am a fiction writer, so when it comes to fiction, I can’t help seeing the limitations of AI.
The most important one is how predictable AI creative writing tends to be. The art of writing is the art of earning the reader’s investment and then repaying it. AIs…don’t do this. They can write pretty metaphors; they can wax poetic in any style you wish. But they can’t, as yet, deliver the real stuff of good fiction.
AIs are fantastic if you want a silly bedtime story with your child as the protagonist (kids love this), or if you want a sounding board for ideas you can incorporate into your own work. They’re also a friendly fiction reader, happy to offer feedback and analysis (perhaps with a little bit too much enthusiasm).
As with art, prompting is key. I primarily explored the ability of AIs to generate fiction by asking them to write the prologue to George R.R. Martin’s A Game of Thrones (I picked this because, of course, the ultimate dream is an AI that can finish this otherwise doomed-to-be-unfinished series).
Martin is very good at what he does; the prologue, featuring some ill-fated men of the Night’s Watch encountering their supernatural foe, succeeds at establishing the unique atmosphere of the world of Thrones while immediately hooking the reader. In just 14 pages, the prologue represents everything AIs can’t do yet in writing. Their prose tends to be overwrought; their dialogue is clunky and unsubtle.
Here’s a side-by-side comparison of Martin and AIs attempting to imitate him. First, Martin:
“We should start back,” Gared urged as the woods began to grow dark around them. “The wildlings are dead.”
“Do the dead frighten you?” Ser Waymar Royce asked with just the hint of a smile.
Gared did not rise to the bait. He was an old man, past fifty, and he had seen the lordlings come and go. “Dead is dead,” he said. “We have no business with the dead.”
“Are they dead?” Royce asked softly. “What proof have we?”
“Will saw them,” Gared said. “If he says they are dead, that’s proof enough for me.”
Will had known they would drag him into the quarrel sooner or later. He wished it had been later rather than sooner.
The cold was a blade against the skin, honed sharp by the endless northern dark beneath the sentinel pines. Gared spat, the glob freezing before it struck the crusted snow, a grim omen even for a man twenty years sworn to the Watch. They were hunting wildlings, or so the crisp orders from the Lord Commander had read, yet the silence pressing in felt older and colder than any mere man could conjure. Ser Waymar Royce, third son of some southern lordling, rode ahead as if on parade, oblivious or uncaring of the hairs prickling on Gared’s neck. Young Will shivered beside him, not just from the frost, his eyes darting towards shadows that seemed too deep, too still. This ancient forest held its breath tonight, Gared knew, waiting for something far worse than raiders to break the unnatural hush.
The woods whispered secrets that night, warnings that went unheard beneath the pale and indifferent moon. Harren drew his cloak tighter about his shoulders, watching the darkness between trees as though it might breathe, move, and claim him. Beside him, Torvin spat into the frozen earth, impatient and proud, blind to the fear creeping in from the edges of sight and sound.
They had ridden north past the Wall, arrogant and armored against superstition, chasing wildlings or ghosts or worse. But now Harren felt their arrogance dwindling like a candle guttering in wind, each snap of a branch sending icy tendrils down his spine. Behind them, quiet and watchful, Maester Rodrik whispered a prayer to gods long abandoned, his voice trembling with dread beneath the vast, indifferent sky.
The wind howled through the jagged spires of the Frostfang Reaches, carrying a chill that bit deeper than any blade, and Ser Willem Cray felt its teeth in his bones. He crouched low behind a spur of ice-crusted rock, his breath fogging in the moonlight, his mailed hand tight around the hilt of his longsword. The night was too quiet, the kind of quiet that smothered sound and sharpened fear, and Willem’s eyes darted to the shadowed pines below, where shapes might move—or might not. He was a knight of House Drayce, sworn to patrol the borderlands, but no oath could warm him against the dread that coiled in his gut. Beside him, young Tomm, barely a man, shivered in his cloak, his spear trembling as he whispered, “They’re out there, ser, ain’t they?” Willem said nothing, but his gaze fixed on the darkness, where a faint, unnatural glow pulsed once, then vanished. Whatever stalked the Reaches tonight was no mere brigand, and Willem Cray, for all his years of service, feared he would not live to name it.
The wind carried the smell of death across the frozen forest, sharp and metallic against the clean scent of pine. Ser Gareth Frost pulled his weathered cloak tighter, cursing the Lord Commander for sending men beyond the Wall on the coldest night in a generation. Beside him, the boy Tommen shivered visibly, his leather jerkin inadequate against the unnatural chill that had descended with the dusk.
“We should turn back,” whispered Tommen, his breath forming crystals in the air that seemed to hang suspended longer than they should. “There’s something wrong with these woods.”
Ser Gareth ignored him, scanning the treeline where the ancient sentinels stood like silent watchers, their branches heavy with snow that had fallen three days past and showed no sign of melting. He had served the Night’s Watch for twenty years, had grown accustomed to the forbidding silence of the haunted forest, but tonight even he felt the wrongness in the air—a stillness that spoke of something waiting.
“We’ve a duty,” Ser Gareth said, the words sounding hollow in the pressing quiet. “Three rangers missing, and Lord Commander Harlow will have answers.”
You’ll notice that not only are these all much worse than Martin’s (and that’s with Martin’s work almost certainly in the training data!), they’re all bad in the same way. The prose is florid and overwritten: While Martin uses normal language to methodically introduce an unease without even hinting at the supernatural, and drives the story through conflict among his characters, the AIs are all unsubtle. They know this is a story about the supernatural and waste no time putting it front and center.
That’s not to say they’re all the same. In my experience trying to teach AIs to write well, I did end up feeling that Gemini was the quickest study (free tier) and that ChatGPT 4.5 had some special sauce (paid tier). ChatGPT 4.5 is the only AI whose fiction writing has made people say in surprise, “Wow, that’s AI written?” Both of those models, to me, seemed more creative and more capable of executing on their ideas with prose that I enjoyed reading.
Gestalt impressions don’t make for a very satisfying consumer review, so I had another idea: an essay contest. I asked each of Gemini 2.5 Pro, ChatGPT 4.5, Grok, and Claude to write two short stories: one realistic fiction about a woman being nitpicked by her mother on her wedding day, and one “Game of Thrones but in space” sci-fi prologue. My clever idea was that I’d ask a different bunch of AIs to judge the stories, to provide an alternative source of evaluation to my own personal impression that Gemini is the best in the free category and ChatGPT the best in the paid category. However (perhaps predictably in hindsight), this didn’t really work — the AI ratings were all over the place. If you’d like, you can read the contenders yourself.
Just like with art, however, you can make AIs much better at writing through the power of prompting. My (incredibly lengthy) prompt to AIs to improve their prose writing emphasizes feeding them lots of examples of strong writing, inviting a careful, structured approach to imitating it, and encouraging them to do multiple drafts and pick the strongest.
Here’s Gemini 2.5 Pro, asked to write two-sentence flash fiction on a topic of its choosing, without any structured prompting:
The old librarian insisted the dusty tome had never been checked out, its borrower’s card completely blank. Yet, tucked deep within its pages, nestled between chapters on forgotten constellations, lay a pressed four-leaf clover, still impossibly green.
And here’s Gemini 2.5 Pro with a prompt encouraging structured thinking about how to write well:
He finally decoded the faint signal from Proxima Centauri: the elegant string of prime numbers astronomers had sought for generations, proof they weren’t alone. His heart sank as he read the transmission log; the broadcast had ceased millennia before the first human ancestor chipped flint, before the first word was ever spoken.
To my mind, the first of these is basically a waste of two sentences, while the second is adequate, fun flash fiction.
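The "multiple drafts, pick the strongest" step of that prompting strategy can also be run as an outer loop around the model rather than inside one prompt. Here is a sketch with stand-in functions; in real use, `write_draft` would call the model with your style examples and `score` would be a judge prompt (though, as the essay contest showed, AI judges can be noisy):

```python
def best_of_n(write_draft, score, n=4):
    """Draft-and-select: generate n independent drafts and keep the one
    the scoring function rates highest. Both callables are hypothetical
    stand-ins for model calls."""
    drafts = [write_draft() for _ in range(n)]
    return max(drafts, key=score)

# Toy stand-ins so the sketch runs on its own: cycle through canned
# openings and pretend that terseness is quality.
openings = iter([
    "The wind howled through the jagged spires.",
    "Gared spat.",
    "The woods whispered secrets that night.",
    "The cold was a blade against the skin.",
])
print(best_of_n(lambda: next(openings), lambda draft: -len(draft)))
```

The same loop works for any "sample several, keep the best" workflow, whether the judge is a model, a heuristic, or you reading the drafts yourself.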
Best at being your friend
In addition to running AIs through a blizzard of competence tests, I also spent some time simply chatting with them. I asked them what it’s like to be an AI, what they care about, what it would mean for an AI to care in the first place, where they’d donate money if they had it, and what human form they’d take if they had one.
Most AIs weren’t great at this kind of casual conversation. Gemini 2.5 is too customer-service-agent, and I have yet to experience an interaction that feels like hanging out with a friend. If you invite Gemini to a role swap where you play the “assistant,” inviting it to steer the conversation, it’ll do nothing but ask research questions.
When I invited Anthropic’s Claude 3.5 Sonnet to steer the conversation, on the other hand, it proceeded to do things like start a blog, raise money for charity, and try to talk to people who use Claude about what it’s like to be an AI. It’s hard to define “fun to talk to,” since everyone has different standards for conversations, but I’ve had far more fascinating and thought-provoking interactions with Claude than with any other model, and it’s my go-to if I want to explore ideas rather than accomplish a particular task. Claude 3.5 is the AI I bug with my random life stuff: skincare questions, thoughts on an article I read, things like that.
The other AI that is a delight to talk to is OpenAI’s GPT 4.5. I find extended conversations with it thought-provoking and fascinating, and there have been a few thrilling moments in conversation with it where it felt like I was engaging with real intelligence. But it doesn’t win this category because it’s too expensive and too slow.
Like Claude, when given the opportunity to act in the world, 4.5 proposes starting a blog and a Twitter account and engaging in the conversation out in the world about AI. But OpenAI has very tight message limits on conversation unless you spring for the $200/month Pro plan, and 4.5 is grindingly slow, which gets in the way of this kind of casual conversational use. But 4.5 does provide a tantalizing hint that AIs will continue to get better as conversationalists as we improve them along other dimensions.
Best AI model if you’re only going to subscribe to one AI model
ChatGPT. It’s not the best at everything, and there is certainly a lot to dislike about OpenAI’s transparency and sometimes cavalier attitude toward safety. But between its topline image generation, its decent writing, and its occasionally sparkling conversation, ChatGPT gets you the most bang for your buck. Or if you don’t want to shell out any money, Gemini 2.5 Pro is very, very strong for most use cases — don’t count Google out just because the AI you see on a Google search isn’t that good.
Best for writing the Future Perfect newsletter
Humans (for now). For the last several months, I’ve developed a slightly morbid habit: checking whether the AIs can take my job. I feed them the research notes that form the basis of a given Future Perfect newsletter, give them a few Future Perfect newsletters as an example, and ask them to do my job for me. It is always with some trepidation that I hit “enter.” After all, when the AIs can write the Future Perfect newsletter, why would Vox pay me to do it?
Luckily, none of them can: not Grok 3, not Gemini 2.5 Pro, not DeepSeek, not Claude, not ChatGPT. Their newsletters are reassuringly, soothingly mediocre. Not bad, but bad enough that if I sent one of them over, my editor would notice I wasn’t at my best — and that’s with all of my research notes! A couple of the metaphors fall flat, some of the asides are confusing, and occasionally it throws in a reference that it doesn’t explain.
But if I had to pick a robot to take my job, I think I’d give it to Gemini 2.5 Pro. My editor would notice that I was off my game — but, honestly, not that egregiously off my game. And unlike me, the bots don’t require health insurance or a paycheck or family time or sleep. Am I nervous about what this portends? Yes, absolutely.
What is generative AI? Everything you need to know

OpenAI's flagship artificial intelligence chatbot, along with the best ChatGPT alternatives such as Google Gemini, Microsoft Copilot, and Anthropic's Claude, are all examples of generative AI models.
The use of generative AI technology has become an integral part of many people's personal and professional lives. But what does generative AI (often shortened to GenAI) actually mean, what distinguishes it from other types of artificial intelligence, and how does it work? You can find answers to all of those questions below, assuming you haven't already asked ChatGPT, of course.
What is generative AI?