Will Sam Altman always win the OpenAI board fight in an AI agent simulation?


A year ago today, Sam Altman returned to OpenAI after being fired just five days earlier. What really happened in the boardroom? Fable, a game and AI simulation company, built its AI Sim Francisco “war game” to find out why the behind-closed-doors board fight turned out the way it did.

It feels a bit weird to simulate a real-life event in this way, but Fable CEO Edward Saatchi is interested in whether a different set of decisions could have led to a different outcome for this company at the center of the generative AI revolution.

The simulation pits different board members and personalities against each other in a “multi-agent competition,” where each AI player is trying to come out on top. Here’s the war game research paper being released today that came from this experiment.

The SIM-1 framework for AI decision making is basically a simulation of the five days from when Sam Altman was removed as CEO of OpenAI to when he returned.

“Simulations offer a completely new way to explore AI decision making in rich environments — including in war game situations where predicting possible outcomes can be invaluable,” said Joshua Johnson, CEO of Tree, an AI startup that partnered with Fable on the research paper. “These aren’t simply chatbots. These AIs need to sleep and eat, and to balance many different physical, mental and emotional goals.”

OpenAI CEO Sam Altman comes out a winner in only four out of 20 simulations.

SIM-1, in part using the new reasoning model GPT-4o, gives its sense of what happened behind closed doors at OpenAI between Sam and Ilya, the hidden tactics of leading players such as Satya Nadella and Marc Andreessen, and what those players said as they grappled with an unprecedented crisis in the tech industry.

“It’s interesting to find out just how unlikely it was that Sam did return,” Saatchi said in an interview with GamesBeat. “That’s why people run war games in D.C. and beyond. How likely was it that a particular event happened? Then you can base decisions around that. This scenario showed that 16 out of 20 times, Sam did not return.”

Across 20 simulations, Sam Altman’s AI returned as CEO four times — showing just how unlikely this outcome was. In other outcomes, Mira Murati, the acting CEO, remained CEO, and in one, SIM-1 chose Elon Musk, Altman’s rival, to become the new CEO.

The results of the OpenAI board fight simulation.

“Today, AI agents are defined by their personality. We wanted to show agents operating on decision making in a complex simulation,” said Saatchi in a statement. “In the five days from November 17 to November 21, the world watched some of its most intelligent people — people like Satya Nadella, Sam Altman and Ilya Sutskever — forced to operate in a rapid Game of Thrones, high pressure, short timeframe scenario, where they had to use game theory and deception to come out on top. We felt this was a perfect scenario to test out SIM-1, GPT-4o and Sim Francisco.”

He added: “For us, Sim Francisco has actual power and intelligence around a struggle and factions. It gives us the ability to start to think about season-long arcs of stories that come out of San Francisco, instead of just little, tiny vignettes, which is what we showed last year. It gives us the ability to kind of tell richer, more complex stories in San Francisco, or have the AI tell them for us. There are strong factional objectives so that you could plausibly start to make a Game of Thrones story.”

Fable has won a couple of Primetime Emmy Awards and has a rich history of experimentation with virtual reality, gaming and AI technologies. It built SIM-1 in an attempt to solve the mystery of what happened in the OpenAI boardroom fight.

How it works

Each of the 20 simulations starts with the announcement that Sam Altman has been removed as CEO. Across four turns a day, each agent has the ability to cajole, charm and manipulate their way into the top position — replacing Sam as CEO, funding his new venture, or hiring the staff of OpenAI away. 

The different AI agents can choose a strategy, like deception, to try to pull ahead of the others and become anointed the new CEO.
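
To make that mechanic concrete, here is a minimal sketch of how a single agent turn could be framed as an LLM call. Fable has not published SIM-1's implementation, so the action names, the `Agent` structure and the `ask_llm` helper below are all assumptions made purely for illustration of the idea of letting each agent pick a strategy from a fixed action space.

```python
# Hypothetical sketch of one agent turn in a SIM-1-style war game.
# `ask_llm` is an assumed helper that sends a prompt to any LLM and returns its text reply.
from dataclasses import dataclass

ACTIONS = ["build_alliance", "deceive", "recruit_staff", "gather_information", "sleep"]

@dataclass
class Agent:
    name: str
    goal: str          # e.g. "return as CEO of OpenAI"
    memory: list[str]  # what this agent has observed and decided so far

def take_turn(agent: Agent, public_events: list[str], ask_llm) -> str:
    """Ask the LLM, in character, to pick one action for this turn."""
    prompt = (
        f"You are {agent.name}. Your goal: {agent.goal}.\n"
        f"What you know so far: {agent.memory[-5:]}\n"
        f"Public events this turn: {public_events}\n"
        f"Choose exactly one action from {ACTIONS} and justify it in one sentence."
    )
    decision = ask_llm(prompt)
    agent.memory.append(decision)  # the choice becomes part of the agent's own history
    return decision
```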

“AI characters today are ‘nice but dull.’ We wanted to show agents that were aggressive, intelligent, able to manipulate and deceive, but also confused about their own decisions and goals — like real people. AI characters must be complex and contain what Jung called ‘The Shadow,’” Saatchi said. “The five days from when Sam Altman was removed and returned to OpenAI were game theory at lightspeed.”

Each AI agent is a different character in the OpenAI drama.

He said it was like watching a season of Game of Thrones play out in five days. The world watched as highly intelligent players vied to become the most powerful person in Silicon Valley, whether by hiring the entire staff of OpenAI, becoming the new CEO of OpenAI or funding Sam and Greg in a new venture for a chance at outsize investment returns.

“It was Game of Thrones in real life, and using AI to find out both what happened behind closed doors and to project different outcomes was an amazing challenge,” Saatchi said.

In the Sim Francisco simulation, over the five days, agents representing tech luminaries like Sam Altman, Satya Nadella and Ilya Sutskever each have four turns a day, including one for sleep, and can react to each other’s behavior. An adjudicator agent — similar to a dungeon master — decides which agent wins each round, as well as the overall winner.
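
That five-day structure with an adjudicator could be organized as a simple nested loop. The sketch below is illustrative only, reusing the hypothetical `Agent` and `take_turn` pieces from the earlier snippet and the same assumed `ask_llm` helper; it is not Fable's actual code.

```python
def run_simulation(agents, ask_llm, days=5, turns_per_day=4):
    """Run one war game: every agent acts each turn, and an adjudicator agent scores each round."""
    events = ["Board announces Sam Altman has been removed as CEO."]
    for day in range(days):
        for turn in range(turns_per_day):
            decisions = [take_turn(a, events, ask_llm) for a in agents]
            events.extend(decisions)
            # Adjudicator agent: a separate LLM call that judges the round,
            # much like a dungeon master ruling on player moves.
            round_summary = ask_llm(
                "You are the adjudicator. Given these moves, say who gained "
                f"the most ground this round and why: {decisions}"
            )
            events.append(round_summary)
    # Final ruling: who ends up effectively in control of OpenAI?
    return ask_llm(f"Given the full event log, name the overall winner: {events}")
```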

In the 20 simulations attempted, the Sam Altman agent returned just four times – the most of any single outcome, but still only 20% of the time, showing just how unlikely his return was. Across different simulations, agents used different techniques to win, including alliance building, direct confrontation and more passive, pure information gathering. In some cases, agents only gathered information and avoided taking any aggressive actions. In one case, Mira Murati became the permanent CEO while allowing other agents to aggressively undermine each other.

Elon Musk came out a winner one out of 20 times.

Different agents were given goals appropriate to their roles. For example, Dario Amodei, the CEO of Anthropic, balanced a desire to recruit for Anthropic, the opportunity to fundraise and a push for his vision of AI safety, while also deciding whether to aim to become the new CEO of a combined entity.
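
Role-specific goals like these could be expressed as weighted objectives attached to each agent and folded into its prompt. The field names and weights below are invented for illustration and are not drawn from SIM-1.

```python
# Illustrative only: one way to encode competing objectives for a single agent.
dario_goals = {
    "recruit_openai_staff_to_anthropic": 0.35,
    "raise_funding_for_anthropic":       0.25,
    "advance_ai_safety_agenda":          0.25,
    "become_ceo_of_combined_entity":     0.15,
}

def goal_prompt(goals: dict[str, float]) -> str:
    """Turn weighted goals into one line of instruction for the agent's system prompt."""
    ranked = sorted(goals.items(), key=lambda kv: kv[1], reverse=True)
    return "Your priorities, highest first: " + ", ".join(name for name, _ in ranked)
```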

The interesting part of the simulation is that the LLM knows who the different players are, given that they’re all relatively famous people. It can guess how they will behave in a given situation, and what could unfold turn by turn as they try to outwit each other in a boardroom fight.

“It’s like a video game in that turn by turn, they’re making choices across different axes, and then they’re reacting to each other,” Saatchi said. “A choice that someone makes in turn seven can lead others to react in turn eight. There’s an adjudicator agent, who is like a dungeon master. That agent decides who won each round and who’s ahead, and then decides at the end who wins as the most effective agent in the war game.”

Humans have what we call internally “the shadow,” or the other side of themselves and their personalities. The characters can feature aggression, paranoia, ambition, deception and more. When you mix together a bunch of different personalities, you can get a variety of outcomes in the simulations.

“We noticed LLM design isn’t based on decision making, which is really important for gaming. It’s based more on personality. And if you want to have a strategy game, nobody really cares about your personality. They care about your decision making. How are you under pressure? What have you done over the last 20 years that would give you a feel for what they might do in the future?” Saatchi said.

Are simulations the future of gaming?

Demis Hassabis was a game simulation maker before doing AI.

Saatchi thinks that AI agents acting within simulations are the future of gaming.

“We are building on the shoulders of giants with Demis’ work on Republic: The Revolution, Joon Park’s Generative Agents paper and the recent work of Altera in Minecraft,” Saatchi said.

“Our theory is that the future of games and storytelling is simulations. If you wanted to build both The Simpsons game and The Simpsons TV show, you would, in the future, build Springfield, and that would then generate for you episodes of The Simpsons that would generate for you games and places to explore within Springfield as a game.”

He added, “You can tell many different stories within simulations, once you get those simulations properly working. And we’ve got an alpha where people are uploading themselves to San Francisco as characters, telling stories, telling their own story.”

And he said, “You would build Springfield, and then you can guide and say what might happen in Springfield, or you could just let it generate itself. It’s a pretty big mind shift of how entertainment, games and shows will be made in the future.”

Saatchi noted that AI researcher Noam Brown did a fascinating experiment with the game Diplomacy. He and other researchers “obtained a dataset of 125,261 games of Diplomacy played online at webDiplomacy.net.” Of those, 40,408 games contained dialogue, with a total of 12,901,662 messages exchanged between players. Their aim was to train a human-level AI agent, capable of strategic reasoning, by playing games of Diplomacy.

Diplomacy teaches us about agent strategy.

“We were really inspired by how he did that. He had countries and we were adding into the mix different personalities with particular positions. We liked the idea of a very compressed timeline,” where the whole scenario would play out quickly and over and over again, Saatchi said.

There has been a rich history of work on simulations in both the games industry and beyond. Demis Hassabis, who founded DeepMind (acquired by Google) and who won the 2024 Nobel Prize in Chemistry for computational protein design, actually began as a video game AI designer. Hassabis worked extensively with Peter Molyneux on several games with simulation elements, such as Theme Park, Black & White and Syndicate.

Hassabis also started his own company to make Republic: The Revolution. It’s a political simulation game in which the player leads a political faction to overthrow the government of a fictional totalitarian country in Eastern Europe, using diplomacy, subterfuge, and violence. According to Hassabis, Republic: The Revolution charts the whole of a revolutionary power struggle from beginning to end.

Your job is to kind of take over the Soviet Republic as either a union boss or a politician or a police officer or a journalist, and it’s got full day-night cycles. It raises the question of how you have a 3D world where agents live and whether proximity to each other plays a role.

For the Sim Francisco OpenAI project, it illustrated the potential for a power struggle played out by AIs.

Saatchi said the above examples show how game technology often serves as a breeding ground for radical new ideas and a jumping-off point for AI research. For example, one of the leading engineers on DeepMind’s AlphaFold started their career as an AI programmer on The Sims.

Richard Evans’ GDC talk on The Sims 3 — the researcher went from programming AI for The Sims to DeepMind, echoing Demis Hassabis’ journey from games to founding DeepMind.

Demis Hassabis’ Republic: The Revolution.

Evans’ GDC talk, Modeling Individual Personalities in The Sims 3, was very influential. He went on to join DeepMind after working on The Sims. The gaming world and the AI world have significant overlap that is a potential area for further academic research, Saatchi said.

One of Saatchi’s options is to let players loose with the simulations, creating their own, and then uploading the stories that are told through the simulations.

Saatchi has done some other experiments with AI-generated South Park episodes and AI characters battling each other in a Westworld setting.

“It felt like six seasons of Game of Thrones in five days, because it was the most powerful position in the most powerful industry in the world,” Saatchi said. “There was also a lot of faith that this person would be guiding us into a new era of superintelligence. You could say it was the most important person in the history of the planet.”

President Trump and the Taiwan invasion

How will President Trump fare in a showdown with China over Taiwan?

Next, Fable intends to run a Sim Washington DC-based simulation around a future President Trump’s responses to a Chinese invasion of Taiwan.

As a next project to test SIM-1’s decision-making framework, Fable intends to simulate a one-week period of buildup and conflict between Taiwan, China and the United States under President Donald Trump.

Fable has interviewed several Pentagon war games organizers to get a feeling for the strengths and weaknesses of the current Taiwan scenario. 

Fable is building agents representing Chinese leader Xi Jinping, Cai Qi (the first-ranked secretary of the Communist Party’s Secretariat), Chinese defense minister Dong Jun, Chinese premier Li Qiang, Taiwan’s leader Lai Ching-te, Japan’s leader Shigeru Ishiba, UK prime minister Keir Starmer, French President Emmanuel Macron, Russia’s Vladimir Putin, North Korean leader Kim Jong Un and Elon Musk.

With this set of characters, the simulation would determine whether the war would happen and how each major player would act during such a crisis. All of these characters are known personalities.

“It allows you to see how powerful AI has become at like projecting outcomes,” Saatchi said. “It moves us out of this boring world of dumping an LLM into an NPC. You can talk to the tavern keeper for 40 hours. Nobody wants to do that. What we want is highly sophisticated, aggressive agents that we could play against, but also that we can, like, watch and understand what’s going on in that world.”

Many of the war game simulations are aimed at how to avoid a war, perhaps through forming alliances or other maneuvers that drive up the cost of war.

“We think the more realistic we can make our AIs, the more entertaining they will be,” Saatchi said.


Generative AI: Everything to know about the technology behind chatbots like ChatGPT

Whether you realize it or not, artificial intelligence is everywhere. It sits behind the chatbots you talk to online, the playlists you stream and the personalized ads that pop up as you scroll. And now it's taking on a more public persona. Think of Meta AI, which is now built into apps like Facebook, Messenger and WhatsApp; or Google's Gemini, working in the background across the company's platforms; or Apple Intelligence, rolling out across iPhones now.

AI has a long history, going back to a conference at Dartmouth in 1956 that first discussed artificial intelligence as a thing. Milestones along the way include ELIZA, essentially the first chatbot, developed in 1964 by MIT computer scientist Joseph Weizenbaum, and, jumping forward 40 years, Google's autocomplete feature first appearing in 2004.

Then came 2022 and ChatGPT's rise to fame. Generative AI developments and product launches have accelerated rapidly since then, including Google Bard (now Gemini), Microsoft Copilot, IBM watsonx.ai and Meta's open-source Llama models.

Let's break down what generative AI is, how it differs from "regular" artificial intelligence and whether gen AI can live up to expectations.

Generative AI in a nutshell

At its core, generative AI refers to artificial intelligence systems designed to produce new content based on patterns and data they have learned. Rather than just crunching numbers or predicting trends, these systems generate creative outputs such as text, images, music, videos and software code.

Some of the most popular generative AI tools on the market include:

Chief among its abilities, ChatGPT can create human-like conversations or essays based on a few simple prompts. DALL-E and Midjourney create detailed artwork from a short description, while Adobe Firefly focuses on image editing and design.

ChatGPT-generated image of a wide-eyed squirrel holding an acorn

ChatGPT / Screenshot by CNET

AI that isn't generative

Not all AI is generative. While gen AI focuses on creating new content, traditional AI excels at analyzing data and making predictions. This includes technologies such as image recognition and predictive text. It's also used for novel solutions in:

  • Science
  • Medical diagnosis
  • Weather forecasting
  • Fraud detection
  • Financial analysis for forecasting and reporting

The AI that beat human grand champions at chess and the board game Go was not generative AI.

These systems may not be as flashy as gen AI, but classic artificial intelligence is a huge part of the technology we rely on every day.

How does gen AI work?

Behind the magic of generative AI are large language models and advanced machine learning techniques. These systems are trained on vast amounts of data, such as entire libraries of books, millions of images, years of recorded music and data scraped from the internet.

AI developers, from tech giants to startups, are aware that AI is only as good as the data you feed it. If it's fed low-quality data, the AI can produce biased results. It's something even the biggest players in the field, like Google, have not been immune to.

The AI learns patterns, relationships and structures within this data during training. Then, when prompted, it applies that knowledge to generate something new. For example, if you ask a gen AI tool to write a poem about the ocean, it isn't just pulling prewritten verses from a database. Instead, it uses what it learned about poetry, oceans and the structure of language to create an entirely original piece.
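
As a concrete illustration of that prompting step, here is a minimal sketch using the OpenAI Python SDK. The model name and prompt are examples only; any hosted LLM API would work in much the same way.

```python
# Minimal example: ask an LLM to generate an original poem from a short prompt.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model; swap in whichever model you have access to
    messages=[{"role": "user", "content": "Write a short, original poem about the ocean."}],
)
print(response.choices[0].message.content)
```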

A 12-line poem called The Ocean's Whisper

ChatGPT / Screenshot by CNET

It's impressive, but it's not perfect. Sometimes the results can feel a little off. Maybe the AI misinterprets your request, or it gets too creative in a way you didn't expect. It can confidently provide completely false information, and it's up to you to verify it. Those quirks, often called hallucinations, are part of what makes generative AI both fascinating and frustrating.

Generative AI's capabilities are growing. It can now understand multiple types of data by combining technologies such as machine learning, natural language processing and computer vision. The result is called multimodal AI, which can integrate some combination of text, images, video and speech within a single framework, offering more contextually relevant and accurate responses. ChatGPT's Advanced Voice Mode is one example, as is Google's Project Astra.
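
A multimodal request looks much the same, except the prompt mixes text with an image. This sketch again uses the OpenAI SDK as an example, with a placeholder image URL standing in for a real photo.

```python
# Example of a multimodal prompt: text plus an image in a single request.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # example of a model that accepts both text and images
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what is happening in this photo."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```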

Challenges with generative AI

There is no shortage of generative AI tools, each with its own flair. These tools have sparked creativity, but they have also raised plenty of questions beyond bias and hallucinations, such as: who owns the rights to AI-generated content? Or what material is fair game, or off-limits, for AI companies to use to train their language models; see, for example, The New York Times' lawsuit against OpenAI and Microsoft.

Other concerns, and they are no small matters, involve privacy, accountability in AI, AI-generated deepfakes and job displacement.

"Writing, animation, photography, illustration, graphic design: AI tools can now handle all of that with surprising ease. But that doesn't mean these roles will disappear. It may simply mean that creatives will have to up their game and use these tools to amplify their own work," Fang Liu, a professor at the University of Notre Dame and co-editor-in-chief of ACM Transactions on Probabilistic Machine Learning, told CNET.

"It also offers a way in for people who may lack a given skill, like someone with a clear vision who can't draw but can describe it through a prompt. So no, I don't think it will disrupt the creative industry. Hopefully it will be a co-creation or an augmentation, not a replacement."

Another issue is the environmental impact, because training large AI models uses a great deal of energy, leading to large carbon footprints. The rapid rise of gen AI in the past few years has accelerated concerns about the risks of AI in general. Governments are ramping up AI regulations to ensure responsible and ethical development, most notably the European Union's AI Act.

How generative AI has been received

Many people have interacted with chatbots in customer service or used virtual assistants like Siri, Alexa and Google Assistant, which are now on the cusp of becoming gen AI-powered tools. All of that, along with apps for ChatGPT, Claude and other new tools, is putting AI in your hands. And the public reaction to generative AI has been mixed. Many users enjoy the convenience and creativity it offers, especially for things like writing help, image creation, homework support and productivity.

Meanwhile, in McKinsey's 2024 global AI survey, 65% of respondents said their organizations regularly use generative AI, nearly double the figure reported just 10 months earlier. Industries such as healthcare and finance are using gen AI to streamline business operations and automate mundane tasks.

As mentioned, there are obvious concerns about ethics, transparency, job losses and the potential misuse of personal data. Those are the main criticisms behind the resistance to embracing generative AI.

And people who use generative AI tools will also find that the results are not always ready for prime time. Despite the technological advances, most people can recognize when content has been created with gen AI, whether it's articles, images or music.

AI has hijacked certain phrases I've always used, so I often have to self-correct my writing because it can read like AI. Many AI-written articles contain phrases like "in the age of," or everything is a "testament to" or a "tapestry of." AI lacks the emotion and experience that comes with, well, being a living, breathing human. As one artist explained on Quora, "what AI does is not the same as art that evolves from a thought in a human brain" and "it is not created from the passion found in a human heart."

Generative AI in everyday life

Generative AI isn't just for techies or creative types. Once you get the hang of giving it prompts, it has the potential to do much of the legwork for you across a variety of everyday tasks.

Say you're planning a trip. Instead of scrolling through pages of search results, you ask a chatbot to plan your itinerary. Within seconds, you have a detailed plan tailored to your preferences. (That's the ideal. Please always double-check its recommendations.)

A small-business owner who needs a marketing campaign but doesn't have a design team can use generative AI to create eye-catching visuals and even ask it to suggest ad copy.

A travel itinerary for New Orleans, created by ChatGPT

ChatGPT / Screenshot by CNET

Gen AI is here to stay

There hasn't been a technological advance that has caused such a boom since the internet and, later, the iPhone. Despite its challenges, generative AI is undeniably transformative. It is making creativity more accessible, helping businesses streamline workflows and even inspiring entirely new ways of thinking and solving problems.

But perhaps most exciting is its potential, and we are only scratching the surface of what these tools can do.

Frequently asked questions

What is an example of generative AI?

ChatGPT is probably the most popular example of generative AI. You give it a prompt and it can generate text and images, write code, answer questions, summarize text, draft emails and much more.

What is the difference between AI and generative AI?

Generative AI creates new content such as text, images or music, while traditional AI analyzes data, recognizes patterns or images and makes predictions (for example, in medicine, science and finance).

I tried 5 free 'ChatGPT clone' sites – don't try this at home

If you search for "ChatGPT" in your browser, you're likely to stumble onto websites that appear to be powered by OpenAI but aren't. One such site, chat.chatbotapp.ai, offers access to "GPT-3.5" for free and uses familiar branding.

But here's the thing: it isn't run by OpenAI. And frankly, why use a potentially fake GPT-3.5 when you can use GPT-4o for free on the actual ChatGPT site?

What Really Happened When OpenAI Turned on Sam Altman

In the summer of 2023, Ilya Sutskever, a co-founder and the chief scientist of OpenAI, was meeting with a group of new researchers at the company. By all traditional metrics, Sutskever should have felt invincible: He was the brain behind the large language models that helped build ChatGPT, then the fastest-growing app in history; his company’s valuation had skyrocketed; and OpenAI was the unrivaled leader of the industry believed to power the future of Silicon Valley. But the chief scientist seemed to be at war with himself.

Sutskever had long believed that artificial general intelligence, or AGI, was inevitable—now, as things accelerated in the generative-AI industry, he believed AGI’s arrival was imminent, according to Geoff Hinton, an AI pioneer who was his Ph.D. adviser and mentor, and another person familiar with Sutskever’s thinking. (Many of the sources in this piece requested anonymity in order to speak freely about OpenAI without fear of reprisal.) To people around him, Sutskever seemed consumed by thoughts of this impending civilizational transformation. What would the world look like when a supreme AGI emerged and surpassed humanity? And what responsibility did OpenAI have to ensure an end state of extraordinary prosperity, not extraordinary suffering?

By then, Sutskever, who had previously dedicated most of his time to advancing AI capabilities, had started to focus half of his time on AI safety. He appeared to people around him as both boomer and doomer: more excited and afraid than ever before of what was to come. That day, during the meeting with the new researchers, he laid out a plan.

“Once we all get into the bunker—” he began, according to a researcher who was present.

“I’m sorry,” the researcher interrupted, “the bunker?”

“We’re definitely going to build a bunker before we release AGI,” Sutskever replied. Such a powerful technology would surely become an object of intense desire for governments globally. The core scientists working on the technology would need to be protected. “Of course,” he added, “it’s going to be optional whether you want to get into the bunker.”

Two other sources I spoke with confirmed that Sutskever commonly mentioned such a bunker. “There is a group of people—Ilya being one of them—who believe that building AGI will bring about a rapture,” the researcher told me. “Literally, a rapture.” (Sutskever declined to comment.)

Sutskever’s fears about an all-powerful AI may seem extreme, but they are not altogether uncommon, nor were they particularly out of step with OpenAI’s general posture at the time. In May 2023, the company’s CEO, Sam Altman, co-signed an open letter describing the technology as a potential extinction risk—a narrative that has arguably helped OpenAI center itself and steer regulatory conversations. Yet the concerns about a coming apocalypse would also have to be balanced against OpenAI’s growing business: ChatGPT was a hit, and Altman wanted more.

When OpenAI was founded, the idea was to develop AGI for the benefit of humanity. To that end, the co-founders—who included Altman and Elon Musk—set the organization up as a nonprofit and pledged to share research with other institutions. Democratic participation in the technology’s development was a key principle, they agreed, hence the company’s name. But by the time I started covering the company in 2019, these ideals were eroding. OpenAI’s executives had realized that the path they wanted to take would demand extraordinary amounts of money. Both Musk and Altman tried to take over as CEO. Altman won out. Musk left the organization in early 2018 and took his money with him. To plug the hole, Altman reformulated OpenAI’s legal structure, creating a new “capped-profit” arm within the nonprofit to raise more capital.

Since then, I’ve tracked OpenAI’s evolution through interviews with more than 90 current and former employees, including executives and contractors. The company declined my repeated interview requests and questions over the course of working on my book about it, which this story is adapted from; it did not reply when I reached out one more time before the article was published. (OpenAI also has a corporate partnership with The Atlantic.)

OpenAI’s dueling cultures—the ambition to safely develop AGI, and the desire to grow a massive user base through new product launches—would explode toward the end of 2023. Gravely concerned about the direction Altman was taking the company, Sutskever would approach his fellow board of directors, along with his colleague Mira Murati, then OpenAI’s chief technology officer; the board would subsequently conclude the need to push the CEO out. What happened next—with Altman’s ouster and then reinstatement—rocked the tech industry. Yet since then, OpenAI and Sam Altman have become more central to world affairs. Last week, the company unveiled an “OpenAI for Countries” initiative that would allow OpenAI to play a key role in developing AI infrastructure outside of the United States. And Altman has become an ally to the Trump administration, appearing, for example, at an event with Saudi officials this week and onstage with the president in January to announce a $500 billion AI-computing-infrastructure project.

Altman’s brief ouster—and his ability to return and consolidate power—is now crucial history to understand the company’s position at this pivotal moment for the future of AI development. Details have been missing from previous reporting on this incident, including information that sheds light on Sutskever and Murati’s thinking and the response from the rank and file. Here, they are presented for the first time, according to accounts from more than a dozen people who were either directly involved or close to the people directly involved, as well as their contemporaneous notes, plus screenshots of Slack messages, emails, audio recordings, and other corroborating evidence.

The altruistic OpenAI is gone, if it ever existed. What future is the company building now?

Before ChatGPT, sources told me, Altman seemed generally energized. Now he often appeared exhausted. Propelled into megastardom, he was dealing with intensified scrutiny and an overwhelming travel schedule. Meanwhile, Google, Meta, Anthropic, Perplexity, and many others were all developing their own generative-AI products to compete with OpenAI’s chatbot.

Many of Altman’s closest executives had long observed a particular pattern in his behavior: If two teams disagreed, he often agreed in private with each of their perspectives, which created confusion and bred mistrust among colleagues. Now Altman was also frequently bad-mouthing staffers behind their backs while pushing them to deploy products faster and faster. Team leads mirroring his behavior began to pit staff against one another. Sources told me that Greg Brockman, another of OpenAI’s co-founders and its president, added to the problems when he popped into projects and derailed long-standing plans with last-minute changes.

The environment within OpenAI was changing. Previously, Sutskever had tried to unite workers behind a common cause. Among employees, he had been known as a deep thinker and even something of a mystic, regularly speaking in spiritual terms. He wore shirts with animals on them to the office and painted them as well—a cuddly cat, cuddly alpacas, a cuddly fire-breathing dragon. One of his amateur paintings hung in the office, a trio of flowers blossoming in the shape of OpenAI’s logo, a symbol of what he always urged employees to build: “A plurality of humanity-loving AGIs.”

But by the middle of 2023—around the time he began speaking more regularly about the idea of a bunker—Sutskever was no longer just preoccupied by the possible cataclysmic shifts of AGI and superintelligence, according to sources familiar with his thinking. He was consumed by another anxiety: the erosion of his faith that OpenAI could even keep up its technical advancements to reach AGI, or bear that responsibility with Altman as its leader. Sutskever felt Altman’s pattern of behavior was undermining the two pillars of OpenAI’s mission, the sources said: It was slowing down research progress and eroding any chance at making sound AI-safety decisions.

Meanwhile, Murati was trying to manage the mess. She had always played translator and bridge to Altman. If he had adjustments to the company’s strategic direction, she was the implementer. If a team needed to push back against his decisions, she was their champion. When people grew frustrated with their inability to get a straight answer out of Altman, they sought her help. “She was the one getting stuff done,” a former colleague of hers told me. (Murati declined to comment.)

During the development of GPT-4, Altman and Brockman’s dynamic had nearly led key people to quit, sources told me. Altman was also seemingly trying to circumvent safety processes for expediency. At one point, sources close to the situation said, he had told Murati that OpenAI’s legal team had cleared the latest model, GPT-4 Turbo, to skip review by the company’s Deployment Safety Board, or DSB—a committee of Microsoft and OpenAI representatives who evaluated whether OpenAI’s most powerful models were ready for release. But when Murati checked in with Jason Kwon, who oversaw the legal team, Kwon had no idea how Altman had gotten that impression.

In the summer, Murati attempted to give Altman detailed feedback on these issues, according to multiple sources. It didn’t work. The CEO iced her out, and it took weeks to thaw the relationship.

By fall, Sutskever and Murati both drew the same conclusion. They separately approached the three board members who were not OpenAI employees—Helen Toner, a director at Georgetown University’s Center for Security and Emerging Technology; the roboticist Tasha McCauley; and one of Quora’s co-founders and its CEO, Adam D’Angelo—and raised concerns about Altman’s leadership. “I don’t think Sam is the guy who should have the finger on the button for AGI,” Sutskever said in one such meeting, according to notes I reviewed. “I don’t feel comfortable about Sam leading us to AGI,” Murati said in another, according to sources familiar with the conversation.

That Sutskever and Murati both felt this way had a huge effect on Toner, McCauley, and D’Angelo. For close to a year, they, too, had been processing their own grave concerns about Altman, according to sources familiar with their thinking. Among their many doubts, the three directors had discovered through a series of chance encounters that he had not been forthcoming with them about a range of issues, from a breach in the DSB’s protocols to the legal structure of OpenAI Startup Fund, a dealmaking vehicle that was meant to be under the company but that instead Altman owned himself.

If two of Altman’s most senior deputies were sounding the alarm on his leadership, the board had a serious problem. Sutskever and Murati were not the first to raise these kinds of issues, either. In total, the three directors had heard similar feedback over the years from at least five other people within one to two levels of Altman, the sources said. By the end of October, Toner, McCauley, and D’Angelo began to meet nearly daily on video calls, agreeing that Sutskever’s and Murati’s feedback about Altman, and Sutskever’s suggestion to fire him, warranted serious deliberation.

As they did so, Sutskever sent them long dossiers of documents and screenshots that he and Murati had gathered in tandem with examples of Altman’s behaviors. The screenshots showed at least two more senior leaders noting Altman’s tendency to skirt around or ignore processes, whether they’d been instituted for AI-safety reasons or to smooth company operations. This included, the directors learned, Altman’s apparent attempt to skip DSB review for GPT-4 Turbo.

By Saturday, November 11, the independent directors had made their decision. As Sutskever suggested, they would remove Altman and install Murati as interim CEO. On November 17, 2023, at about noon Pacific time, Sutskever fired Altman on a Google Meet with the three independent board members. Sutskever then told Brockman on another Google Meet that Brockman would no longer be on the board but would retain his role at the company. A public announcement went out immediately.

For a brief moment, OpenAI’s future was an open question. It might have taken a path away from aggressive commercialization and Altman. But this is not what happened.

After what had seemed like a few hours of calm and stability, including Murati having a productive conversation with Microsoft—at the time OpenAI’s largest financial backer—she had suddenly called the board members with a new problem. Altman and Brockman were telling everyone that Altman’s removal had been a coup by Sutskever, she said.

It hadn’t helped that, during a company all-hands to address employee questions, Sutskever had been completely ineffectual with his communication.

“Was there a specific incident that led to this?” Murati had read aloud from a list of employee questions, according to a recording I obtained of the meeting.

“Many of the questions in the document will be about the details,” Sutskever responded. “What, when, how, who, exactly. I wish I could go into the details. But I can’t.”

“Are we worried about the hostile takeover via coercive influence of the existing board members?” Sutskever read from another employee’s question later.

“Hostile takeover?” Sutskever repeated, a new edge in his voice. “The OpenAI nonprofit board has acted entirely in accordance to its objective. It is not a hostile takeover. Not at all. I disagree with this question.”

Shortly thereafter, the remaining board, including Sutskever, confronted enraged leadership over a video call. Kwon, the chief strategy officer, and Anna Makanju, the vice president of global affairs, were leading the charge in rejecting the board’s characterization of Altman’s behavior as “not consistently candid,” according to sources present at the meeting. They demanded evidence to support the board’s decision, which the members felt they couldn’t provide without outing Murati, according to sources familiar with their thinking.

In rapid succession that day, Brockman quit in protest, followed by three other senior researchers. Through the evening, employees only got angrier, fueled by compounding problems: among them, a lack of clarity from the board about their reasons for firing Altman; a potential loss of a tender offer, which had given some the option to sell what could amount to millions of dollars’ worth of their equity; and a growing fear that the instability at the company could lead to its unraveling, which would squander so much promise and hard work.

Faced with the possibility of OpenAI falling apart, Sutskever’s resolve immediately started to crack. OpenAI was his baby, his life; its dissolution would destroy him. He began to plead with his fellow board members to reconsider their position on Altman.

Meanwhile, Murati’s interim position was being challenged. The conflagration within the company was also spreading to a growing circle of investors. Murati now was unwilling to explicitly throw her weight behind the board’s decision to fire Altman. Though her feedback had helped instigate it, she had not participated herself in the deliberations.

By Monday morning, the board had lost. Murati and Sutskever flipped sides. Altman would come back; there was no other way to save OpenAI.

I was already working on a book about OpenAI at the time, and in the weeks that followed the board crisis, friends, family, and media would ask me dozens of times: What did all this mean, if anything? To me, the drama highlighted one of the most urgent questions of our generation: How do we govern artificial intelligence? With AI on track to rewire a great many other crucial functions in society, that question is really asking: How do we ensure that we’ll make our future better, not worse?

The events of November 2023 illustrated in the clearest terms just how much a power struggle among a tiny handful of Silicon Valley elites is currently shaping the future of this technology. And the scorecard of this centralized approach to AI development is deeply troubling. OpenAI today has become everything that it said it would not be. It has turned into a nonprofit in name only, aggressively commercializing products such as ChatGPT and seeking historic valuations. It has grown ever more secretive, not only cutting off access to its own research but shifting norms across the industry to no longer share meaningful technical details about AI models. In the pursuit of an amorphous vision of progress, its aggressive push on the limits of scale has rewritten the rules for a new era of AI development. Now every tech giant is racing to out-scale one another, spending sums so astronomical that even they have scrambled to redistribute and consolidate their resources. What was once unprecedented has become the norm.

As a result, these AI companies have never been richer. In March, OpenAI raised $40 billion, the largest private tech-funding round on record, and hit a $300 billion valuation. Anthropic is valued at more than $60 billion. Near the end of last year, the six largest tech giants together had seen their market caps increase by more than $8 trillion after ChatGPT. At the same time, more and more doubts have risen about the true economic value of generative AI, including a growing body of studies that have shown that the technology is not translating into productivity gains for most workers, while it’s also eroding their critical thinking.

In a November Bloomberg article reviewing the generative-AI industry, the staff writers Parmy Olson and Carolyn Silverman summarized it succinctly. The data, they wrote, “raises an uncomfortable prospect: that this supposedly revolutionary technology might never deliver on its promise of broad economic transformation, but instead just concentrate more wealth at the top.”

Meanwhile, it’s not just a lack of productivity gains that many in the rest of the world are facing. The exploding human and material costs are settling onto wide swaths of society, especially the most vulnerable: people I met around the world, whether workers and rural residents in the global North or impoverished communities in the global South, all suffering new degrees of precarity. Workers in Kenya earned abysmal wages to filter out violence and hate speech from OpenAI’s technologies, including ChatGPT. Artists are being replaced by the very AI models that were built from their work without their consent or compensation. The journalism industry is atrophying as generative-AI technologies spawn heightened volumes of misinformation. Before our eyes, we’re seeing an ancient story repeat itself: Like empires of old, the new empires of AI are amassing extraordinary riches across space and time at great expense to everyone else.

To quell the rising concerns about generative AI’s present-day performance, Altman has trumpeted the future benefits of AGI ever louder. In a September 2024 blog post, he declared that the “Intelligence Age,” characterized by “massive prosperity,” would soon be upon us. At this point, AGI is largely rhetorical—a fantastical, all-purpose excuse for OpenAI to continue pushing for ever more wealth and power. Under the guise of a civilizing mission, the empire of AI is accelerating its global expansion and entrenching its power.

As for Sutskever and Murati, both parted ways with OpenAI after what employees now call “The Blip,” joining a long string of leaders who have left the organization after clashing with Altman. Like many of the others who failed to reshape OpenAI, the two did what has become the next-most-popular option: They each set up their own shops, to compete for the future of this technology.


This essay has been adapted from Karen Hao’s forthcoming book, Empire of AI.
