
ChatGPT’s macOS desktop just got a whole lot better on the 11th day of OpenAI


With the holiday season upon us, many companies are finding ways to take advantage through deals, promotions, or other campaigns. OpenAI has found a way to participate with its “12 days of OpenAI” event series.

On Wednesday, OpenAI announced via an X post that, starting Dec. 5, the company would host 12 days of live streams and release “a bunch of new things, big and small.”

Also: I’m a ChatGPT power user – here’s why Canvas is its best productivity feature

Here’s everything you need to know about the campaign, as well as a round-up of every day’s drops. 

What are the ’12 days of OpenAI’?

OpenAI CEO Sam Altman shared more details about the event, which kicked off at 10 a.m. PT on Dec. 5 and runs daily for 12 weekdays, each day bringing a live stream with a launch or demo. The launches will include both “big ones” and “stocking stuffers,” according to Altman. 

What’s dropped so far?

Thursday, December 19

On the second-to-last day of “12 days of OpenAI,” the company focused on releases for its macOS desktop app and its interoperability with other apps. 

  • The macOS desktop app can now see the apps you’re working in and help automate that work with ChatGPT. More releases of this nature are coming in 2025; in the meantime, OpenAI introduced the three features below. 
  • Using the “Work with Apps” button, users can now work with many more coding apps. The list includes BBEdit, MATLAB, Nova, Script Editor, TextMate, Android Studio, AppCode, CLion, DataGrip, GoLand, IntelliJ IDEA, PhpStorm, PyCharm, RubyMine, RustRover, WebStorm, Prompt, and Warp. 
  • For users who use ChatGPT for writing, the desktop app now supports Apple Notes, Quip, and Notion. 
  • Lastly, the desktop app for macOS now supports Advanced Voice Mode while working with other apps.
  • These features have already shipped. All you need is the latest version of the macOS app and a Plus, Pro, Team, Enterprise, or Edu subscription. 
  • To ease privacy concerns, OpenAI says ChatGPT will only work with apps when manually prompted, and when the feature is active, users can see exactly what will be attached to the message. 
  • “Day 12, we have something super special, so don’t miss it,” teased OpenAI about its upcoming Friday release. 

Wednesday, December 18

Have you ever wanted to use ChatGPT without a Wi-Fi connection? Now, all you have to do is place a phone call.  Here’s what OpenAI released on the 10th day:

  • By dialing 1-800-ChatGPT, you can now access the chatbot via a toll-free number. OpenAI encourages users to save ChatGPT in their contacts for easy access.
  • Users can call from anywhere in the US and get 15 minutes of free ChatGPT calls per month; in other countries, users can message ChatGPT on WhatsApp instead.
  • In WhatsApp, users can send a prompt as a text message, just as they would text any other person in their contacts. 
  • The phone call feature works on any phone, from a smartphone to a flip phone — even a rotary phone.  
  • The presenters said it is meant to make ChatGPT more accessible to more users. 

Tuesday, December 17

The releases on the ninth day, dubbed “Mini Dev Day,” all focused on developer features and updates. These launches include:  

  • The o1 model is finally out of preview in the API with support for function calling, structured outputs, developer messages, vision capabilities, and lower latency, according to the company. 
  • o1 in the API also features a new parameter: “reasoning effort.” It lets developers control how much reasoning the model puts into formulating an answer, which helps with cost efficiency (see the sketch after this list). 
  • OpenAI also introduced WebRTC support for the Realtime API, which makes it easier for developers “to build and scale real-time voice products across platforms.”
  • The Realtime API also got a 60% audio token price drop, support for GPT-4o mini, and more control over responses.
  • The fine-tuning API now supports Preference Fine-Tuning, which allows users to “Optimize the model to favor desired behavior by reinforcing preferred responses and reducing the likelihood of unpreferred ones,” according to OpenAI.  
  • OpenAI also introduced new Go and Java SDKs in beta. 
  • The presenters will hold a one-hour “AMA” (ask me anything) session on OpenAI’s GitHub after the live stream. 
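
For developers, the effort setting is essentially a one-line change. Here is a minimal sketch, assuming the official `openai` Python SDK and an `OPENAI_API_KEY` in the environment; the model name, prompt, and effort value are illustrative, and the exact parameter surface may differ by SDK version.

```python
# Minimal sketch: calling o1 with the new "reasoning effort" parameter.
# Assumes the official `openai` Python SDK; values are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1",
    reasoning_effort="low",  # "low", "medium", or "high": trades depth for cost/latency
    messages=[
        # "developer" messages are the system-style instructions o1 now supports
        {"role": "developer", "content": "Answer in one short sentence."},
        {"role": "user", "content": "What is the time complexity of binary search?"},
    ],
)
print(response.choices[0].message.content)
```

Dialing effort down for simple queries and up only for the hardest ones is the cost-efficiency lever the company described.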

Monday, December 16 

The drops for the second Monday in the 12 days of OpenAI series all focused on Search in ChatGPT. 

  • The AI search engine is available starting today to all signed-in users, including free users, everywhere they can access ChatGPT. The feature was previously only available to ChatGPT Plus users. 
  • The search experience, which allows users to browse the web from ChatGPT, got faster and better on mobile and now has an enriched map experience. The upgrades include image-rich visual results.
  • Search is integrated into Advanced Voice Mode, meaning you can now search as you talk to ChatGPT. To use it, just activate Advanced Voice Mode the way you normally would and ask your query aloud; ChatGPT will answer verbally, pulling from the web. 
  • OpenAI also teased developers, saying, “Tomorrow is for you,” and calling the upcoming livestream a “mini Dev Day.”

Friday, December 13

One of OpenAI’s most highly requested features has been an organizational feature to better keep track of your conversations. On Friday, OpenAI delivered a new feature called “Projects.”

  • Projects is a new way to organize and customize your chats in ChatGPT, part of OpenAI’s continuing effort to optimize the core ChatGPT experience.
  • When creating a Project, you can include a title, a customized folder color, relevant project files, instructions for ChatGPT on how it can best help you with the project, and more in one place. 
  • In a Project, you can start a chat and add previous chats from the sidebar. ChatGPT can also answer questions using the Project’s context in a regular chat format. Chats are saved in the Project, making it easier to pick up a conversation later and know exactly where to look. 
  • It will roll out to Plus, Pro, and Team users starting today. OpenAI says it’s coming to free users as soon as possible; Enterprise and Edu users will see it early next year. 

Thursday, December 12

When the live stream started, OpenAI addressed the elephant in the room — the fact that the company’s live stream went down the day before. OpenAI apologized for the inconvenience and said its team is working on a post-mortem to be posted later. 

Then it got straight into the news with another highly anticipated announcement: 

  • Advanced Voice Mode now has screen-sharing and visual capabilities, meaning it can assist with the context of what it is viewing, whether that be from your phone camera or what’s on your screen. 
  • These capabilities build on what Advanced Voice could already do well: engaging in casual conversation much as a human would. Those natural-feeling conversations can be interrupted, span multiple turns, and follow non-linear trains of thought. 
  • In the demo, the user got directions from ChatGPT’s Advanced Voice on how to make a cup of coffee. As the demoer went through the steps, ChatGPT verbally offered insights and directions. 
  • There’s another bonus for the Christmas season: a new Santa voice. To activate it, all users have to do is click the snowflake icon. Santa is rolling out today everywhere users can access ChatGPT voice mode, and the first time you talk to Santa, your usage limit resets even if you have already reached it, so you can have a conversation with him. 
  • Video and screen sharing are rolling out in the latest mobile apps starting today and throughout next week to all Team users and most Pro and Plus subscribers. Pro and Plus subscribers in Europe will get access “as soon as we can,” and Enterprise and Edu users will get access early next year. 

Wednesday, December 11

Apple released iOS 18.2 on Wednesday. The release includes integrations with ChatGPT across Siri, Writing Tools, and Visual Intelligence. As a result, the live stream focused on walking through the integration. 

  • Siri can now recognize when you ask questions outside its scope that could benefit from being answered by ChatGPT instead. In those instances, it will ask if you’d like to process the query using ChatGPT. Before any request is sent to ChatGPT, a message notifying the user and asking for permission will always appear, placing control in the user’s hands as much as possible. 
  • Visual Intelligence refers to a new feature for the iPhone 16 lineup that users can access by tapping the Camera Control button. Once the camera is open, users can point it at something and search the web with Google, or use ChatGPT to learn more about what they are viewing or perform other tasks such as translating or summarizing text. 
  • Writing Tools now features a new “Compose” tool, which allows users to create text from scratch by leveraging ChatGPT. With the feature, users can even generate images using DALL-E. 

All of the above features are subject to ChatGPT’s daily usage limits, the same way that users would reach limits while using the free version of the model on ChatGPT. Users can choose whether or not to enable the ChatGPT integration in Settings.

Read more about it here: iOS 18.2 rolls out to iPhones: Try these 6 new AI features today

Tuesday, December 10 

  • Canvas is coming to all web users, regardless of plan, in GPT-4o, meaning it is no longer just available in beta for ChatGPT Plus users.
  • Canvas has been built into GPT-4o natively, meaning you can simply ask for Canvas instead of selecting it from the model-picker toggle. 
  • The Canvas interface is the same one beta users saw in ChatGPT Plus: a panel on the left-hand side shows the Q&A exchange, while a right-hand panel shows your project, displaying edits as they happen, along with shortcuts. 
  • Canvas can also be used with custom GPTs. It is turned on by default when creating a new one, and there is an option to add Canvas to existing GPTs. 
  • Canvas can also run Python code directly, allowing ChatGPT to execute coding tasks such as fixing bugs. 

Read more about it here: I’m a ChatGPT power user – and Canvas is still my favorite productivity feature a month later

Monday, December 9

OpenAI teased the third-day announcement as “something you’ve been waiting for,” followed by the much-anticipated drop of its video model — Sora.  Here’s what you need to know:

  • Known as Sora Turbo, the video model is smarter and cheaper than the version previewed in February. 
  • Access begins rolling out in the US later today; users need only a ChatGPT Plus or Pro subscription.
  • Sora can generate video-to-video, text-to-video, and more. 
  • ChatGPT Plus users can generate up to 50 videos per month at 480p resolution or fewer videos at 720p. The Pro Plan offers 10x more usage. 
  • Sora features an explore page where users can view each other’s creations. Users can click on any video to see how it was created. 
  • A live demo showed the model in use. The demoers entered a prompt and picked the aspect ratio, duration, and even presets. I found the live demo video results to be realistic and stunning. 
  • OpenAI also unveiled Storyboard, a tool that lets users generate inputs for every frame in a sequence. 

Friday, December 6

On the second day of “shipmas,” OpenAI expanded access to its Reinforcement Fine-Tuning Research Program:

  • The Reinforcement Fine-Tuning program allows developers and machine learning engineers to fine-tune OpenAI models to “excel at specific sets of complex, domain-specific tasks,” according to OpenAI. 
  • Reinforcement Fine-Tuning refers to a customization technique in which developers define a model’s behavior by giving it tasks and grading its outputs. The model uses this feedback as a guide to improve, becoming better at reasoning through similar problems and more accurate overall (see the conceptual sketch after this list).
  • OpenAI encourages research institutes, universities, and enterprises to apply to the program, particularly those that handle narrow sets of complex tasks, could benefit from AI assistance, and work on tasks that have an objectively correct answer. 
  • Spots are limited; interested applicants can apply by filling out this form. 
  • OpenAI aims to make Reinforcement Fine-Tuning publicly available in early 2025.
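
To make “grading the output” concrete, here is a conceptual sketch in Python. This is not OpenAI’s actual Reinforcement Fine-Tuning interface; it only illustrates the core idea of a grader that maps each model output to a 0-to-1 score, which becomes the reinforcement signal the model improves against. The task and grading rules are hypothetical.

```python
# Conceptual sketch of a grader in the Reinforcement Fine-Tuning style.
# Not OpenAI's API: a toy illustration of scoring outputs on a 0-to-1 scale.

def grade(output: str, reference: str) -> float:
    """Toy grader: full credit for an exact match, partial credit when the
    reference answer appears inside a longer response, zero otherwise."""
    if output.strip() == reference.strip():
        return 1.0
    if reference.strip() in output:
        return 0.5
    return 0.0

# Example: score candidate answers to a hypothetical domain task.
candidates = ["The variant is in BRCA1.", "BRCA1", "Insufficient data"]
scores = [grade(c, "BRCA1") for c in candidates]
print(scores)  # [0.5, 1.0, 0.0]
```

During training, graders like this score the model’s attempts, and responses that earn higher scores are reinforced.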

Thursday, December 5 

OpenAI started with a bang, unveiling two major upgrades to its chatbot: a new tier of ChatGPT subscription, ChatGPT Pro, and the full version of the company’s o1 model. 

The full version of o1: 

  • Will be better for all kinds of prompts, beyond math and science
  • Will make major mistakes about 34% less often than o1-preview, while thinking about 50% faster
  • Rolls out today, replacing o1-preview for all ChatGPT Plus and now Pro users 
  • Lets users input images, as seen in the demo, to provide multi-modal reasoning (reasoning on both text and images) 

ChatGPT Pro:

  • Is meant for ChatGPT Plus superusers, granting them unlimited access to the best OpenAI has to offer, including unlimited access to OpenAI o1-mini, GPT-4o, and Advanced Voice Mode
  • Features o1 pro mode, which uses more computing to reason through the hardest science and math problems 
  • Costs $200 per month 

Where can you access the live stream?

The live streams are held on the OpenAI website, and posted to its YouTube channel immediately after. To make access easier, OpenAI will also post a link to the live stream on its X account 10 minutes before it starts, which will be at approximately 10 a.m. PT/1 p.m. ET daily. 


Generative AI: everything to know about the technology behind chatbots like ChatGPT


Whether you realize it or not, artificial intelligence is everywhere. It sits behind the chatbots you talk to online, the playlists you stream, and the personalized ads that pop up as you scroll. And now it is taking on a more public persona. Think of Meta AI, which is now integrated into apps like Facebook, Messenger, and WhatsApp; or Google's Gemini, working in the background across the company's platforms; or Apple Intelligence, rolling out across iPhones now.

AI has a long history, going back to a 1956 conference at Dartmouth that first discussed artificial intelligence as a concept. Milestones along the way include ELIZA, essentially the first chatbot, developed in 1964 by MIT computer scientist Joseph Weizenbaum, and, jumping forward 40 years, the 2004 debut of Google's autocomplete feature.

Then came 2022 and ChatGPT's rise to fame. Generative AI developments and product launches have accelerated rapidly since then, including Google Bard (now Gemini), Microsoft Copilot, IBM watsonx.ai, and Meta's open-source Llama models.

Let's break down what generative AI is, how it differs from "regular" artificial intelligence, and whether gen AI can live up to expectations.

Generative AI in a nutshell

At its core, generative AI refers to artificial intelligence systems designed to produce new content based on the patterns and data they have learned. Rather than just analyzing numbers or predicting trends, these systems generate creative outputs such as text, images, music, videos, and software code.

Some of the most popular generative AI tools on the market include ChatGPT, DALL-E, Midjourney, and Adobe Firefly.

Chief among its abilities, ChatGPT can create human-like conversations or essays based on a few simple prompts. DALL-E and Midjourney create detailed artwork from a brief description, while Adobe Firefly focuses on image editing and design.

[Image: a ChatGPT-generated picture of a big-eyed squirrel holding an acorn. ChatGPT / screenshot by CNET]

AI that isn't generative

Not all AI is generative. While gen AI focuses on creating new content, traditional AI excels at analyzing data and making predictions. That includes technologies like image recognition and predictive text. It is also used for novel solutions in:

  • Science
  • Medical diagnosis
  • Weather forecasting
  • Fraud detection
  • Financial analysis for forecasting and reporting

The AI that beat human grand champions at chess and the board game Go was not generative AI.

These systems may not be as flashy as gen AI, but classic artificial intelligence is a huge part of the technology we rely on every day.

How does gen AI work?

Behind the magic of generative AI are large language models and advanced machine-learning techniques. These systems are trained on enormous amounts of data, such as entire libraries of books, millions of images, years of recorded music, and data scraped from the internet.

AI developers, from tech giants to startups, are well aware that AI is only as good as the data you feed it. Fed low-quality data, AI can produce biased results. It is something even the biggest players in the field, such as Google, have not been immune to.

The AI learns patterns, relationships, and structures within this data during training. Then, when prompted, it applies that knowledge to generate something new. For example, if you ask a gen AI tool to write a poem about the ocean, it is not just pulling pre-written verses from a database. Instead, it uses what it learned about poetry, oceans, and the structure of language to create a completely original piece.
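
A toy example makes that learn-then-generate loop concrete. The sketch below is nothing like a full language model, but it shows the same two phases in miniature: a "training" pass that counts which word tends to follow which, and a "generation" pass that samples from those learned patterns to produce a new sequence. The corpus and output are illustrative.

```python
# Toy illustration of "learn patterns, then generate": a bigram word model.
import random
from collections import defaultdict

corpus = "the ocean waves roll and the ocean wind sings and the waves sing".split()

# "Training": record which words follow each word in the data.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

# "Generation": start from a word and repeatedly sample a plausible successor.
word, output = "the", ["the"]
for _ in range(8):
    followers = transitions.get(word)
    if not followers:  # dead end: no observed successor
        break
    word = random.choice(followers)
    output.append(word)

print(" ".join(output))  # e.g., "the ocean wind sings and the ocean waves roll"
```

A real LLM does this with billions of learned parameters over subword tokens rather than a lookup table, which is what lets it produce genuinely novel sentences instead of stitched-together fragments.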

[Image: a 12-line poem titled "The Ocean's Whisper," generated by ChatGPT. ChatGPT / screenshot by CNET]

It is impressive, but it is not perfect. Sometimes the results can feel a little off. Maybe the AI misunderstands your request, or it gets overly creative in ways you did not expect. It can confidently serve up completely false information, and it is up to you to fact-check it. Those quirks, often called hallucinations, are part of what makes generative AI both fascinating and frustrating.

Generative AI's capabilities keep growing. It can now understand multiple types of data by combining technologies like machine learning, natural language processing, and computer vision. The result is called multimodal AI: systems that can integrate some combination of text, images, video, and speech within a single framework, offering more contextually relevant and accurate responses. ChatGPT's Advanced Voice Mode is one example, as is Google's Project Astra.

Challenges with generative AI

There is no shortage of generative AI tools, each with its own unique flair. These tools have sparked creativity, but they have also raised many questions beyond bias and hallucinations, such as: Who owns the rights to AI-generated content? And what material is fair game, or off-limits, for AI companies to use to train their language models? See, for example, The New York Times' lawsuit against OpenAI and Microsoft.

Other concerns, and they are no small matters, involve privacy, accountability in AI, AI-generated deepfakes, and job displacement.

"Writing, animation, photography, illustration, graphic design: AI tools can now handle all of that with surprising ease. But that doesn't mean these roles will disappear. It may simply mean that creatives will need to upskill and use these tools to amplify their own work," Fang Liu, a professor at the University of Notre Dame and co-editor-in-chief of ACM Transactions on Probabilistic Machine Learning, told CNET.

"It also offers a way in for people who may lack a particular skill, like someone with a clear vision who can't draw but can describe it through a prompt. So no, I don't think it will disrupt the creative industry. Hopefully, it will be a co-creation or an augmentation, not a replacement."

Another issue is the environmental impact: training large AI models consumes a lot of energy, leading to large carbon footprints. The rapid rise of gen AI over the past few years has accelerated concerns about AI risks in general, and governments are ramping up AI regulation to ensure responsible and ethical development, most notably the European Union's AI Act.

Reception of generative AI

Many people have interacted with chatbots in customer service or used virtual assistants like Siri, Alexa, and Google Assistant, which are now on the cusp of becoming gen AI power tools. All of that, along with apps for ChatGPT, Claude, and other new tools, is putting AI in your hands. And public reaction to generative AI has been mixed. Many users enjoy the convenience and creativity it offers, especially for things like writing help, image creation, homework support, and productivity.

Meanwhile, in McKinsey's 2024 global AI survey, 65% of respondents said their organizations regularly use generative AI, nearly double the figure reported just 10 months earlier. Industries like healthcare and finance are using gen AI to streamline business operations and automate mundane tasks.

As mentioned, there are obvious concerns about ethics, transparency, job losses, and the potential misuse of personal data. Those are the main criticisms behind the resistance to accepting generative AI.

People who use generative AI tools will also find that the results are not always ready for prime time. Despite technological advances, most people can recognize when content has been created with gen AI, whether it is articles, images, or music.

AI has hijacked certain phrases I have always used, so I often have to self-edit my writing because it can read like AI. Many AI-written articles contain phrases like "in the age of," or everything is a "testament to" or a "tapestry of." AI lacks the emotion and experience that comes from, well, being a living, breathing human. As one artist explained on Quora, "what AI does is not the same as art that evolves from a thought in a human brain" and "it is not created from the passion found in a human heart."

Generative AI in everyday life

Generative AI is not just for techies or creative types. Once you get the hang of writing prompts, it has the potential to do much of the legwork for you across a variety of daily tasks.

Say you are planning a trip. Instead of scrolling through pages of search results, you ask a chatbot to plan your itinerary. Within seconds, you have a detailed plan tailored to your preferences. (That is the ideal, anyway. Please always fact-check its recommendations.)

A small-business owner who needs a marketing campaign but does not have a design team can use generative AI to create eye-catching visuals and even ask it to suggest ad copy.

[Image: a travel itinerary for New Orleans, created by ChatGPT. ChatGPT / screenshot by CNET]

Gen AI is here to stay

There has not been a tech advancement that caused such a boom since the internet and, later, the iPhone. Despite its challenges, generative AI is undeniably transformative. It is making creativity more accessible, helping businesses streamline workflows, and even inspiring entirely new ways of thinking and solving problems.

But perhaps what is most exciting is its potential; we are only scratching the surface of what these tools can do.

Frequently asked questions

What is an example of generative AI?

ChatGPT is probably the most popular example of generative AI. You give it a prompt, and it can generate text and images, write code, answer questions, summarize text, draft emails, and much more.

What is the difference between AI and generative AI?

Generative AI creates new content like text, images, or music, while traditional AI analyzes data, recognizes patterns or images, and makes predictions (for example, in medicine, science, and finance).


I tried 5 free 'ChatGPT clone' sites – don't try this at home


If you search "ChatGPT" in your browser, you are likely to stumble upon websites that appear to be powered by OpenAI but are not. One such site, chat.chatbotapp.ai, offers access to "GPT-3.5" for free and uses familiar branding.

But here's the thing: it is not run by OpenAI. And frankly, why use a potentially fake GPT-3.5 when you can use GPT-4o for free on the actual ChatGPT site?


What Really Happened When OpenAI Turned on Sam Altman


In the summer of 2023, Ilya Sutskever, a co-founder and the chief scientist of OpenAI, was meeting with a group of new researchers at the company. By all traditional metrics, Sutskever should have felt invincible: He was the brain behind the large language models that helped build ChatGPT, then the fastest-growing app in history; his company’s valuation had skyrocketed; and OpenAI was the unrivaled leader of the industry believed to power the future of Silicon Valley. But the chief scientist seemed to be at war with himself.

Sutskever had long believed that artificial general intelligence, or AGI, was inevitable—now, as things accelerated in the generative-AI industry, he believed AGI’s arrival was imminent, according to Geoff Hinton, an AI pioneer who was his Ph.D. adviser and mentor, and another person familiar with Sutskever’s thinking. (Many of the sources in this piece requested anonymity in order to speak freely about OpenAI without fear of reprisal.) To people around him, Sutskever seemed consumed by thoughts of this impending civilizational transformation. What would the world look like when a supreme AGI emerged and surpassed humanity? And what responsibility did OpenAI have to ensure an end state of extraordinary prosperity, not extraordinary suffering?

By then, Sutskever, who had previously dedicated most of his time to advancing AI capabilities, had started to focus half of his time on AI safety. He appeared to people around him as both boomer and doomer: more excited and afraid than ever before of what was to come. That day, during the meeting with the new researchers, he laid out a plan.

“Once we all get into the bunker—” he began, according to a researcher who was present.

“I’m sorry,” the researcher interrupted, “the bunker?”

“We’re definitely going to build a bunker before we release AGI,” Sutskever replied. Such a powerful technology would surely become an object of intense desire for governments globally. The core scientists working on the technology would need to be protected. “Of course,” he added, “it’s going to be optional whether you want to get into the bunker.”


Two other sources I spoke with confirmed that Sutskever commonly mentioned such a bunker. “There is a group of people—Ilya being one of them—who believe that building AGI will bring about a rapture,” the researcher told me. “Literally, a rapture.” (Sutskever declined to comment.)

Sutskever’s fears about an all-powerful AI may seem extreme, but they are not altogether uncommon, nor were they particularly out of step with OpenAI’s general posture at the time. In May 2023, the company’s CEO, Sam Altman, co-signed an open letter describing the technology as a potential extinction risk—a narrative that has arguably helped OpenAI center itself and steer regulatory conversations. Yet the concerns about a coming apocalypse would also have to be balanced against OpenAI’s growing business: ChatGPT was a hit, and Altman wanted more.

When OpenAI was founded, the idea was to develop AGI for the benefit of humanity. To that end, the co-founders—who included Altman and Elon Musk—set the organization up as a nonprofit and pledged to share research with other institutions. Democratic participation in the technology’s development was a key principle, they agreed, hence the company’s name. But by the time I started covering the company in 2019, these ideals were eroding. OpenAI’s executives had realized that the path they wanted to take would demand extraordinary amounts of money. Both Musk and Altman tried to take over as CEO. Altman won out. Musk left the organization in early 2018 and took his money with him. To plug the hole, Altman reformulated OpenAI’s legal structure, creating a new “capped-profit” arm within the nonprofit to raise more capital.

Since then, I’ve tracked OpenAI’s evolution through interviews with more than 90 current and former employees, including executives and contractors. The company declined my repeated interview requests and questions over the course of working on my book about it, which this story is adapted from; it did not reply when I reached out one more time before the article was published. (OpenAI also has a corporate partnership with The Atlantic.)

OpenAI’s dueling cultures—the ambition to safely develop AGI, and the desire to grow a massive user base through new product launches—would explode toward the end of 2023. Gravely concerned about the direction Altman was taking the company, Sutskever would approach his fellow board of directors, along with his colleague Mira Murati, then OpenAI’s chief technology officer; the board would subsequently conclude the need to push the CEO out. What happened next—with Altman’s ouster and then reinstatement—rocked the tech industry. Yet since then, OpenAI and Sam Altman have become more central to world affairs. Last week, the company unveiled an “OpenAI for Countries” initiative that would allow OpenAI to play a key role in developing AI infrastructure outside of the United States. And Altman has become an ally to the Trump administration, appearing, for example, at an event with Saudi officials this week and onstage with the president in January to announce a $500 billion AI-computing-infrastructure project.

Altman’s brief ouster—and his ability to return and consolidate power—is now crucial history to understand the company’s position at this pivotal moment for the future of AI development. Details have been missing from previous reporting on this incident, including information that sheds light on Sutskever and Murati’s thinking and the response from the rank and file. Here, they are presented for the first time, according to accounts from more than a dozen people who were either directly involved or close to the people directly involved, as well as their contemporaneous notes, plus screenshots of Slack messages, emails, audio recordings, and other corroborating evidence.

The altruistic OpenAI is gone, if it ever existed. What future is the company building now?

Before ChatGPT, sources told me, Altman seemed generally energized. Now he often appeared exhausted. Propelled into megastardom, he was dealing with intensified scrutiny and an overwhelming travel schedule. Meanwhile, Google, Meta, Anthropic, Perplexity, and many others were all developing their own generative-AI products to compete with OpenAI’s chatbot.

Many of Altman’s closest executives had long observed a particular pattern in his behavior: If two teams disagreed, he often agreed in private with each of their perspectives, which created confusion and bred mistrust among colleagues. Now Altman was also frequently bad-mouthing staffers behind their backs while pushing them to deploy products faster and faster. Team leads mirroring his behavior began to pit staff against one another. Sources told me that Greg Brockman, another of OpenAI’s co-founders and its president, added to the problems when he popped into projects and derailed long-standing plans with last-minute changes.

The environment within OpenAI was changing. Previously, Sutskever had tried to unite workers behind a common cause. Among employees, he had been known as a deep thinker and even something of a mystic, regularly speaking in spiritual terms. He wore shirts with animals on them to the office and painted them as well—a cuddly cat, cuddly alpacas, a cuddly fire-breathing dragon. One of his amateur paintings hung in the office, a trio of flowers blossoming in the shape of OpenAI’s logo, a symbol of what he always urged employees to build: “A plurality of humanity-loving AGIs.”

But by the middle of 2023—around the time he began speaking more regularly about the idea of a bunker—Sutskever was no longer just preoccupied by the possible cataclysmic shifts of AGI and superintelligence, according to sources familiar with his thinking. He was consumed by another anxiety: the erosion of his faith that OpenAI could even keep up its technical advancements to reach AGI, or bear that responsibility with Altman as its leader. Sutskever felt Altman’s pattern of behavior was undermining the two pillars of OpenAI’s mission, the sources said: It was slowing down research progress and eroding any chance at making sound AI-safety decisions.

Meanwhile, Murati was trying to manage the mess. She had always played translator and bridge to Altman. If he had adjustments to the company’s strategic direction, she was the implementer. If a team needed to push back against his decisions, she was their champion. When people grew frustrated with their inability to get a straight answer out of Altman, they sought her help. “She was the one getting stuff done,” a former colleague of hers told me. (Murati declined to comment.)

During the development of GPT-4, Altman and Brockman’s dynamic had nearly led key people to quit, sources told me. Altman was also seemingly trying to circumvent safety processes for expediency. At one point, sources close to the situation said, he had told Murati that OpenAI’s legal team had cleared the latest model, GPT-4 Turbo, to skip review by the company’s Deployment Safety Board, or DSB—a committee of Microsoft and OpenAI representatives who evaluated whether OpenAI’s most powerful models were ready for release. But when Murati checked in with Jason Kwon, who oversaw the legal team, Kwon had no idea how Altman had gotten that impression.

In the summer, Murati attempted to give Altman detailed feedback on these issues, according to multiple sources. It didn’t work. The CEO iced her out, and it took weeks to thaw the relationship.

By fall, Sutskever and Murati both drew the same conclusion. They separately approached the three board members who were not OpenAI employees—Helen Toner, a director at Georgetown University’s Center for Security and Emerging Technology; the roboticist Tasha McCauley; and one of Quora’s co-founders and its CEO, Adam D’Angelo—and raised concerns about Altman’s leadership. “I don’t think Sam is the guy who should have the finger on the button for AGI,” Sutskever said in one such meeting, according to notes I reviewed. “I don’t feel comfortable about Sam leading us to AGI,” Murati said in another, according to sources familiar with the conversation.

That Sutskever and Murati both felt this way had a huge effect on Toner, McCauley, and D’Angelo. For close to a year, they, too, had been processing their own grave concerns about Altman, according to sources familiar with their thinking. Among their many doubts, the three directors had discovered through a series of chance encounters that he had not been forthcoming with them about a range of issues, from a breach in the DSB’s protocols to the legal structure of OpenAI Startup Fund, a dealmaking vehicle that was meant to be under the company but that instead Altman owned himself.

If two of Altman’s most senior deputies were sounding the alarm on his leadership, the board had a serious problem. Sutskever and Murati were not the first to raise these kinds of issues, either. In total, the three directors had heard similar feedback over the years from at least five other people within one to two levels of Altman, the sources said. By the end of October, Toner, McCauley, and D’Angelo began to meet nearly daily on video calls, agreeing that Sutskever’s and Murati’s feedback about Altman, and Sutskever’s suggestion to fire him, warranted serious deliberation.

As they did so, Sutskever sent them long dossiers of documents and screenshots that he and Murati had gathered in tandem with examples of Altman’s behaviors. The screenshots showed at least two more senior leaders noting Altman’s tendency to skirt around or ignore processes, whether they’d been instituted for AI-safety reasons or to smooth company operations. This included, the directors learned, Altman’s apparent attempt to skip DSB review for GPT-4 Turbo.

By Saturday, November 11, the independent directors had made their decision. As Sutskever suggested, they would remove Altman and install Murati as interim CEO. On November 17, 2023, at about noon Pacific time, Sutskever fired Altman on a Google Meet with the three independent board members. Sutskever then told Brockman on another Google Meet that Brockman would no longer be on the board but would retain his role at the company. A public announcement went out immediately.

For a brief moment, OpenAI’s future was an open question. It might have taken a path away from aggressive commercialization and Altman. But this is not what happened.

After what had seemed like a few hours of calm and stability, including Murati having a productive conversation with Microsoft—at the time OpenAI’s largest financial backer—she had suddenly called the board members with a new problem. Altman and Brockman were telling everyone that Altman’s removal had been a coup by Sutskever, she said.

It hadn’t helped that, during a company all-hands to address employee questions, Sutskever had been completely ineffectual with his communication.

“Was there a specific incident that led to this?” Murati had read aloud from a list of employee questions, according to a recording I obtained of the meeting.

“Many of the questions in the document will be about the details,” Sutskever responded. “What, when, how, who, exactly. I wish I could go into the details. But I can’t.”

“Are we worried about the hostile takeover via coercive influence of the existing board members?” Sutskever read from another employee later.

“Hostile takeover?” Sutskever repeated, a new edge in his voice. “The OpenAI nonprofit board has acted entirely in accordance to its objective. It is not a hostile takeover. Not at all. I disagree with this question.”

Shortly thereafter, the remaining board, including Sutskever, confronted enraged leadership over a video call. Kwon, the chief strategy officer, and Anna Makanju, the vice president of global affairs, were leading the charge in rejecting the board’s characterization of Altman’s behavior as “not consistently candid,” according to sources present at the meeting. They demanded evidence to support the board’s decision, which the members felt they couldn’t provide without outing Murati, according to sources familiar with their thinking.

In rapid succession that day, Brockman quit in protest, followed by three other senior researchers. Through the evening, employees only got angrier, fueled by compounding problems: among them, a lack of clarity from the board about their reasons for firing Altman; a potential loss of a tender offer, which had given some the option to sell what could amount to millions of dollars’ worth of their equity; and a growing fear that the instability at the company could lead to its unraveling, which would squander so much promise and hard work.

Faced with the possibility of OpenAI falling apart, Sutskever’s resolve immediately started to crack. OpenAI was his baby, his life; its dissolution would destroy him. He began to plead with his fellow board members to reconsider their position on Altman.

Meanwhile, Murati’s interim position was being challenged. The conflagration within the company was also spreading to a growing circle of investors. Murati now was unwilling to explicitly throw her weight behind the board’s decision to fire Altman. Though her feedback had helped instigate it, she had not participated herself in the deliberations.

By Monday morning, the board had lost. Murati and Sutskever flipped sides. Altman would come back; there was no other way to save OpenAI.

I was already working on a book about OpenAI at the time, and in the weeks that followed the board crisis, friends, family, and media would ask me dozens of times: What did all this mean, if anything? To me, the drama highlighted one of the most urgent questions of our generation: How do we govern artificial intelligence? With AI on track to rewire a great many other crucial functions in society, that question is really asking: How do we ensure that we’ll make our future better, not worse?

The events of November 2023 illustrated in the clearest terms just how much a power struggle among a tiny handful of Silicon Valley elites is currently shaping the future of this technology. And the scorecard of this centralized approach to AI development is deeply troubling. OpenAI today has become everything that it said it would not be. It has turned into a nonprofit in name only, aggressively commercializing products such as ChatGPT and seeking historic valuations. It has grown ever more secretive, not only cutting off access to its own research but shifting norms across the industry to no longer share meaningful technical details about AI models. In the pursuit of an amorphous vision of progress, its aggressive push on the limits of scale has rewritten the rules for a new era of AI development. Now every tech giant is racing to out-scale one another, spending sums so astronomical that even they have scrambled to redistribute and consolidate their resources. What was once unprecedented has become the norm.

As a result, these AI companies have never been richer. In March, OpenAI raised $40 billion, the largest private tech-funding round on record, and hit a $300 billion valuation. Anthropic is valued at more than $60 billion. Near the end of last year, the six largest tech giants together had seen their market caps increase by more than $8 trillion after ChatGPT. At the same time, more and more doubts have risen about the true economic value of generative AI, including a growing body of studies that have shown that the technology is not translating into productivity gains for most workers, while it’s also eroding their critical thinking.

In a November Bloomberg article reviewing the generative-AI industry, the staff writers Parmy Olson and Carolyn Silverman summarized it succinctly. The data, they wrote, “raises an uncomfortable prospect: that this supposedly revolutionary technology might never deliver on its promise of broad economic transformation, but instead just concentrate more wealth at the top.”

Meanwhile, it’s not just a lack of productivity gains that many in the rest of the world are facing. The exploding human and material costs are settling onto wide swaths of society, especially the most vulnerable, people I met around the world, whether workers and rural residents in the global North or impoverished communities in the global South, all suffering new degrees of precarity. Workers in Kenya earned abysmal wages to filter out violence and hate speech from OpenAI’s technologies, including ChatGPT. Artists are being replaced by the very AI models that were built from their work without their consent or compensation. The journalism industry is atrophying as generative-AI technologies spawn heightened volumes of misinformation. Before our eyes, we’re seeing an ancient story repeat itself: Like empires of old, the new empires of AI are amassing extraordinary riches across space and time at great expense to everyone else.

To quell the rising concerns about generative AI’s present-day performance, Altman has trumpeted the future benefits of AGI ever louder. In a September 2024 blog post, he declared that the “Intelligence Age,” characterized by “massive prosperity,” would soon be upon us. At this point, AGI is largely rhetorical—a fantastical, all-purpose excuse for OpenAI to continue pushing for ever more wealth and power. Under the guise of a civilizing mission, the empire of AI is accelerating its global expansion and entrenching its power.

As for Sutskever and Murati, both parted ways with OpenAI after what employees now call “The Blip,” joining a long string of leaders who have left the organization after clashing with Altman. Like many of the others who failed to reshape OpenAI, the two did what has become the next-most-popular option: They each set up their own shops, to compete for the future of this technology.


This essay has been adapted from Karen Hao’s forthcoming book, Empire of AI.



