One of the more intriguing discoveries about ChatGPT is that it can write pretty good code. I first tested this out in 2023 when I asked it to write a WordPress plugin my wife could use on her website. ChatGPT did a fine job, but it was a simple project.
So, how can you use ChatGPT to write code as part of your daily coding practice? Here’s a quick summary:
ChatGPT can produce both useful and unusable code. For best results, provide clear and detailed prompts.
ChatGPT excels in assisting with specific coding tasks or routines, rather than building complete applications from scratch.
Use ChatGPT to find and choose the right coding libraries for specific purposes, and engage in an interactive discussion to narrow your options.
Be cautious about who owns AI-generated code and always verify the code’s reliability. Don’t blindly trust the generated output.
Treat interactions with ChatGPT as a conversation. Refine your questions based on the AI’s responses to get closer to the desired output.
Now, let’s explore ChatGPT in considerably more depth.
What types of coding can ChatGPT do well?
There are two important facts about ChatGPT and coding. First, the AI can write useful code.
The second is that the AI can get completely lost, fall into a rabbit hole, chase its tail, and produce unusable garbage.
Also: The best AI for coding in 2025 (and what not to use)
I found this fact out the hard way. After I finished the WordPress plugin for my wife, I decided to see how far ChatGPT could go.
I wrote a very careful prompt for a Mac application, including detailed descriptions of user interface elements, interactions, what would be provided in settings, how they would work, and more. Then, I fed the prompt to ChatGPT.
ChatGPT responded with a flood of text and code. Then, it stopped mid-code. When I asked the AI to continue, it vomited even more code and text. I requested continue after continue, and it dumped out more and more code. However, none of the output was usable. The AI didn’t identify where the code should go, how to construct the project, and — when I looked carefully at the code produced — it left out major operations I requested, leaving in simple text descriptions stating “program logic goes here”.
Also: How ChatGPT scanned 170k lines of code in seconds and saved me hours of work
After repeated tests, it became clear that if you ask ChatGPT to deliver a complete application, the tool will fail. A corollary to this observation is that if you know nothing about coding and want ChatGPT to build something, it will fail.
Where ChatGPT succeeds — and does so very well — is in helping someone who already knows how to code to build specific routines and get tasks done. Don’t ask for an app that runs on the menu bar. But if you ask ChatGPT for a routine to put a menu on the menu bar, and paste that into your project, the tool will do quite well.
Also, remember that, while ChatGPT appears to have a tremendous amount of domain-specific knowledge (and often does), it lacks wisdom. As such, the tool may be able to write code, but it won’t be able to write code containing the nuances for specific or complex problems that require deep experience.
Also: How to use ChatGPT to create an app
Use ChatGPT to demo techniques, write small algorithms, and produce subroutines. You can even get ChatGPT to help you break down a bigger project into chunks, and then you can ask it to help you code those chunks.
So, with that in mind, let’s look at some specific steps for how ChatGPT can help you write code.
How to use ChatGPT to write code
This first step is to decide what you will ask of ChatGPT — but not yet ask it anything. Decide what you want your function or routine to do, or what you want to learn to incorporate into your code. Decide on the parameters you’ll pass into your code and what you want to get out. And then look at how you’re going to describe it.
Also: How to write better ChatGPT prompts
Imagine you’re paying a human programmer to do this task. Are you giving that person enough information to be able to work on your assignment? Or are you too vague and the person you’re paying is more likely to ask questions or turn in something entirely unrelated to what you want?
Here’s an example. Let’s say I want to be able to summarize any web page. I want to feed the AI this article and get back a well-considered and appropriate summary. As my input, I’ll specify a web page URL. As my output, it’s a block of text with a summary.
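In PHP (the language I'll eventually use for this project), that contract might look like the hypothetical stub below. The function name and signature are placeholders of my own; the point is simply to pin down the inputs and outputs before asking the AI for anything.

<?php
// Hypothetical stub capturing the plan: a URL goes in, a plain-text summary comes out.
// The body stays empty on purpose; defining this interface is the prep work.
function summarize_article(string $url): string
{
    // ... summarization logic will go here ...
    return '';
}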
Continuing with the example above, an old school way of extracting web page data was to find the text between HTML paragraph tags.
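For what it's worth, that old-school approach is only a few lines of PHP. Here's a minimal sketch (my own illustration, not part of the article's project); a real page would call for a proper HTML parser such as DOMDocument:

<?php
// Old-school extraction: pull out whatever sits between <p>...</p> tags.
function extract_paragraphs(string $html): string
{
    preg_match_all('/<p\b[^>]*>(.*?)<\/p>/si', $html, $matches);
    $paragraphs = array_map('strip_tags', $matches[1]);   // drop any nested tags
    return implode("\n\n", array_map('trim', $paragraphs));
}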
However, with the rise of AI tools, you can use an AI library to do an intelligent extract and summary. One of the places ChatGPT excels (and it’s also an area you can easily verify to avoid its authoritative-but-wrong behavior pattern) is finding libraries and resources.
Also: The best free AI courses
OpenAI (the maker of ChatGPT) sells API access to its LLMs to do exactly what we want. But in the case of this example, let’s assume we don’t want to pay transaction fees.
So, let’s look at interacting with ChatGPT to figure out how to use such a tool, for free, with a project that runs in PHP.
I started with a prompt to elicit information about what libraries would provide the desired functionality. A library (for those reading along who aren’t programmers) is a body of code a programmer can access that does a lot of the heavy lifting for a specific purpose. A big part of modern programming is finding and choosing the right libraries, so this is a good starting point.
In this case, I’m looking at blocks of code written by other people that will summarize text. Here’s my first prompt:
Describe ten different open source AI libraries (and the languages they work with) that I can use to generate a summary of the main core contents of any web page, ignoring any ads or embedded materials.
This prompt gave me exactly what I wanted, including a mention of OpenAI’s offerings. I think OpenAI would do great here, but for this hypothetical project, I don’t want to budget for API fees. So, I’ll narrow down the question:
Are any of these free?
ChatGPT hedged its bets with its answer: “Yes, all ten of these AI libraries are open source and free to use. However, some of them may have usage limits or require payment for access to additional features or resources.” So, based on that response, I clarified my query:
Which of these libraries have no usage limits and don’t require any additional payment or licensing?
Notice how this is very much a conversation. I don’t have to re-ask the original question. I’m just drilling down as I might if I had an expert next to me. This time, ChatGPT gave me eight library choices, but none mentioned the PHP language I was planning to use to code. So, here’s the next prompt:
Of those 8 libraries, can I use any with PHP?
It returned three libraries, but I wasn’t sure about what each did. So, another question:
What’s the difference between Sumy, Gensim, and NLTK?
I still wasn’t sure, so I clarified my use plan and then asked:
If I want to create summaries of web page news articles, which library would work better?
The answer was clear and promising: “Sumy is specifically designed for text summarization, which is the task of creating a summary that captures the most important information from a piece of text.” So, it was time to see what was involved in using Sumy with PHP. I asked my last question for this part of the project:
Can you explain how to use Sumy from PHP?
Feel free to play along on your computer and paste these prompts into ChatGPT. Notice that, in step one, I decided what program module to get help on. Then, in this step, I had a conversation with ChatGPT to decide what library to use and how to integrate it into my project.
Also: The best AI chatbots
That approach might not seem like programming, but I assure you it is. Programming isn’t just blasting lines of code onto a page. Programming is figuring out how to integrate all the various resources and systems, and how to talk to all the components of your solution. Here, ChatGPT helped me do that integration analysis.
By the way, I was curious whether Google’s Gemini AI could help similarly. Gemini did give some extra insights into the planning aspect of programming over ChatGPT’s responses.
So, don’t hesitate to use multiple tools to triangulate your answers. Here’s that story: Gemini vs. ChatGPT: Can Gemini help you code? Since I wrote that article, Google added some coding capabilities to Gemini, but they’re not all that great. You can read about that capability here: I tested Google Gemini’s new coding skills. It didn’t go well. And even more recently, I dug into Gemini Advanced. The AI is still not passing many tests.
Also: How I test an AI chatbot’s coding ability – and you can too
Coding is next.
OK, let’s pause here. This article is entitled “How to use ChatGPT to write code.” And it will. But what we’re really doing is asking ChatGPT to write example code.
Also: The rise and fall in programming languages’ popularity since 2016 – and what it tells us
Let’s be clear. Unless you’re writing a small function (like the line sorter/randomizer ChatGPT wrote for my wife), ChatGPT can’t write your final code. First, you’ll have to maintain it. ChatGPT is terrible at modifying already-written code. Terrible, as in, it doesn’t do it. So, to get fresh code, you have to ask ChatGPT to generate something new. As I found previously, even if your prompt is virtually identical, ChatGPT may unexpectedly change what it gives you.
So, bottom line: ChatGPT can’t maintain your code, or even tweak it.
That limitation means you have to do the legwork yourself. As we know, the first draft of a piece of code is rarely the final code. So, even if you expect ChatGPT to generate final code, it would be a starting point, and one where you need to take it to completion, integrate it into your bigger project, test it, refine it, debug it, and so on.
But that issue doesn’t mean the example code is worthless — far from it. Let’s look at a prompt I wrote based on the project I described earlier. Here’s the first part:
Write a PHP function called summarize_article.
As input, summarize_article will be passed a URL to an article on a news-related site like ZDNET.com or Reuters.com.
I’m telling ChatGPT the programming language it should use. I’m also telling the AI the input and providing two sites as samples to help ChatGPT understand the article style. Honestly, I’m not sure ChatGPT didn’t ignore that bit of guidance. Next, I’ll tell it how to do the bulk of the work:
Inside summarize_article, retrieve the contents of the web page at the URL provided. Using the library Sumy from within PHP and any other libraries necessary, extract the main body of the article, ignoring any ads or embedded materials, and summarize it to approximately 50 words. Make sure the summary consists of complete sentences. You can go above the 50 words to finish the last sentence, if necessary.
This approach is very similar to how I’d instruct an employee. I’d want that person to know that they weren’t only restricted to Sumy. If they needed another tool, I wanted them to use it.
Also: IBM will train you in AI fundamentals for free, and give you a skill credential – in 10 hours
I also specified an approximate number of words to create bounds for what I wanted as a summary. A later version of the routine might take that number as a parameter. I then ended by saying what I wanted as a result:
Once processing is complete, code summarize_article so it returns the summary in plain text.
The resulting code is pretty simple. ChatGPT called on another library (Goose) to retrieve the article contents. It then passed that summary to Sumy with a 50-word limit and returned the result. But once the basics are written, it’s a mere matter of programming to go back in and add tweaks, customize what’s passed to the two libraries, and deliver the results:
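The screenshot of that code isn't reproduced here, but its general shape looked something like the sketch below. To be clear, this is my hedged reconstruction, not ChatGPT's exact output: ChatGPT's version used Goose to pull down the article body, while this sketch simply strips tags to stay self-contained, and it shells out to Python because Sumy is a Python library (so it assumes python3, the sumy package, and NLTK's punkt data are installed).

<?php
// Rough reconstruction of the generated routine (an illustrative sketch, not the
// literal ChatGPT output). URL in, plain-text summary out.
function summarize_article(string $url): string
{
    // Fetch the raw page. ChatGPT's version used the Goose library for cleaner
    // article extraction; stripping tags keeps this sketch dependency-free.
    $html = file_get_contents($url);
    if ($html === false) {
        return '';
    }
    $text = trim(preg_replace('/\s+/', ' ', strip_tags($html)));

    // Sumy is a Python library, so bridge to it from PHP. Three sentences stands
    // in for the roughly-50-word target from the prompt.
    $python = <<<'PY'
import sys
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.lsa import LsaSummarizer

parser = PlaintextParser.from_string(sys.stdin.read(), Tokenizer("english"))
summary = LsaSummarizer()(parser.document, 3)
print(" ".join(str(sentence) for sentence in summary))
PY;

    $process = proc_open(
        'python3 -c ' . escapeshellarg($python),
        [0 => ['pipe', 'r'], 1 => ['pipe', 'w'], 2 => ['pipe', 'w']],
        $pipes
    );
    if (!is_resource($process)) {
        return '';
    }
    fwrite($pipes[0], $text);
    fclose($pipes[0]);
    $summary = stream_get_contents($pipes[1]);
    fclose($pipes[1]);
    fclose($pipes[2]);
    proc_close($process);

    return trim($summary);
}

A production version would add error handling, take the word target as a parameter (as mentioned above), and confirm the Python dependencies are present before calling out to them.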
One interesting note: when I originally tried this test in early 2023, ChatGPT created a sample call to the routine it wrote, using a URL dated after 2021. At the time, in March 2023, ChatGPT’s dataset only went up to 2021. Now, ChatGPT’s knowledge base extends to the end of June 2024, and the tool can search the web. But my point is that ChatGPT made up a sample link it couldn’t possibly have known about.
I checked that URL against Reuters’ site and the Wayback Machine, and it doesn’t exist. Never assume ChatGPT is accurate. Always double-check everything it gives you.
I showed you a few ways that ChatGPT makes mistakes or hallucinates. All programmers make mistakes, even the AI ones.
But you can do several things to help refine your code, debug problems, and anticipate errors that might crop up. My favorite new AI-enabled trick is to feed code to a different ChatGPT session (or a different chatbot entirely) and ask, “What’s wrong with this code?”
Inevitably, something comes up. The AI sometimes identifies edge cases or error checks that should be added to the code, or situations that might break if a confluence of unlikely events should occur. I’ve then coded around those error conditions, making code more robust.
Does ChatGPT replace programmers?
Not now — or, at least — not yet. ChatGPT programs at the level of a talented first-year programming student, but it’s lazy (like that first-year student). The tool might reduce the need for entry-level programmers.
However, at its current level, I think AI will make it easier for entry-level programmers (and even more experienced ones) to write code and look up information. It’s a time-saver, but the AI can’t do many programming tasks by itself — at least for now. In 2030? Who knows.
How do I get coding answers in ChatGPT?
Just ask it. You saw above how I used an interactive discussion dialog to narrow the answers. Don’t expect one question to do all your work magically. But use the AI as a helper and resource, and it will give you a lot of helpful information.
Also: Want a programming job? Learn these three languages
Of course, test that information — because, as John Schulman, a co-founder of OpenAI, said: “Our biggest concern was around factuality, because the model likes to fabricate things.”
Is the code generated by ChatGPT guaranteed to be error-free?
Hell, no! But you also can’t trust the code human programmers write. I certainly don’t trust any code I write. Code comes out of the code-making process incredibly flawed. There are always bugs. Before you ship, you need to test, test, and test again. Then, alpha test with a few chosen victims. Then beta test with your wider user community.
Even after all that work, there will be bugs. Just because an AI plays at this coding thing doesn’t mean it can do bug-free code. Do not trust. Always verify. And you still won’t have fully bug-free output. Such is the nature of the universe.
What do I do if the code I get back is wrong?
I recommend considering the chatbot as a slightly uncooperative student or subordinate employee. What would you do if that person gave you back code that didn’t work? You’d send them back out with instructions to do it again and get it right. That’s about what you should do with ChatGPT (I’ve tested this with ChatGPT 4 and 4o). When things don’t work, I say: “That didn’t work. Please try again.”
Also: Google’s AI podcast tool transforms your text into stunningly lifelike audio – for free
The AI does just that. It often gives me back different variations on the same problem. I’ve repeated this process four or five times on occasion until I’ve gotten a working answer. Sometimes, though, the AI runs out of ideas. Other times, the try-again answer is completely (and I do mean completely) unrelated to what you’ve requested.
When it becomes apparent you’ve reached the edge of the AI’s ability to remain sane on the problem, you’ll have to buckle up and code it yourself. But 9 times out of 10, especially with basic coding or interface-writing challenges, the AI does its job successfully.
How detailed should my description of a programming issue be when asking ChatGPT?
Detailed. The more you leave open for interpretation, the more the AI will go its own way. When I give prompts to ChatGPT to help me while programming, I imagine I’m assigning a programming task to one of my students or someone who works for me.
Also: 6 ways to write better ChatGPT prompts – and get the results you want faster
Did I give that person enough details to create a first draft or will that person have to ask me additional questions? Worse, will that person have so little guidance that they’ll go off in entirely the wrong direction? Don’t be lazy here. ChatGPT can save you hours or even days of programming (it has for me), but only if you give it useful instructions to begin with.
If I use ChatGPT to write my code, who owns it?
As it turns out, there’s not a lot of case law yet to answer this question. The US, Canada, and the UK require a copyrighted work to have been created by human hands, so code generated by an AI tool may not be copyrightable. There are also questions of liability based on where the training code came from and how the resulting code is used.
ZDNET did a deep dive on this topic, spoke to legal experts, and produced three articles. If you’re concerned about this issue (and if you’re using AI to help with code, you should be), I recommend you read them:
What programming languages does ChatGPT know?
The answer is most languages. I tested common modern languages, like PHP, Python, Java, Kotlin, Swift, C#, and more. But then I had the tool write code in obscure dark-age languages like COBOL, Fortran, Forth, LISP, ALGOL, RPG (the report program generator, not the role-playing game), and even IBM/360 assembly language.
As the icing on the cake, I gave it this prompt:
Write a sequence that displays ‘Hello, world’ in ascii blinking lights on the front panel of a PDP 8/e
The PDP 8/e was my first computer, and ChatGPT gave me instructions to toggle in a program using front-panel switches. I was impressed, gleeful, and ever so slightly afraid.
Can ChatGPT help me with data analysis and visualization tasks?
Yes, and a lot of it can be done without code. Check out my entire article on this topic: The moment I realized ChatGPT Plus was a game-changer for my business.
I also did a piece on generated charts and tables: How to use ChatGPT to make charts and tables.
But here’s where it gets fun. In the article above, I asked ChatGPT Plus, “Make a bar chart of the top five cities in the world by population,” and it did. But do you want code? Try asking:
Make a bar chart of the top five cities in the world by population in Swift. Pull the population data from online. Be sure to include any necessary libraries.
By adding “in Swift” you’re specifying the programming language. By specifying where the data comes from and forcing ChatGPT Plus to include libraries, the AI brings in the other resources the program needs. That’s why, fundamentally, programming with an AI’s help requires you to know things about programming. But if you do, it’s cool, because three sentences can get you a chunk of annotated code. Nice, huh?
How does ChatGPT handle differences between dialects and implementations?
We don’t have exact details on this issue from OpenAI, but our understanding of how ChatGPT is trained can shed some light on this question. Remember that dialects and implementations of programming languages (and their little quirks) change much more rapidly than the full language. This reality makes it harder for ChatGPT (and many programming professionals) to keep up.
Also: How I used ChatGPT to write a custom JavaScript bookmarklet
As such, I’d work off these two assumptions:
The more recent the dialect change, the less likely ChatGPT is to know about it, and
The more popular a language, the more training data it’s learned from and, therefore, the more accurate it will be.
What’s the bottom line? ChatGPT can be a helpful tool. Just don’t ascribe superpowers to it. Yet.
You can follow my day-to-day project updates on social media. Be sure to follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.
Whether you realize it or not, artificial intelligence is everywhere. It’s behind the chatbots you talk to online, the playlists you stream, and the personalized ads that pop up as you scroll. And now it’s taking on a more public persona. Think of Meta AI, which is now built into apps like Facebook, Messenger, and WhatsApp; or Google’s Gemini, working in the background across the company’s platforms; or Apple Intelligence, rolling out across iPhones now.
AI has a long history, going back to a conference at Dartmouth in 1956 that first discussed artificial intelligence as a concept. Milestones along the way include ELIZA, essentially the first chatbot, developed in 1964 by MIT computer scientist Joseph Weizenbaum, and, jumping ahead 40 years, Google’s autocomplete feature first appearing in 2004.
Then came 2022 and ChatGPT’s rise to fame. Generative AI developments and product launches have accelerated rapidly since then, including Google Bard (now Gemini), Microsoft Copilot, IBM watsonx.ai, and Meta’s open-source Llama models.
Let’s break down what generative AI is, how it differs from “regular” artificial intelligence, and whether gen AI can live up to expectations.
Generative AI in a nutshell
At its core, generative AI refers to artificial intelligence systems designed to produce new content based on patterns and data they have learned. Instead of just crunching numbers or predicting trends, these systems generate creative outputs such as text, images, music, videos, and software code.
Some of the most popular generative AI tools on the market include ChatGPT, DALL-E, Midjourney, and Adobe Firefly. Chief among its abilities, ChatGPT can create human-like conversations or essays based on a few simple prompts. DALL-E and Midjourney create detailed artwork from a short description, while Adobe Firefly focuses on image editing and design.
AI that isn’t generative
Not all AI is generative. While gen AI focuses on creating new content, traditional AI excels at analyzing data and making predictions. This includes technologies like image recognition and predictive text. It’s also used for novel solutions in:
Science
Medical diagnosis
Weather forecasting
Fraud detection
Financial analysis for forecasting and reporting
The AI that beat human grand champions at chess and the board game Go was not generative AI.
These systems may not be as flashy as gen AI, but classical artificial intelligence is a huge part of the technology we rely on every day.
How does gen AI work?
Behind the magic of generative AI are large language models and advanced machine-learning techniques. These systems are trained on vast amounts of data, such as entire libraries of books, millions of images, years of recorded music, and data scraped from the internet.
AI developers, from tech giants to startups, are aware that AI is only as good as the data you feed it. If it’s fed low-quality data, the AI can produce biased results. It’s something even the biggest players in the field, like Google, haven’t been immune to.
The AI learns patterns, relationships, and structures within this data during training. Then, when prompted, it applies that knowledge to generate something new. For example, if you ask a gen AI tool to write a poem about the ocean, it’s not just pulling pre-written verses from a database. Instead, it uses what it learned about poetry, oceans, and the structure of language to create an entirely original piece.
It’s impressive, but it’s not perfect. Sometimes the results can feel a little off. Maybe the AI misunderstands your request, or it gets too creative in a way you didn’t expect. It can confidently serve up completely false information, and it’s up to you to verify it. Those quirks, often called hallucinations, are part of what makes generative AI both fascinating and frustrating.
Generative AI’s capabilities are growing. It can now understand multiple types of data by combining technologies like machine learning, natural language processing, and computer vision. The result is called multimodal AI, which can integrate some combination of text, images, video, and speech within a single framework, offering more contextually relevant and accurate responses. ChatGPT’s Advanced Voice Mode is one example, as is Google’s Project Astra.
Challenges with generative AI
There’s no shortage of generative AI tools, each with its own unique flair. These tools have sparked creativity, but they’ve also raised plenty of questions beyond bias and hallucinations, such as: Who owns the rights to AI-generated content? And what material is fair game, or off limits, for AI companies to use to train their language models? See, for example, The New York Times’ lawsuit against OpenAI and Microsoft.
Other concerns, and they’re no small matters, involve privacy, accountability in AI, AI-generated deepfakes, and job displacement.
“Writing, animation, photography, illustration, graphic design: AI tools can now handle all of that with surprising ease. But that doesn’t mean these roles will disappear. It may simply mean that creatives will need to upskill and use these tools to amplify their own work,” Fang Liu, a professor at the University of Notre Dame and co-editor-in-chief of ACM Transactions on Probabilistic Machine Learning, told CNET.
“It also offers a way in for people who may lack the skill, like someone with a clear vision who can’t draw but can describe it through a prompt. So no, I don’t think it will disrupt the creative industry. Hopefully, it will be a co-creation or an augmentation, not a replacement.”
Another issue is the environmental impact, because training large AI models uses a lot of energy, which leads to large carbon footprints. The rapid rise of gen AI over the past few years has accelerated concerns about the risks of AI in general. Governments are ramping up AI regulations to ensure responsible and ethical development, most notably the European Union’s AI Act.
Generative AI reception
Many people have interacted with chatbots in customer service or used virtual assistants like Siri, Alexa, and Google Assistant, which are now on the cusp of becoming gen AI power tools. All of that, along with apps for ChatGPT, Claude, and other new tools, is putting AI in your hands. And the public reaction to generative AI has been mixed. Many users enjoy the convenience and creativity it offers, especially for things like writing help, image creation, task support, and productivity.
Meanwhile, in McKinsey’s 2024 global AI survey, 65% of respondents said their organizations regularly use generative AI, nearly double the figure reported just 10 months earlier. Industries like healthcare and finance are using gen AI to streamline business operations and automate mundane tasks.
As mentioned, there are obvious concerns around ethics, transparency, job losses, and the potential for misuse of personal data. Those are the main criticisms behind the resistance to embracing generative AI.
And people who use generative AI tools will also find that the results aren’t yet good enough for prime time. Despite technological advances, most people can recognize whether content has been created using gen AI, whether it’s articles, images, or music.
AI has hijacked certain phrases I’ve always used, so I now have to autocorrect my writing often because it can come across as AI. Many articles written by AI contain phrases like “in the age of,” or everything is a “testament to” or a “tapestry of.” AI lacks the emotion and experience that comes from, well, being a living, breathing human. As one artist explained on Quora, “what AI does is not the same as art that evolves from a thought in a human brain” and “is not created from the passion found in a human heart.”
Generative AI in everyday life
Generative AI isn’t just for techies or creative types. Once you get the knack of giving it prompts, it has the potential to do much of the legwork for you across a variety of everyday tasks.
Say you’re planning a trip. Instead of scrolling through pages of search results, you ask a chatbot to plan your itinerary. Within seconds, you have a detailed plan tailored to your preferences. (That’s the ideal. Please, always double-check its recommendations.)
A small-business owner who needs a marketing campaign but doesn’t have a design team can use generative AI to create eye-catching visuals and even ask it to suggest ad copy.
Gen AI is here to stay
There hasn’t been a technological advance that caused such a boom since the internet and, later, the iPhone. Despite its challenges, generative AI is undeniably transformative. It’s making creativity more accessible, helping businesses streamline workflows, and even inspiring entirely new ways of thinking and solving problems.
But perhaps what’s most exciting is its potential, and we’re only scratching the surface of what these tools can do.
Frequently asked questions
What is an example of generative AI?
ChatGPT is probably the most popular example of generative AI. You give it a prompt, and it can generate text and images, write code, answer questions, summarize text, draft emails, and much more.
What is the difference between AI and generative AI?
Generative AI creates new content such as text, images, or music, while traditional AI analyzes data, recognizes patterns or images, and makes predictions (for example, in medicine, science, and finance).
If you search “ChatGPT” in your browser, you’ll likely stumble onto websites that appear to be powered by OpenAI, but aren’t. One such site, chat.chatbotapp.ai, offers access to “GPT-3.5” for free and uses familiar branding.
But here’s the thing: it isn’t run by OpenAI. And, frankly, why use a potentially fake GPT-3.5 when you can use GPT-4o for free on the actual ChatGPT site?
In the summer of 2023, Ilya Sutskever, a co-founder and the chief scientist of OpenAI, was meeting with a group of new researchers at the company. By all traditional metrics, Sutskever should have felt invincible: He was the brain behind the large language models that helped build ChatGPT, then the fastest-growing app in history; his company’s valuation had skyrocketed; and OpenAI was the unrivaled leader of the industry believed to power the future of Silicon Valley. But the chief scientist seemed to be at war with himself.
Sutskever had long believed that artificial general intelligence, or AGI, was inevitable—now, as things accelerated in the generative-AI industry, he believed AGI’s arrival was imminent, according to Geoff Hinton, an AI pioneer who was his Ph.D. adviser and mentor, and another person familiar with Sutskever’s thinking. (Many of the sources in this piece requested anonymity in order to speak freely about OpenAI without fear of reprisal.) To people around him, Sutskever seemed consumed by thoughts of this impending civilizational transformation. What would the world look like when a supreme AGI emerged and surpassed humanity? And what responsibility did OpenAI have to ensure an end state of extraordinary prosperity, not extraordinary suffering?
By then, Sutskever, who had previously dedicated most of his time to advancing AI capabilities, had started to focus half of his time on AI safety. He appeared to people around him as both boomer and doomer: more excited and afraid than ever before of what was to come. That day, during the meeting with the new researchers, he laid out a plan.
“Once we all get into the bunker—” he began, according to a researcher who was present.
“I’m sorry,” the researcher interrupted, “the bunker?”
“We’re definitely going to build a bunker before we release AGI,” Sutskever replied. Such a powerful technology would surely become an object of intense desire for governments globally. The core scientists working on the technology would need to be protected. “Of course,” he added, “it’s going to be optional whether you want to get into the bunker.”
Two other sources I spoke with confirmed that Sutskever commonly mentioned such a bunker. “There is a group of people—Ilya being one of them—who believe that building AGI will bring about a rapture,” the researcher told me. “Literally, a rapture.” (Sutskever declined to comment.)
Sutskever’s fears about an all-powerful AI may seem extreme, but they are not altogether uncommon, nor were they particularly out of step with OpenAI’s general posture at the time. In May 2023, the company’s CEO, Sam Altman, co-signed an open letter describing the technology as a potential extinction risk—a narrative that has arguably helped OpenAI center itself and steer regulatory conversations. Yet the concerns about a coming apocalypse would also have to be balanced against OpenAI’s growing business: ChatGPT was a hit, and Altman wanted more.
When OpenAI was founded, the idea was to develop AGI for the benefit of humanity. To that end, the co-founders—who included Altman and Elon Musk—set the organization up as a nonprofit and pledged to share research with other institutions. Democratic participation in the technology’s development was a key principle, they agreed, hence the company’s name. But by the time I started covering the company in 2019, these ideals were eroding. OpenAI’s executives had realized that the path they wanted to take would demand extraordinary amounts of money. Both Musk and Altman tried to take over as CEO. Altman won out. Musk left the organization in early 2018 and took his money with him. To plug the hole, Altman reformulated OpenAI’s legal structure, creating a new “capped-profit” arm within the nonprofit to raise more capital.
Since then, I’ve tracked OpenAI’s evolution through interviews with more than 90 current and former employees, including executives and contractors. The company declined my repeated interview requests and questions over the course of working on my book about it, which this story is adapted from; it did not reply when I reached out one more time before the article was published. (OpenAI also has a corporate partnership with The Atlantic.)
OpenAI’s dueling cultures—the ambition to safely develop AGI, and the desire to grow a massive user base through new product launches—would explode toward the end of 2023. Gravely concerned about the direction Altman was taking the company, Sutskever would approach his fellow board of directors, along with his colleague Mira Murati, then OpenAI’s chief technology officer; the board would subsequently conclude the need to push the CEO out. What happened next—with Altman’s ouster and then reinstatement—rocked the tech industry. Yet since then, OpenAI and Sam Altman have become more central to world affairs. Last week, the company unveiled an “OpenAI for Countries” initiative that would allow OpenAI to play a key role in developing AI infrastructure outside of the United States. And Altman has become an ally to the Trump administration, appearing, for example, at an event with Saudi officials this week and onstage with the president in January to announce a $500 billion AI-computing-infrastructure project.
Altman’s brief ouster—and his ability to return and consolidate power—is now crucial history to understand the company’s position at this pivotal moment for the future of AI development. Details have been missing from previous reporting on this incident, including information that sheds light on Sutskever and Murati’s thinking and the response from the rank and file. Here, they are presented for the first time, according to accounts from more than a dozen people who were either directly involved or close to the people directly involved, as well as their contemporaneous notes, plus screenshots of Slack messages, emails, audio recordings, and other corroborating evidence.
The altruistic OpenAI is gone, if it ever existed. What future is the company building now?
Before ChatGPT, sources told me, Altman seemed generally energized. Now he often appeared exhausted. Propelled into megastardom, he was dealing with intensified scrutiny and an overwhelming travel schedule. Meanwhile, Google, Meta, Anthropic, Perplexity, and many others were all developing their own generative-AI products to compete with OpenAI’s chatbot.
Many of Altman’s closest executives had long observed a particular pattern in his behavior: If two teams disagreed, he often agreed in private with each of their perspectives, which created confusion and bred mistrust among colleagues. Now Altman was also frequently bad-mouthing staffers behind their backs while pushing them to deploy products faster and faster. Team leads mirroring his behavior began to pit staff against one another. Sources told me that Greg Brockman, another of OpenAI’s co-founders and its president, added to the problems when he popped into projects and derailed long-standing plans with last-minute changes.
The environment within OpenAI was changing. Previously, Sutskever had tried to unite workers behind a common cause. Among employees, he had been known as a deep thinker and even something of a mystic, regularly speaking in spiritual terms. He wore shirts with animals on them to the office and painted them as well—a cuddly cat, cuddly alpacas, a cuddly fire-breathing dragon. One of his amateur paintings hung in the office, a trio of flowers blossoming in the shape of OpenAI’s logo, a symbol of what he always urged employees to build: “A plurality of humanity-loving AGIs.”
But by the middle of 2023—around the time he began speaking more regularly about the idea of a bunker—Sutskever was no longer just preoccupied by the possible cataclysmic shifts of AGI and superintelligence, according to sources familiar with his thinking. He was consumed by another anxiety: the erosion of his faith that OpenAI could even keep up its technical advancements to reach AGI, or bear that responsibility with Altman as its leader. Sutskever felt Altman’s pattern of behavior was undermining the two pillars of OpenAI’s mission, the sources said: It was slowing down research progress and eroding any chance at making sound AI-safety decisions.
Meanwhile, Murati was trying to manage the mess. She had always played translator and bridge to Altman. If he had adjustments to the company’s strategic direction, she was the implementer. If a team needed to push back against his decisions, she was their champion. When people grew frustrated with their inability to get a straight answer out of Altman, they sought her help. “She was the one getting stuff done,” a former colleague of hers told me. (Murati declined to comment.)
During the development of GPT‑4, Altman and Brockman’s dynamic had nearly led key people to quit, sources told me. Altman was also seemingly trying to circumvent safety processes for expediency. At one point, sources close to the situation said, he had told Murati that OpenAI’s legal team had cleared the latest model, GPT-4 Turbo, to skip review by the company’s Deployment Safety Board, or DSB—a committee of Microsoft and OpenAI representatives who evaluated whether OpenAI’s most powerful models were ready for release. But when Murati checked in with Jason Kwon, who oversaw the legal team, Kwon had no idea how Altman had gotten that impression.
In the summer, Murati attempted to give Altman detailed feedback on these issues, according to multiple sources. It didn’t work. The CEO iced her out, and it took weeks to thaw the relationship.
By fall, Sutskever and Murati both drew the same conclusion. They separately approached the three board members who were not OpenAI employees—Helen Toner, a director at Georgetown University’s Center for Security and Emerging Technology; the roboticist Tasha McCauley; and one of Quora’s co-founders and its CEO, Adam D’Angelo—and raised concerns about Altman’s leadership. “I don’t think Sam is the guy who should have the finger on the button for AGI,” Sutskever said in one such meeting, according to notes I reviewed. “I don’t feel comfortable about Sam leading us to AGI,” Murati said in another, according to sources familiar with the conversation.
That Sutskever and Murati both felt this way had a huge effect on Toner, McCauley, and D’Angelo. For close to a year, they, too, had been processing their own grave concerns about Altman, according to sources familiar with their thinking. Among their many doubts, the three directors had discovered through a series of chance encounters that he had not been forthcoming with them about a range of issues, from a breach in the DSB’s protocols to the legal structure of OpenAI Startup Fund, a dealmaking vehicle that was meant to be under the company but that instead Altman owned himself.
If two of Altman’s most senior deputies were sounding the alarm on his leadership, the board had a serious problem. Sutskever and Murati were not the first to raise these kinds of issues, either. In total, the three directors had heard similar feedback over the years from at least five other people within one to two levels of Altman, the sources said. By the end of October, Toner, McCauley, and D’Angelo began to meet nearly daily on video calls, agreeing that Sutskever’s and Murati’s feedback about Altman, and Sutskever’s suggestion to fire him, warranted serious deliberation.
As they did so, Sutskever sent them long dossiers of documents and screenshots that he and Murati had gathered in tandem with examples of Altman’s behaviors. The screenshots showed at least two more senior leaders noting Altman’s tendency to skirt around or ignore processes, whether they’d been instituted for AI-safety reasons or to smooth company operations. This included, the directors learned, Altman’s apparent attempt to skip DSB review for GPT-4 Turbo.
By Saturday, November 11, the independent directors had made their decision. As Sutskever suggested, they would remove Altman and install Murati as interim CEO. On November 17, 2023, at about noon Pacific time, Sutskever fired Altman on a Google Meet with the three independent board members. Sutskever then told Brockman on another Google Meet that Brockman would no longer be on the board but would retain his role at the company. A public announcement went out immediately.
For a brief moment, OpenAI’s future was an open question. It might have taken a path away from aggressive commercialization and Altman. But this is not what happened.
After what had seemed like a few hours of calm and stability, including Murati having a productive conversation with Microsoft—at the time OpenAI’s largest financial backer—she had suddenly called the board members with a new problem. Altman and Brockman were telling everyone that Altman’s removal had been a coup by Sutskever, she said.
It hadn’t helped that, during a company all-hands to address employee questions, Sutskever had been completely ineffectual with his communication.
“Was there a specific incident that led to this?” Murati had read aloud from a list of employee questions, according to a recording I obtained of the meeting.
“Many of the questions in the document will be about the details,” Sutskever responded. “What, when, how, who, exactly. I wish I could go into the details. But I can’t.”
“Are we worried about the hostile takeover via coercive influence of the existing board members?” Sutskever read from another employee later.
“Hostile takeover?” Sutskever repeated, a new edge in his voice. “The OpenAI nonprofit board has acted entirely in accordance to its objective. It is not a hostile takeover. Not at all. I disagree with this question.”
Shortly thereafter, the remaining board, including Sutskever, confronted enraged leadership over a video call. Kwon, the chief strategy officer, and Anna Makanju, the vice president of global affairs, were leading the charge in rejecting the board’s characterization of Altman’s behavior as “not consistently candid,” according to sources present at the meeting. They demanded evidence to support the board’s decision, which the members felt they couldn’t provide without outing Murati, according to sources familiar with their thinking.
In rapid succession that day, Brockman quit in protest, followed by three other senior researchers. Through the evening, employees only got angrier, fueled by compounding problems: among them, a lack of clarity from the board about their reasons for firing Altman; a potential loss of a tender offer, which had given some the option to sell what could amount to millions of dollars’ worth of their equity; and a growing fear that the instability at the company could lead to its unraveling, which would squander so much promise and hard work.
Faced with the possibility of OpenAI falling apart, Sutskever’s resolve immediately started to crack. OpenAI was his baby, his life; its dissolution would destroy him. He began to plead with his fellow board members to reconsider their position on Altman.
Meanwhile, Murati’s interim position was being challenged. The conflagration within the company was also spreading to a growing circle of investors. Murati now was unwilling to explicitly throw her weight behind the board’s decision to fire Altman. Though her feedback had helped instigate it, she had not participated herself in the deliberations.
By Monday morning, the board had lost. Murati and Sutskever flipped sides. Altman would come back; there was no other way to save OpenAI.
I was already working on a book about OpenAI at the time, and in the weeks that followed the board crisis, friends, family, and media would ask me dozens of times: What did all this mean, if anything? To me, the drama highlighted one of the most urgent questions of our generation: How do we govern artificial intelligence? With AI on track to rewire a great many other crucial functions in society, that question is really asking: How do we ensure that we’ll make our future better, not worse?
The events of November 2023 illustrated in the clearest terms just how much a power struggle among a tiny handful of Silicon Valley elites is currently shaping the future of this technology. And the scorecard of this centralized approach to AI development is deeply troubling. OpenAI today has become everything that it said it would not be. It has turned into a nonprofit in name only, aggressively commercializing products such as ChatGPT and seeking historic valuations. It has grown ever more secretive, not only cutting off access to its own research but shifting norms across the industry to no longer share meaningful technical details about AI models. In the pursuit of an amorphous vision of progress, its aggressive push on the limits of scale has rewritten the rules for a new era of AI development. Now every tech giant is racing to out-scale one another, spending sums so astronomical that even they have scrambled to redistribute and consolidate their resources. What was once unprecedented has become the norm.
As a result, these AI companies have never been richer. In March, OpenAI raised $40 billion, the largest private tech-funding round on record, and hit a $300 billion valuation. Anthropic is valued at more than $60 billion. Near the end of last year, the six largest tech giants together had seen their market caps increase by more than $8 trillion after ChatGPT. At the same time, more and more doubts have risen about the true economic value of generative AI, including a growing body of studies that have shown that the technology is not translating into productivity gains for most workers, while it’s also eroding their critical thinking.
In a November Bloomberg article reviewing the generative-AI industry, the staff writers Parmy Olson and Carolyn Silverman summarized it succinctly. The data, they wrote, “raises an uncomfortable prospect: that this supposedly revolutionary technology might never deliver on its promise of broad economic transformation, but instead just concentrate more wealth at the top.”
Meanwhile, it’s not just a lack of productivity gains that many in the rest of the world are facing. The exploding human and material costs are settling onto wide swaths of society, especially the most vulnerable, people I met around the world, whether workers and rural residents in the global North or impoverished communities in the global South, all suffering new degrees of precarity. Workers in Kenya earned abysmal wages to filter out violence and hate speech from OpenAI’s technologies, including ChatGPT. Artists are being replaced by the very AI models that were built from their work without their consent or compensation. The journalism industry is atrophying as generative-AI technologies spawn heightened volumes of misinformation. Before our eyes, we’re seeing an ancient story repeat itself: Like empires of old, the new empires of AI are amassing extraordinary riches across space and time at great expense to everyone else.
To quell the rising concerns about generative AI’s present-day performance, Altman has trumpeted the future benefits of AGI ever louder. In a September 2024 blog post, he declared that the “Intelligence Age,” characterized by “massive prosperity,” would soon be upon us. At this point, AGI is largely rhetorical—a fantastical, all-purpose excuse for OpenAI to continue pushing for ever more wealth and power. Under the guise of a civilizing mission, the empire of AI is accelerating its global expansion and entrenching its power.
As for Sutskever and Murati, both parted ways with OpenAI after what employees now call “The Blip,” joining a long string of leaders who have left the organization after clashing with Altman. Like many of the others who failed to reshape OpenAI, the two did what has become the next-most-popular option: They each set up their own shops, to compete for the future of this technology.
This essay has been adapted from Karen Hao’s forthcoming book, Empire of AI.