Inside the launch — and future — of ChatGPT

As winter descended on San Francisco in late 2022, OpenAI quietly pushed a new service dubbed ChatGPT live with a blog post and a single tweet from CEO Sam Altman. The team labeled it a “low-key research preview” — they had good reason to set expectations low.
“It couldn’t even do arithmetic,” says Liam Fedus, OpenAI’s head of post-training. It was also prone to hallucinating, or making things up, adds Christina Kim, a researcher on the mid-training team.
Ultimately, ChatGPT would become anything but low-key.
While the OpenAI researchers slept, users in Japan flooded ChatGPT’s servers, crashing the site only hours after launch. That was just the beginning.
“The dashboards at that time were just always red,” recalls Kim. The launch coincided with NeurIPS, the world’s premier AI conference, and soon ChatGPT was the only thing anyone there could talk about. ChatGPT’s error page — “ChatGPT is at capacity right now” — would become a familiar sight.
“We had the initial launch meeting in this small room, and it wasn’t like the world just lit on fire all of a sudden,” Fedus says during a recent interview from OpenAI’s headquarters. “We’re like, ‘Okay, cool. I guess it’s out there now.’ But it was the next day when we realized — oh, wait, this is big.”
Two years later, ChatGPT still hasn’t cracked advanced arithmetic or become factually reliable. It hasn’t mattered. The chatbot has evolved from a prototype to a $4 billion revenue engine with 300 million weekly active users. It has shaken the foundations of the tech industry, even as OpenAI loses money (and cofounders) hand over fist while competitors like Anthropic threaten its lead.
Whether used as praise or pejorative, “ChatGPT” has become almost synonymous with generative AI. Over a series of recent video calls, I sat down with Fedus, Kim, ChatGPT head of product Nick Turley, and ChatGPT engineering lead Sulman Choudhry to talk about ChatGPT’s origins and where it’s going next.
A “weird” name and a scrappy start
ChatGPT was effectively born in December 2021 with an OpenAI project dubbed WebGPT: an AI tool that could search the internet and write answers. The team took inspiration from WebGPT’s conversational interface and began plugging a similar interface into GPT-3.5, a successor to the GPT-3 text model released in 2020. They gave it the clunky name “Chat with GPT-3.5” until, in what Turley recalls as a split-second decision, they simplified it to ChatGPT.
The name could have been the even more straightforward “Chat,” and in retrospect, he thinks perhaps it should have been. “The entire world got used to this odd, weird name, we’re probably stuck with it. But obviously, knowing what I know now, I wish we picked a slightly easier to pronounce name,” he says. (It was recently revealed that OpenAI purchased the domain chat.com for more than $10 million of cash and stock in mid-2023.)
As the team discovered the model’s obvious limitations, they debated whether to narrow its focus by launching a tool for help with meetings, writing, or coding. But OpenAI cofounder John Schulman (who has since left for Anthropic) advocated for keeping the focus broad.
The team describes it as a risky bet at the time; chatbots were viewed as an unremarkable backwater of machine learning, they thought, with no successful precedents. Adding to their concerns, Facebook’s Galactica AI bot had just spectacularly flamed out and been pulled offline after generating false research.
The team grappled with timing. GPT-4 was already in development with advanced features like Code Interpreter and web browsing, so it would make sense to wait to release ChatGPT atop the more capable model. Kim and Fedus also recall people wanting to wait and launch something more polished, especially after seeing other companies’ undercooked bots fail.
Despite early concerns about chatbots being a dead end, The New York Times has reported that other team members worried competitors would beat OpenAI to market with a fresh wave of bots. The deciding vote was Schulman, Fedus and Kim say. He pushed for an early release, alongside Altman, both believing it was important to get AI into peoples’ hands quickly.
OpenAI had demoed a chatbot at Microsoft Build earlier that year and generated virtually no buzz. On top of that, many of ChatGPT’s early users didn’t seem to be actually using it that much. The team shared their prototype with about 50 friends and family members. Turley “personally emailed every single one of them” every day to check in. While Fedus couldn’t recall exact figures, he recalls that about 10 percent of that early test group used it every day.
Later, the team would see this as an indication they’d created something with potential staying power.
“We had two friends who basically were on it from the start of their work day — and they were founders,” Kim recalls. “They were on it basically for 12 to 16 hours a day, just talking to it all day.” With just two weeks before the end of November, Schulman made the final call: OpenAI would launch ChatGPT on the last day of that month.
The team canceled their Thanksgiving plans and began a two-week sprint to public release. Much of the system was built at this point, Kim says, but its security vulnerabilities were untested. So they focused heavily on red teaming, or stress testing the system for potential safety problems.
“If I had known it was going to be a big deal, I would certainly not want to ship it right before a winter holiday week before we were all going to go home,” Turley says. “I remember working very hard, but I also remember thinking, ‘Okay, let’s get this thing out, and then we’ll come back after the holiday to look at the learnings, to see what people want out of an AI assistant.’”
In an internal Slack poll, OpenAI employees guessed how many users they would get. Most predictions ranged from a mere 10,000 to 50,000. When someone suggested it might reach a million users, others jumped in to say that was wildly optimistic.
On launch day, they realized they’d all been incredibly wrong.
After users in Japan crashed the servers, with red dashboards and error messages abounding, the team anxiously picked up the pieces and refreshed Twitter to gauge public reaction, Kim says. They believed the reaction to ChatGPT could only go one of two ways: total indifference or active contempt. They worried people might discover problematic ways to use it (like attempting to jailbreak it), and the uncertainty of how the public would receive their creation kept them in a state of nervous anticipation.
The launch was met with mixed emotions. ChatGPT quickly started facing criticism over accuracy issues and bias. Many schools rushed to ban it over cheating concerns. Some users on Reddit likened it to the early days of Google (and were shocked it was free). For its part, Google dubbed the chatbot a “code red” threat.
OpenAI would wind up surpassing its most ambitious 1-million-user target within five days of launch. Two months after its debut, ChatGPT garnered more than 30 million users.
Within weeks of ChatGPT’s November 30th launch, the team started rolling out updates incorporating user feedback (like its tendency to give overly verbose answers). The initial chaos had settled, user numbers were still climbing, and the team had a sobering realization: if they wanted to keep this momentum, things would have to change. The small group that launched a “low-key research preview” — a term that would become a running joke at OpenAI — would need to get a lot bigger.
Over the coming months and years, ChatGPT’s team would grow enormously and shift priorities — sometimes to the chagrin of many early staffers. Top researcher Jan Leike, who played a crucial role in refining ChatGPT’s conversational abilities and ensuring its outputs aligned with user expectations, quit this year to join Anthropic after claiming that “safety culture and processes have taken a backseat to shiny products” at OpenAI.
These days, OpenAI is focused on figuring out what the future of ChatGPT looks like.
“I’d be very surprised if a year from now this thing still looks like a chatbot,” Turley says, adding that current chat-based interactions would soon feel as outdated as ’90s instant messaging. “We’ve gotten pretty sidetracked by just making the chatbot great, but really, it’s not what we meant to build. We meant to build something much more useful than that.”
Increasingly powerful and expensive
I talk with Turley over a video call as he sits in a vast conference room in OpenAI’s San Francisco headquarters that epitomizes the company’s transformation. The office is all sweeping curves and polished minimalism, a far cry from its original office that was often described as a drab, historic warehouse.
With roughly 2,000 employees, OpenAI has evolved from a scrappy research lab into a $150 billion tech powerhouse. The team is spread across numerous projects, including building underlying foundation models and developing non-text tools like the video generator, Sora. ChatGPT is still OpenAI’s highest-profile product by far. Its popularity has come with a lot of headaches.
ChatGPT still spins elaborate lies with unwavering confidence, but now they’re being cited in court filings and political discourse. It has allowed for an impressive amount of experimentation and creativity, but some of its most distinctive use cases turned out to be spam, scams, and AI-written college term papers.
While some publications (including The Verge’s parent company, Vox Media) are choosing to partner with OpenAI, others like The New York Times are opting to sue it for copyright infringement. And OpenAI is burning through cash at a staggering rate to keep the lights on.
Turley acknowledges that ChatGPT’s hallucinations are still a problem. “Our early adopters were very comfortable with the limitations of ChatGPT,” he says. “It’s okay that you’re going to double check what it said. You’re going to know how to prompt around it. But the vast majority of the world, they’re not engineers, and they shouldn’t have to be. They should just use this thing and rely on it like any other tool, and we’re not there yet.”
Accuracy is one of the ChatGPT team’s three focus areas for 2025. The others are speed and presentation (i.e., aesthetics).
“I think we have a long way to go in making ChatGPT more accurate and better at citing its sources and iterating on the quality of this product,” Turley says.
OpenAI is also still figuring out how to monetize ChatGPT. Despite deploying increasingly powerful and costly AI models, the company has maintained a limited free tier and a $20 monthly ChatGPT Plus service since February 2023.
When I ask Turley about rumors of a future $2,000 subscription, or if advertising will be baked into ChatGPT, he says there is “no current plan to raise prices.” As for ads: “We don’t care about how much time you spend on ChatGPT.”
“I’m really proud of the fact that we have incentives that are incredibly aligned with our users,” he says. Those who “use our product a lot pay us money, which is a very, very, upfront and direct transaction. I’m proud of that. Maybe we’ll have a technology that’s much more expensive to serve and we’re going to have to rethink that model. You gotta remain humble about where the technology is going to go.”
Only days after Turley told me this, ChatGPT did get a new $200 price tag for a Pro tier that includes access to a specialized reasoning model. Its main $20 Plus tier is sticking around, but it’s clearly not the ceiling for what OpenAI thinks people will pay.
ChatGPT and other OpenAI services require vast amounts of computing power and data storage to keep running smoothly. On top of the user base OpenAI has gained through its own products, it’s poised to reach millions more people through an Apple partnership that integrates ChatGPT with iOS and macOS.
That’s a lot of infrastructure pressure for a relatively young tech company, says ChatGPT engineering lead Sulman Choudhry. “Just keeping it up and running is a very, very big feat,” he says. People love features like ChatGPT’s advanced voice mode. But scaling limitations mean there’s often a significant gap between the technology’s capabilities and what people can experience. “There’s a very, very big delta there, and that delta is sort of how you scale the technology and how you scale infrastructure.”
Even as OpenAI grapples with these problems, it’s trying to work itself deeper into users’ lives. The company is racing to build agents, or AI tools that can perform complex, multistep tasks autonomously. In the AI world, these are called tasks with a longer “time horizon,” requiring the AI to maintain coherence over a longer period while handling multiple steps. For instance, earlier this year at the company’s Dev Day conference, OpenAI showcased AI agents that could make phone calls to place food orders and make hotel reservations in multiple languages.
For Turley and others, this is where the stakes will get particularly steep. Agents could make AI far more useful by moving what it can do outside the chatbot interface. The shift could also grant these tools an alarming level of access to the rest of your digital life.
“I’m really excited to see where things go in a more agentic direction with AI,” Kim tells me. “Right now, you go to the model with your question, but I’m excited to see the model more integrated into your life and doing things proactively, and taking actions on your behalf.”
The goal of ChatGPT isn’t to be just a chatbot, says Fedus. As it exists today, ChatGPT is “pretty constrained” by its interface and compute. He says the goal is to create an entity that you can talk to, call, and trust to work for you. Fedus thinks systems like OpenAI’s “reasoning” line of models, which create a trail of checkable steps explaining their logic, could make it more reliable for these kinds of tasks.
Turley says that, contrary to some reports, “I don’t think there’s going to be such a thing as an OpenAI agent.” What you will see is “increasingly agentic functionality inside of ChatGPT,” though. “Our focus is going to be to release this stuff as gradually as possible. The last thing I want is a big bang release where this stuff can suddenly go out and do things over hours of time with all your stuff.”
By ChatGPT’s third anniversary next year, OpenAI will probably look a lot different than it does today. The company will likely raise billions more dollars in 2025, release its next big “Orion” model, face growing competition, and have to navigate the complexity of a new US president and his AI czar.
Turley hopes 2024’s version of ChatGPT will soon feel as quaint as AOL Instant Messenger. A year from now, we’ll probably laugh at how basic it was, he says. “Remember when all we could do was ask it questions?”
How Would I Learn to Code with ChatGPT if I Had to Start Again
Published May 1, 2025
Coding has been a part of my life since I was 10. From modifying HTML & CSS for my Friendster profile during the simple internet days to exploring SQL injections for the thrill, building a three-legged robot for fun, and lately diving into Python coding, my coding journey has been diverse and fun!
Here’s what I’ve learned from various programming approaches.
The way I learn to code has always been similar; as people say, it’s mostly just copy-pasting.
When it comes to building something in the coding world, here’s a breakdown of my method:
1. Choose the Right Framework or Library
2. Learn from Past Projects
3. Break It Down into Steps: slice your project into actionable steps, making development less overwhelming.
4. Google Each Chunk: for every step, consult Google/Bing/DuckDuckGo/any search engine you prefer for insights, guidance, and potential solutions.
5. Start Coding: implement each step systematically.
However, even the most well-thought-out code can encounter bugs. Here’s my strategy for troubleshooting:
1. Check Framework Documentation: ALWAYS read the docs!
2. Google and Stack Overflow Search: search on Google and Stack Overflow. An example keyword pattern would be:
site:stackoverflow.com [coding language] [library] error [error message]
For instance:
site:stackoverflow.com python error ImportError: pandas module not found
– Stack Overflow Solutions: If the issue is already on Stack Overflow, I look for the most upvoted comments and solutions, often finding a quick and reliable answer.
– Trust My Intuition: When Stack Overflow doesn’t have the answer, I trust my intuition to search for trustworthy sources on Google: GeeksForGeeks, Kaggle, W3School, and Towards Data Science for data science topics.
3. Copy-Paste the Code Solution
4. Verify and Test: The final step includes checking the modified code thoroughly and testing it to ensure it runs as intended.
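One small habit that helps at the “Verify and Test” step: run a quick import check before rerunning the whole script. Here is a minimal sketch, assuming the pandas ImportError from the example query above was fixed with a plain pip install pandas:

```python
# Quick post-fix check for the hypothetical "pandas module not found" error
# from the example search query above (assumes `pip install pandas` already ran).
import importlib

for module_name in ("pandas", "numpy"):
    try:
        module = importlib.import_module(module_name)
        print(f"{module_name} {module.__version__} imported OK")
    except ImportError as exc:
        print(f"{module_name} still missing: {exc}")
```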
And voilà, you just solved the bug!
Isn’t it beautiful?
But in reality, are we still doing this?!
Lately, I’ve noticed a shift in how new coders are tackling coding. I’ve been teaching how to code professionally for about three years now, bouncing around in coding boot camps and guest lecturing at universities and corporate training. The way coders are getting into code learning has changed a bit.
I usually tell the fresh faces to stick with the old-school method of browsing and googling for answers, but people still end up using ChatGPT. Their rationale:
“Having ChatGPT (for coding) is like having an extra study buddy, one who chats with you like a regular person.”
It comes in handy, especially when you’re still trying to wrap your head around things from search results and documentation — to develop the so-called programmer intuition.
Now, don’t get me wrong, I’m all for the basics. Browsing, reading docs, and throwing questions into the community pot — those are solid moves, in my book. Relying solely on ChatGPT might be a bit much. Sure, it can whip up a speedy summary of answers, but the traditional browsing methods give you the freedom to pick and choose, to experiment a bit, which is pretty crucial in the coding world.
But, I’ve gotta give credit where it’s due — ChatGPT is lightning-fast at giving out answers, especially when you’re still trying to figure out the right from the wrong in search results and docs.
I realize this shift toward using ChatGPT as a study buddy isn’t happening only in the coding scene. ChatGPT has revolutionized the way people learn; I even used ChatGPT to fix my grammar for this post. Sorry, Grammarly.
Saying no to ChatGPT is like saying no to search engines in the early 2000s. ChatGPT may come with biases and hallucinations, just as search engines can surface unreliable information or hoaxes, but when used appropriately, it can expedite the learning process.
Now, let’s imagine a real-life scenario where ChatGPT could help you by being your coding buddy to help with debugging.
Scenario: Debugging a Python Script
Imagine you’re working on a Python script for a project, and you encounter an unexpected error that you can’t solve.
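For concreteness, here is a hypothetical snippet of the kind that might trigger such an error (illustrative only; the file name and column are made up):

```python
# Hypothetical script that can fail in two common ways:
# ModuleNotFoundError if pandas isn't installed, or
# KeyError if the CSV has no "score" column.
import pandas as pd


def average_score(path: str) -> float:
    df = pd.read_csv(path)
    return df["score"].mean()


if __name__ == "__main__":
    print(average_score("scores.csv"))
```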
Here is how I was taught to do it — in the era before ChatGPT.
Browsing Approach:
1. Check the Documentation:
Start by checking the Python documentation for the module or function causing the error. For example, visit https://scikit-learn.org/stable/modules/ for the scikit-learn documentation.
2. Search on Google & Stack Overflow:
If the documentation doesn’t provide a solution, you turn to Google and Stack Overflow. Scan through various forum threads and discussions to find a similar issue and its resolution.

3. Trust Your Intuition:
If the issue is unique or not well-documented, trust your intuition! You might explore articles and sources on Google that you’ve found trustworthy in the past, and try to adapt similar solutions to your problem.

In the search results for that query, the top result is from W3School (a trusted coding tutorial site, great for cheat sheets) and the other two results are official pandas documentation. Search engines do point users toward the official documentation.
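A small habit that pairs well with the documentation step: confirm which version you actually have installed and read the function signature locally, since documentation differs between versions. A minimal sketch using the pandas example above:

```python
import pandas as pd

# Documentation is version-specific, so confirm the installed version first.
print(pd.__version__)

# Read the signature and docstring locally instead of guessing from memory.
help(pd.read_csv)
```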
And this is how you can use ChatGPT to help you debug an issue.
New Approach with ChatGPT:
1. Engage ChatGPT in Conversations:
Instead of only navigating through documentation and forums, you can engage ChatGPT in a conversation. Provide a concise description of the error and ask. For example,
“I’m encountering an issue in my [programming language] script where [describe the error]. Can you help me understand what might be causing this and suggest a possible solution?”

2. Clarify Concepts with ChatGPT:
If the error is related to a concept you are struggling to grasp, you can ask ChatGPT to explain that concept. For example,
“Explain how [specific concept] works in [programming language]? I think it might be related to the error I’m facing. The error is: [the error]”

3. Seek Recommendations for Troubleshooting:
You ask ChatGPT for general tips on troubleshooting Python scripts. For instance,
“What are some common strategies for dealing with [issue]? Any recommendations on tools or techniques?”
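If you prefer to script these debugging conversations instead of using the web UI, the same kind of prompt can be sent through the API. A minimal sketch, assuming the official openai Python package, an OPENAI_API_KEY environment variable, and an illustrative GPT-4-class model name:

```python
# Minimal sketch of sending the debugging prompt from step 1 via the API.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable;
# the model name below is illustrative, not prescribed by this post.
from openai import OpenAI

client = OpenAI()

error_message = "ModuleNotFoundError: No module named 'pandas'"
prompt = (
    "I'm encountering an issue in my Python script where the following error "
    f"is raised: {error_message}. Can you help me understand what might be "
    "causing this and suggest a possible solution?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```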

Potential Advantages:
- Personalized Guidance: ChatGPT can provide personalized guidance based on the specific details you provide about the error and your understanding of the problem.
- Concept Clarification: You can seek explanations and clarifications on concepts directly from ChatGPT, leveraging its LLM capability.
- Efficient Troubleshooting: ChatGPT might offer concise and relevant tips for troubleshooting, potentially streamlining the debugging process.
Possible Limitations:
Now let’s talk about the cons of relying on ChatGPT 100 percent. I saw these issues a lot in my students’ journeys with ChatGPT. In the post-ChatGPT era, my students would copy and paste a one-line error message from their command-line interface even though the full error was 100 lines long and tied to several modules and dependencies. Asking ChatGPT to explain a workaround from a single line of the error might work sometimes, or worse, it might add an extra hour or two of debugging.
ChatGPT also comes with the limitation of not being able to see the context of your code. Sure, you can always provide context, but for more complex code you might not be able to give every line to ChatGPT. Because ChatGPT sees only a small portion of your code, it will either assume the rest based on its knowledge or hallucinate.
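One easy way to give ChatGPT more of that missing context is to paste the full traceback rather than just its last line. A minimal sketch of capturing it, using only the standard library:

```python
import traceback


def risky() -> str:
    # Deliberately fails so there is a traceback to capture.
    return {}["missing_key"]


try:
    risky()
except Exception:
    # format_exc() returns the complete traceback as a string, so the whole
    # call chain (not just the last line) can be pasted into ChatGPT.
    full_trace = traceback.format_exc()
    print(full_trace)
```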
These are the possible limitations of using ChatGPT:
- Lack of Real-Time Dynamic Interaction: While ChatGPT provides valuable insights, it lacks the real-time interaction and dynamic back-and-forth that forums or discussion threads might offer. On Stack Overflow, you might have 10 different people suggesting three different solutions, which you can compare either by trying them yourself or by checking the number of upvotes.
- Dependence on Past Knowledge: The quality of ChatGPT’s response depends on the information it has been trained on, and it may not be aware of the latest framework updates or specific details of your project.
- Might Add Extra Debugging Time: ChatGPT does not have the context of your full code, so it might lead to more debugging time.
- Limited Understanding of Concepts: The traditional browsing methods give you the freedom to pick and choose, to experiment a bit, which is pretty crucial in the coding world. If you know how to handpick the right sources, you will probably learn more from browsing on your own than from relying on the general ChatGPT model.
Unless you ask a language model that is trained and specialized on coding and tech concepts, such as research papers, coding materials, well-known deep learning lectures from Andrew Ng, or Yann LeCun’s posts on X (formerly Twitter), ChatGPT will mostly just give a general answer.
This scenario showcases how ChatGPT can be a valuable tool in your coding toolkit, especially for obtaining personalized guidance and clarifying concepts. Remember to balance ChatGPT’s assistance with browsing and asking the community, keeping in mind its strengths and limitations.
Final Thoughts
Things I would recommend for a coder
If you really want to leverage an autocompletion model, then instead of solely using ChatGPT, try VS Code extensions for code-completion tasks such as CodeGPT (a GPT-4 extension for VS Code), GitHub Copilot, or the autocomplete AI tools built into Google Colab.

As you can see in the screenshot above, Google Colab automatically gives the user suggestions on what code comes next.
Another alternative is GitHub Copilot. With GitHub Copilot, you can get AI-based suggestions in real time. GitHub Copilot suggests code completions as developers type and turns prompts into coding suggestions based on the project’s context and style conventions. As per this release from GitHub, Copilot Chat is now powered by OpenAI GPT-4 (a model similar to the one ChatGPT uses).

I had been actively using CodeGPT as a VS Code extension before I learned that GitHub Copilot is free if you are in an education program. CodeGPT has 1M downloads to date on the VS Code Extension Marketplace. CodeGPT allows seamless integration with the ChatGPT API, Google PaLM 2, and Meta Llama.
You can get code suggestions through comments. Here is how:
- Write a comment asking for a specific piece of code
- Press cmd + shift + i
- Use the code
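To illustrate the comment-driven flow, here is a hand-written example of the kind of completion such an extension typically produces (the suggested function below is my own illustration, not actual tool output):

```python
# Comment typed in the editor as the prompt:
# write a function that returns the n-th Fibonacci number iteratively

# A completion along these lines is what the assistant usually suggests:
def fibonacci(n: int) -> int:
    """Return the n-th Fibonacci number (0-indexed), computed iteratively."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a


print(fibonacci(10))  # 55
```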

You can also initiate a chat via the extension in the menu and jump into coding conversations.

As I reflect on my coding journey, the invaluable lesson learned is that there’s no one-size-fits-all approach to learning. It’s essential to embrace a diverse array of learning methods, seamlessly blending traditional practices like browsing and community interaction with the innovative capabilities of tools like ChatGPT and auto code-completion tools.
What to Do:
- Utilize Tailored Learning Resources: Make the most of ChatGPT’s recommendations for learning materials.
- Collaborate for Problem-Solving: Utilize ChatGPT as a collaborative partner as if you are coding with your friends.
What Not to Do:
- Over-Dependence on ChatGPT: Avoid relying solely on ChatGPT and ensure a balanced approach to foster independent problem-solving skills.
- Neglect Real-Time Interaction with the Coding Community: While ChatGPT offers valuable insights, don’t neglect the benefits of real-time interaction and feedback from coding communities. That also helps you build a reputation in the community.
- Disregard Practical Coding Practice: Balance ChatGPT guidance with hands-on coding practice to reinforce theoretical knowledge with practical application.
Let me know in the comments how you use ChatGPT to help you code!
Happy coding!
Ellen
Follow me on LinkedIn
Check out my portfolio: liviaellen.com/portfolio
My Previous AR Works: liviaellen.com/ar-profile
or just buy me a real coffee
— Yes I love coffee.
About the Author
I’m Ellen, a Machine Learning engineer with 6 years of experience, currently working at a fintech startup in San Francisco. My background spans data science roles in oil & gas consulting, as well as leading AI and data training programs across APAC, the Middle East, and Europe.
I’m currently completing my Master’s in Data Science (graduating May 2025) and actively looking for my next opportunity as a machine learning engineer. If you’re open to referring or connecting, I’d truly appreciate it!
I love creating real-world impact through AI and I’m always open to project-based collaborations as well.
What the Washington Post’s OpenAI deal says about AI licensing
Published May 1, 2025
The evolution of AI content licensing deals
The Washington Post has become the latest major publisher to strike a licensing deal with OpenAI, joining a growing cohort that now spans more than 20 news organizations.
It is part of a familiar pattern: every few months, OpenAI locks in another publisher to shore up its content pipeline. But the terms of these deals appear to be quietly evolving, subtly moving away from the explicit language around training data that defined earlier agreements and raising new questions about what these partnerships now mean.
The Washington Post deal centers on surfacing its content in response to news-related queries. “As part of this partnership, ChatGPT will display summaries, quotes, and links to original reporting from The Post in response to relevant questions,” reads the April 22nd announcement of the publication’s deal with OpenAI. By contrast, past deals with publishers like Axel Springer and Time, signed in December 2023 and June 2024 respectively, explicitly included provisions for training OpenAI’s LLMs on their content.
The Guardian’s OpenAI deal, announced in February 2025, uses wording similar to the Washington Post announcement and makes no mention of training data. A Guardian spokesperson declined to comment on the terms of its agreement with OpenAI. The Washington Post did not respond to requests for comment.
These somewhat subtle shifts in the language of the terms could signal a broader change in the AI landscape, according to conversations with four media legal experts. They could indicate a shift in how AI content licensing deals are structured going forward, with more publishers potentially seeking agreements that prioritize attribution and prominence in AI search engines over rights for model training.
Another factor to keep in mind: these AI companies have already trained their LLMs on vast amounts of content available on the web, according to Aaron Rubin, a partner in the strategic transactions and licensing group at the law firm Gunderson Dettmer. And because AI companies face litigation from media companies claiming that this was copyright infringement, such as The New York Times’ case against OpenAI, continuing to pay to license data for training purposes could be seen as “an implicit admission” that they should have paid to license that data rather than scraping it for free, Rubin said.
“[AI companies] already have a trillion words that they’ve stolen. They don’t need the additional words that badly for training, but they want to have up-to-date content for answers [in their AI search engines],” said Bill Gross, founder of the AI startup Prorata.ai, which is building technology solutions to compensate publishers for content used by generative AI companies.
Both AI companies and publishers can benefit from this potential evolution, according to Rubin. AI companies get access to reliable, up-to-date news from trusted sources to answer questions about current events in their products, and publishers “can fill a gap they were afraid they’d be missing with the way these AI tools have evolved. They were losing clicks and eyeballs and links to their pages,” he said. Better attribution in places like ChatGPT search has the potential to drive more traffic to publishers’ sites. At least, that’s the hope.
“It has the potential to generate more money for publishers,” Rubin said. “Publishers are betting that this is how people are going to interact with news media in the future.”
Since last fall, OpenAI has challenged search giants like Google with its AI search engine, ChatGPT search, and that effort depends on access to news content. When asked whether the structure of OpenAI’s deals with publishers had changed, an OpenAI spokesperson pointed to the company’s launch of ChatGPT search in October 2024, as well as improvements announced this week.
“We have a direct feed to our publisher partners’ content to display summaries, quotes, and attributed links to original reporting in response to relevant questions,” the spokesperson said. “That is one component of the deals. Post-training helps increase the accuracy of answers related to a publisher’s content.” The spokesperson did not respond to other requests for comment.
It is unclear how many more deals like The Washington Post’s OpenAI will strike, especially as a different model centered on ChatGPT search may emerge. But the outlook for licensing deals between publishers and AI companies appears to be worsening. The value of these deals is “plummeting,” at least according to Atlantic CEO Nicholas Thompson, who spoke at the Reuters Next event last December.
“There is still a market for licensing content for training and that remains important, but we will continue to see a focus on striking deals that result in driving traffic to sites,” said John Monterubio, a partner in the advanced media and technology group at the law firm Loeb & Loeb. “It will be the new form of SEO marketing and ad buying: appearing higher in the results when communicating with these [generative AI] tools.”
What we’ve heard
“We don’t have to worry about a somewhat false narrative of: the cookies have to go … so you can put all this bandwidth and power into improving the current market, without worrying about a potential future problem that was in Google’s control the whole time.”
– An anonymous publishing executive on Google’s decision last week to keep using third-party cookies in Chrome.
Numbers to know
$50 million: the amount the Los Angeles Times lost in 2024.
50%: the percentage of U.S. adults who said AI will have a very or somewhat negative impact on the news people get in the U.S. over the next 20 years, according to a Pew Research Center study.
$100 million: the amount Spotify has paid to podcast publishers and creators since January.
0.3%: the expected decline in media usage (digital and traditional channels) in 2025, the first drop since 2009, according to PQ Media Research.
What we’ve covered
AI lawsuits highlight publishers’ struggles to keep bots from scraping content
- Ziff Davis’ recent lawsuit against OpenAI highlights the reality that publishers still don’t have a reliable way to keep AI companies from scraping their content for free.
- While tools like robots.txt files, paywalls, and AI-blocking tags have emerged, many publishers admit it is very difficult to enforce control over every bot, especially since some ignore standard protocols or mask their identities.
Read more here.
Who would buy Chrome?
- Google’s search antitrust trial could force Google to part with the Chrome browser.
- If it did, OpenAI, Perplexity, Yahoo, and DuckDuckGo could be among the potential buyers.
Read more about the potential impact of a Chrome sell-off here.
TikTok is courting creators and agencies to take part in its live tools
- TikTok is trying to prove the revenue potential of its live tools.
- The social media platform says its creators now collectively generate $10 million in revenue daily through livestreaming.
Read more about TikTok’s pitch here.
WTF are gray bots?
- Generative AI crawlers and scrapers are being called “gray bots” by some to illustrate the blurring line between real and fake traffic.
- These bots can skew analytics and steal content, and AI-driven ad impressions can hurt click-through and conversion rates.
Read more about why gray bots are a risk to publishers here.
Is Facebook becoming a new revenue stream for publishers again?
- Publishers have seen a recent spike in Facebook referrals, and it is, somewhat surprisingly, coinciding with an influx of revenue from Meta’s content monetization program.
- Of the 10 publishers Digiday spoke to for this article, several are on track to make between six and seven figures this year from Meta’s latest content monetization program.
Read more about what publishers are getting from Facebook here.
What we’re reading
News outlets’ podcast video ambitions highlight the format’s move from audio to TV
News outlets like The New York Times and The Atlantic are putting more resources into producing video versions of popular podcast shows to tap into YouTube’s younger audience, Vanity Fair reported.
Perplexity wants to collect data on users to sell personalized ads
Perplexity CEO Aravind Srinivas said Perplexity is building its own browser to collect user data and sell personalized ads, TechCrunch reported.
President Trump targets the press in his first 100 days
President Trump has targeted traditional media companies in his first 100 days, using tactics ranging from banning outlets from covering White House events to launching investigations into major networks, Axios reported.
Semafor will test subscriptions
Semafor will “test” subscriptions in “due time,” founder Justin Smith told New York magazine’s Intelligencer in a deep dive on the newsletter-focused news startup.