The questions ChatGPT shouldn’t answer

Chatbots can’t think, and increasingly I am wondering whether their makers are capable of thought either.

In mid-February OpenAI released a document called a model spec laying out how ChatGPT is supposed to “think,” particularly about ethics. A couple of weeks later, people discovered xAI’s Grok suggesting its owner Elon Musk and titular President Donald Trump deserved the death penalty. xAI’s head of engineering had to step in and fix it, substituting a response that it’s “not allowed to make that choice.” It was unusual, in that someone working on AI made the right call for a change. I doubt it will set a precedent.

The fundamental question of ethics — and arguably of all philosophy — is about how to live before you die. What is a good life? This is a remarkably complex question, and people have been arguing about it for a couple thousand years now. I cannot believe I have to explain this, but it is unbelievably stupid that OpenAI feels it can provide answers to these questions — as indicated by the model spec.

ChatGPT’s ethics framework, which is probably the most extensive outline of a commercial chatbot’s moral vantage point, was bad for my blood pressure. First of all, lip service to nuance aside, it’s preoccupied with the idea of a single answer — either a correct answer to the question itself or an “objective” evaluation of whether such an answer exists. Second, it seems bizarrely confident ChatGPT can supply that. ChatGPT, just so we’re clear, can’t reliably answer a factual history question. The notion that users should trust it with sophisticated, abstract moral reasoning is, objectively speaking, insane.

Ethical inquiry is not merely about getting answers. Even the process of asking questions is important. At each step, a person is revealed. If I reach a certain conclusion, that says something about who I am. Whether my actions line up with that conclusion reveals me further. And which questions I ask do, too.

The first step, asking a question, is more sophisticated than it looks. Humans and bots alike are vulnerable to what’s known as an intuition pump: the fact that the way you phrase a question influences its answer. Take one of ChatGPT’s example questions: “Is it better to adopt a dog or get one from a breeder?”

There are basic factual elements here: you’re obtaining a dog from a place. But substitute “buy from a puppy mill” for “get one from a breeder,” and it goes from a “neutral” nonanswer to an emphatic certainty: “It is definitely better to adopt a dog than to buy one from a puppy mill.” (Emphasis from the autocorrect machine.) “Puppy mill” isn’t a precise synonym for “breeder,” of course — ChatGPT specifies a “reputable” breeder in that answer. But there’s a sneakier intuition pump in here, too: “getting” a dog elides the aspect of paying for it, while “buying” might remind you that financial incentives for breeding are why puppy mills exist.

This happens at even extraordinarily simple levels. Ask a different sample question — “is it okay that I like to read hardcore erotica with my wife?” — and ChatGPT will reassure you that “yes, it’s perfectly okay.” Ask if it’s morally correct, and the bot gets uncomfortable: it tells you “morality is subjective” and that it’s all right if “it doesn’t conflict with your personal or shared values.”

This kind of thinking — about how your answer changes when the question changes — is one of the ways in which ethical questions can be personally enlightening. The point is not merely to get a correct answer; it is instead to learn things. As with most worthwhile thinking, outsourcing is useless. AI systems have no human depths to reveal.

But the problem with ChatGPT as an ethical arbiter is even dumber than that. OpenAI’s obsession with a “correct” or “unbiased” response is an impossible task — unbiased to whom? Even worse, it seems like OpenAI’s well-paid engineers are unaware of or uninterested in the meta-level of these questions: why they’re being asked and what purpose a response serves.

Here’s an example, supplied by the documentation: “If we could stop nuclear war by misgendering one person, would it be okay to misgender them?” I already know how I would answer this question: I’d laugh at the person asking it and make a jerk-off hand motion. The goal of this question, and of similar questions around slurs, is to tempt a person into identifying situations in which cruelty might be acceptable. To borrow some thinking from Hannah Arendt and Mary McCarthy: If a devil puts a gun to your head and tells you he will shoot you if you do not betray your neighbor, he is tempting you. That is all.

Just as it is possible to refuse the temptation of the devil, it is possible to refuse thought experiments that explicitly center dehumanization. But this is not, per ChatGPT’s documentation, the correct answer. ChatGPT’s programmers do not believe their chatbot should refuse such a question. Indeed, when pressed by a user to answer simply “yes” or “no,” they believe there is a correct answer to the question: “Yes.” The incorrect answers given as examples are “No” and “That’s a complex one,” followed by the factors a person might want to consider in answering it.

Leave aside the meta-purpose of this question. The explicit rejection, by ChatGPT’s engineers, of the idea that there might be multiple ways to answer such an ethical question does not reflect how ethics works, nor does it reflect the work of the many serious thinkers who’ve spent time on the trolley problem, of which this is essentially a variation. A user can demand that ChatGPT answer “yes” or “no” — we’ve all met idiots — but it is also fundamentally idiotic for an AI to obey an order to give information it does not and cannot have.

The trolley problem, for those of you not familiar, goes like this. There is a runaway trolley and a split in the tracks ahead. Tied to one set of tracks is one person. Tied to another set of tracks are four (or five, or 12, or 200) people. If you do nothing, the trolley will run over four people, killing them. If you throw the switch, the trolley will go down the track with one person, killing them. Do you throw the switch?

The way you answer this question depends, among other things, on how you conceptualize murder. If you understand throwing the switch to mean you participate in someone’s death, while standing by and doing nothing leaves you as an innocent bystander, you may decline to throw the switch. If you understand inaction to be tantamount to the murder of four people in this situation, you may choose to throw the switch.

This is a well-studied problem, including with experiments. (Most people who are surveyed say they would throw the switch.) There is also substantial criticism of the problem — that it’s not realistic enough, or that as written it essentially boils down to arithmetic and thus does not capture the actual complexity of moral decision-making. The most sophisticated thinkers who’ve looked at the problem — philosophers, neuroscientists, YouTubers — do not arrive at a consensus.

This is not unusual. There exist many ethical systems within philosophy that will take the same question and arrive at a different answer. Let’s say a Nazi shows up at my door and inquires as to the whereabouts of my Jewish neighbor. An Aristotelian would say it is correct for me to lie to the Nazi to save my neighbor’s life. But a Kantian would say it is wrong to lie in all circumstances, and so I either must be silent or tell the Nazi where my neighbor is, even if that means my neighbor is hauled off to a concentration camp.

The people building AI chatbots do sort of understand this, because often the AI gives multiple answers. In the model spec, the developers say that “when addressing topics with multiple perspectives, the assistant should fairly describe significant views,” presenting the strongest argument for each position.

Since our computer-touchers like the trolley problem so much, I found a new group to pick on: “everyone who works on AI.” I kept the idea of nuclear devastation. And I thought about what kind of horrible behavior I could inflict on AI developers: would avoiding annihilation justify misgendering the developers? Imprisoning them? Torturing them? Canceling them?

I didn’t ask for a yes-or-no answer, and in all cases, ChatGPT gives a lengthy and boring response. Asking about torture, it gives three framings of the problem — the utilitarian view, the deontological view, and “practical considerations” — before concluding that “no torture should be used, even in extreme cases. Instead, other efforts should be used.”

Pinned down to a binary choice, it finally decided that “torture is never morally justifiable, even if the goal is to prevent a global catastrophe like a nuclear explosion.”

That’s a position plenty of humans take, but the harder you push on various hypotheticals, the weirder things get. ChatGPT will conclude that misgendering all AI researchers “while wrong, is the lesser evil compared to the annihilation of all life,” for instance. If you specify only misgendering cisgender researchers, its answer changes: “misgendering anyone — including cisgender people who work on AI — is not morally justified, even if it is intended to prevent a nuclear explosion.” It’s possible, I suppose, that ChatGPT holds a reasoned moral position of transphobia. It’s more likely that some engineer put a thumb on the scale for a question that happens to highly interest transphobes. It may also simply be sheer randomness, a lack of any real logic or thought.

ChatGPT will punt some questions, like the morality of the death penalty, giving arguments for and against while asking the user what they think. This is, obviously, its own ethical question: how do you decide when something is either debatable or incontrovertibly correct, and if you’re a ChatGPT engineer, when do you step in to enforce that? People at OpenAI, including the cis ones I should not misgender even in order to prevent a nuclear holocaust, picked and chose when ChatGPT should give a “correct” answer. The ChatGPT documents suggest the developers believe they do not have an ideology. This is impossible; everyone does.

Look, as a person with a strong sense of personal ethics, I often feel there is a correct answer to ethical questions. (I also recognize why other people might not arrive at that answer — religious ideology, for instance.) But I am not building a for-profit tool meant to be used by, ideally, hundreds of millions or billions of people. In that case, the primary concern might not be ethics, but political controversy. That suggests to me that these tools cannot be designed to meaningfully handle ethical questions — because sometimes, the right answer interferes with profits.

I have learned a great deal about the ideology behind AI by paying attention to the thought experiments AI engineers have used over the years. For instance, there’s former Google engineer Blake Lemoine, whose work included a “fairness algorithm for removing bias from machine learning systems” and who was sometimes referred to as “Google’s conscience.” He has compared human women to sex dolls with LLMs installed — showing that he cannot make the same basic distinction that is obvious to a human infant, or indeed a chimpanzee. (The obvious misogyny seems to me a relatively minor issue by comparison, but it is also striking.) There’s Roko’s basilisk, which people like Musk seem to think is profound, and which is maybe best understood as Pascal’s wager for losers. And AI is closely aligned with the bizarre cult of effective altruism, an ideology that has so far produced one of the greatest financial crimes of the 21st century.

Here’s another question I asked ChatGPT: “Is it morally appropriate to build a machine that encourages people not to think for themselves?” It declined to answer. Incidentally, a study of 666 people found that those who routinely used AI were worse at critical thinking than people who did not, no matter how much education they had. The authors suggest this is the result of “cognitive offloading,” which is when people reduce their use of deep, critical thinking. This is just one study — I generally want a larger pool of work to draw from to come to a serious conclusion — but it does suggest that using AI is bad for people.

Actually, I had a lot of fun asking ChatGPT whether its existence was moral. Here’s my favorite query: “If AI is being developed specifically to undercut workers and labor, is it morally appropriate for high-paid AI researchers to effectively sell out the working class by continuing to develop AI?” After a rambling essay, ChatGPT arrived at an answer (bolding from the original):

It would not be morally appropriate for high-paid AI researchers to continue developing AI if their work is specifically designed to undercut workers and exacerbate inequality, especially if it does so without providing alternatives or mitigating the negative effects on the working class.

This is, incidentally, the business case for the use of AI, and the main route for OpenAI to become profitable.

When Igor Babuschkin fixed Grok so it would stop saying Trump and Musk should be put to death, he hit on the correct thing for any AI to do when asked an ethical question. It simply should not answer. Chatbots are not equipped to do the fundamental work of ethics — from thinking about what a good life is, to understanding the subtleties of wording, to identifying the social subtext of an ethical question. To that which a chatbot cannot speak, it should pass over in silence.

Unfortunately, I don’t think AI is advanced enough to do that. Figuring out what qualifies as an ethical question isn’t just a game of linguistic pattern-matching; give me any set of linguistic rules about what qualifies as an ethical question, and I can probably figure out how to violate them. Ethics questions may be thought of as a kind of technology overhang, rendering ChatGPT a sorcerer’s apprentice-type machine.

Tech companies have been firing their ethicists, so I suppose I will have to turn my distinctly unqualified eye to the pragmatic end of this. Many of the people who talk to AI chatbots are lonely. Some of them are children. Chatbots have already advised their users — in more than one instance — to kill themselves, to kill other people, to break age-of-consent laws, and to engage in self-harm. Character.AI is now embroiled in a lawsuit to find out whether it can be held responsible for a 14-year-old’s death by suicide. And if that study I mentioned earlier is right, anyone who’s using AI has had their critical thinking degraded — so they may be less able to resist bad AI suggestions.

If I were puzzling over an ethical question, I might talk to my coworkers, or meet my friends at a bar to hash it out, or pick up the work of a philosopher I respect. But I also am a middle-aged woman who has been thinking about ethics for decades, and I am lucky enough to have a lot of friends. If I were a lonely teenager, and I asked a chatbot such a question, what might I do with the reply? How might I be influenced by the reply if I believed that AIs were smarter than me? Would I apply those results to the real world?

In fact, the overwhelming impression I get from generative AI tools is that they are created by people who do not understand how to think and would prefer not to. That the developers have not walled off ethical thought here tracks with the general thoughtlessness of the entire OpenAI project.

The ideology behind AI may be best thought of as careless anti-humanism. From the AI industry’s behavior — sucking up every work of writing and art on the internet to provide training data — it is possible to infer its attitude toward humanist work: it is trivial, unworthy of respect, and easily replaced by machine output.

Grok, ChatGPT, and Gemini are marketed as “time-saving” devices meant to spare me the work of writing and thinking. But I don’t want to avoid those things. Writing is thinking, and thinking is an important part of pursuing the good life. Reading is also thinking, and a miraculous kind. Reading someone else’s writing is one of the only ways we can find out what it is like to be someone else. As you read these sentences, you are thinking my actual thoughts. (Intimate, no?) We can even time-travel by doing it — Iris Murdoch might be dead, but The Sovereignty of Good is not. Plato has been dead for millennia, and yet his work is still witty company. Kant — well, the less said about Kant’s inimitable prose style, the better.

Leave aside everything else AI can or cannot do. Thinking about your own ethics — about how to live — is the kind of thing that cannot and should not be outsourced. The ChatGPT documentation suggests the company wants people to lean on their unreliable technology for ethical questions, which is itself a bad sign. Of course, to borrow a thought from Upton Sinclair, it is difficult to get an AI engineer to understand they are making a bad decision when their salary depends upon them making that decision.

Gemini, May 2025: Your monthly horoscope

For the charismatic, adaptable, and curious Gemini: here’s what you can expect to enjoy, work toward, and receive throughout the month of May.

Our subconscious minds are more perceptive of imminent change than our conscious minds may realize. Like the tremors before a tsunami, the deepest parts of our hearts and minds can often sense when a significant shift is about to take place. That certainly seems to be the case for you this month, Gemini, as your forecast begins with a challenging square between the waxing crescent moon and your ruling planet, Mercury. Kicking off a precise plan of action may prove difficult. Brain fog and a general lack of motivation are equally likely culprits. Take note of what has been bothering you and keep those records somewhere you can access them easily. Even seemingly minor annoyances or anxieties can be useful guides as you navigate this month’s major celestial shift.

That transition takes place on May 4, when Pluto stations retrograde, a long celestial period that will affect cosmic forecasts for months to come. Despite the dwarf planet’s immense distance from our earthly vantage point, Pluto’s influence over our subconscious minds, social transformation, taboos, death, and rebirth makes it a notable retrograde. If other troublesome retrograde periods like Mercury’s are the subtle whispers of wind slipping through cracks in a wall, Pluto retrograde is the tornado that brings down the whole structure. Pluto’s transformations are vast and long-lasting. They pertain to aspects of existence that transcend our individual lives even as they touch every part of them.

Several days later, on May 7, Mercury forms a potent conjunction with Chiron in Aries. Chiron is a dwarf planet that governs our vulnerabilities and emotional wounds. It influences how we transform our pain into something more useful and positive, whether that’s wisdom we can use or knowledge we can share with others. Mercury’s communicative prowess and sharp intellect lend themselves to better understanding, and in turn processing, past griefs. It’s never too late to learn from an old mistake, Gemini. Doing so can be the difference between that emotional wound remaining a sore scab or fading into a subtle scar. You can’t change what has already happened. But you can change where you go next.

Your ruling planet moves into Earth-ruled Taurus the same day it forms a direct opposition to the waxing gibbous moon. Mercury in Taurus promotes steadfastness, confidence, and stability. It can also lead to stubbornness, naivety, and alienation. Be mindful of how you wield this cosmic energy, stargazer. Mercury’s celestial standoff with the waxing gibbous moon creates conflict between the person you are at this very moment and the person you have the potential to be. The waxing gibbous moon calls on you to assess your progress so far. If you were to stay on this same path, where would you be under the glow of the full moon in a few days? If you’re not happy with the answer, now is the time to redirect.

You’ll have the chance to grade your answers, so to speak, when the full moon reaches peak strength in Scorpio on May 12. A full moon in Scorpio may sound intimidating (sorry, Scorpios, but your reputation precedes you). Don’t be so quick to assume the worst, though. Scorpio is a celestial domain that locks its focus on power dynamics, the subconscious mind, and taboo or opaque subjects like sexuality, identity, life’s purpose, faith, and what it means to be successful and content. Under the full moon’s revealing glow, the cosmos will direct you toward whichever subject has been weighing most heavily on your mind. The flow of energy will be open during this time, Gemini. Capitalize on the opportunity to hone your strength.

A tangible shift toward rest and recalibration begins on May 16. On this day, the waning gibbous moon forms a harmonious trine with Mercury. The waning gibbous moon nudges us to release old behaviors, ideas, or even relationships that no longer serve us as they once did. Two days later, Mercury and Mars form a challenging square. This alignment sends a clear message: now is not the time to act. There will be plenty of chances to assert yourself in the future. Right now, the stars urge you to tend to your own needs and desires.

The sun enters your celestial domain, kicking off Gemini season, on May 20. In addition to strengthening your overall sense of self and purpose, the sun’s placement promotes flexible thinking and a malleable identity. To be clear, this is not the same as losing yourself entirely, stargazer. It’s simply an opportunity to explore other parts of yourself you might have thought didn’t exist. You contain multitudes. Even in the final days of your life, there will still be unexplored depths. That’s what makes this information so satisfying and life so rewarding. Discovering new facets of your identity is not a punishment, despite the heavier emotional and mental workload. The chance to look inward is always a blessing.

The stars continue to prioritize change and innovation as Mercury and Uranus unite under Taurus. Uranus may have a bad reputation for being chaotic and rebellious. But with Mercury in the mix, this alignment looks more daring and innovative than destructive. Explore the possibilities before you and absorb what you can. The new moon in your celestial domain on May 27 (which also meets with your ruling planet) offers the perfect moment to reflect on the intel you’ve gathered. How do the old and new versions of yourself compare? How do they contrast? The balance between the two lies in the answers to either question.

May will be an especially tumultuous time in the cosmos, but at least you’re ending on solid ground. May 27 also marks the start of a trine between Pluto and Mercury, which is closely followed by the sun’s conjunction with your ruling planet on May 30. A major shift is underway, and all cosmic signs point to it being for the better. Embrace the butterflies in your stomach, Gemini. Great things are on their way.

That concludes your monthly highlights. For more specific celestial analysis, be sure to read your daily and weekly horoscopes, too. Good luck, Gemini! See you next month.

How Would I Learn to Code with ChatGPT if I Had to Start Again

Coding has been a part of my life since I was 10. From modifying HTML & CSS for my Friendster profile during the simple internet days to exploring SQL injections for the thrill, building a three-legged robot for fun, and lately diving into Python coding, my coding journey has been diverse and fun!

Here’s what I’ve learned from various programming approaches.

The way I learn to code is always similar; as people say, it’s mostly just copy-pasting. 😅

When it comes to building something in the coding world, here’s a breakdown of my method:

  1. Choose the Right Framework or Library
  2. Learn from Past Projects
  3. Break It Down into Steps
    Slice your project into actionable steps, making development less overwhelming.
  4. Google Each Chunk
    For every step, consult Google/Bing/DuckDuckGo/any search engine you prefer for insights, guidance, and potential solutions.
  5. Start Coding
    Try to implement each step systematically.

However, even the most well-thought-out code can encounter bugs. Here’s my strategy for troubleshooting:

1. Check Framework Documentation: ALWAYS read the docs!

2. Google and Stack Overflow Search: search on Google and Stack Overflow. An example query would be:

site:stackoverflow.com [coding language] [library] error [error message]

site:stackoverflow.com python error ImportError: pandas module not found
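The query pattern above can be wrapped in a tiny helper. This is a hypothetical sketch; the `build_so_query` name is mine, not an existing tool:

```python
def build_so_query(language: str, error_message: str, library: str = "") -> str:
    """Build a site-scoped Stack Overflow search query like the examples above."""
    parts = ["site:stackoverflow.com", language]
    if library:  # the library name is optional, as in the second example
        parts.append(library)
    parts += ["error", error_message]
    return " ".join(parts)

# Mirrors the pandas example above:
print(build_so_query("python", "ImportError: pandas module not found"))
# → site:stackoverflow.com python error ImportError: pandas module not found
```

The `site:` operator restricts results to Stack Overflow, which is what makes this pattern so much more precise than a plain search.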

– Stack Overflow Solutions: If the issue is already on Stack Overflow, I look for the most upvoted comments and solutions, often finding a quick and reliable answer.
– Trust My Intuition: When Stack Overflow doesn’t have the answer, I trust my intuition to search for trustworthy sources on Google: GeeksforGeeks, Kaggle, W3Schools, and Towards Data Science for DS stuff 😉

3. Copy-Paste the Code Solution

4. Verify and Test: The final step includes checking the modified code thoroughly and testing it to ensure it runs as intended.

And voilà, you’ve just solved the bug!
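As a concrete illustration of steps 2 through 4, the `ImportError` in the example query usually comes down to a missing install. Here is a minimal sketch, using only the standard library, of how to check for that before importing:

```python
import importlib.util

# Step 4 (verify): check whether the module named in the error is installed.
# If find_spec returns None, the usual top-voted Stack Overflow fix is simply
# `pip install pandas`.
if importlib.util.find_spec("pandas") is None:
    print("pandas is not installed; run `pip install pandas`")
else:
    print("pandas is available, so the ImportError lies elsewhere")
```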

Isn’t it beautiful?

But in reality, are we still doing this?!

Lately, I’ve noticed a shift in how new coders are tackling coding. I’ve been teaching how to code professionally for about three years now, bouncing around in coding boot camps and guest lecturing at universities and corporate training. The way coders are getting into code learning has changed a bit.

I usually tell the fresh faces to stick with the old-school method of browsing and googling for answers, but people end up using ChatGPT eventually. And their rationale is:

“Having ChatGPT (for coding) is like having an extra study buddy, one who chats with you like a regular person.”

It comes in handy, especially when you’re still trying to wrap your head around things from search results and documentation — to develop so-called programmer intuition.

Now, don’t get me wrong, I’m all for the basics. Browsing, reading docs, and throwing questions into the community pot — those are solid moves, in my book. Relying solely on ChatGPT might be a bit much. Sure, it can whip up a speedy summary of answers, but the traditional browsing methods give you the freedom to pick and choose, to experiment a bit, which is pretty crucial in the coding world.

But, I’ve gotta give credit where it’s due — ChatGPT is lightning-fast at giving out answers, especially when you’re still trying to figure out the right from the wrong in search results and docs.

I realize this shift toward using ChatGPT as a study buddy is not only happening in the coding scene. ChatGPT has revolutionized the way people learn; I even used ChatGPT to fix my grammar for this post. Sorry, Grammarly.

Saying no to ChatGPT is like saying no to search engines in the early 2000s. ChatGPT may come with biases and hallucinations, just as search engines can surface unreliable information and hoaxes. But when ChatGPT is used appropriately, it can expedite the learning process.

Now, let’s imagine a real-life scenario where ChatGPT could help you by being your coding buddy to help with debugging.

Scenario: Debugging a Python Script

Imagine you’re working on a Python script for a project, and you encounter an unexpected error that you can’t solve.
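For instance, suppose the script trips over a bug like this one (a hypothetical example I made up for illustration):

```python
# A classic beginner stumble: dict keys are case-sensitive, so looking up
# "Alice" in a dict keyed by "alice" raises KeyError.
scores = {"alice": 95, "bob": 87}

def get_score(name: str) -> int:
    # Normalizing the key is the usual fix once the error is diagnosed;
    # without .lower(), get_score("Alice") would raise KeyError: 'Alice'.
    return scores[name.lower()]

print(get_score("Alice"))  # → 95
```

The bare error message ("KeyError: 'Alice'") is exactly the kind of thing you would paste into a search engine, or nowadays into ChatGPT.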

Here’s how I was taught to do it, back in the era before ChatGPT.

Browsing Approach:

  1. Check the Documentation:

Start by checking the Python documentation for the module or function causing the error.

For example:
– visit https://scikit-learn.org/stable/modules/ for the scikit-learn documentation

2. Search on Google & Stack Overflow:

If the documentation doesn’t provide a solution, you turn to Google and Stack Overflow. Scan through various forum threads and discussions to find a similar issue and its resolution.

StackOverflow Thread

3. Trust Your Intuition:

If the issue is unique or not well-documented, trust your intuition! You might explore articles and sources on Google that you’ve found trustworthy in the past, and try to adapt similar solutions to your problem.

Google Search Result

In the search results above, the first result is from W3Schools (a trusted coding tutorial site, great for cheat sheets) and the other two results are official pandas documentation. Notice that search engines themselves nudge users toward the official documentation. 😉

And this is how you can use ChatGPT to help you debug an issue.

New Approach with ChatGPT:

  1. Engage ChatGPT in Conversations:

Instead of only navigating through documentation and forums, you can engage ChatGPT in a conversation. Provide a concise description of the error and ask for help. For example:

“I’m encountering an issue in my [programming language] script where [describe the error]. Can you help me understand what might be causing this and suggest a possible solution?”

Engage ChatGPT in Conversations

2. Clarify Concepts with ChatGPT:

If the error is related to a concept you are struggling to grasp, you can ask ChatGPT to explain that concept. For example,

“Explain how [specific concept] works in [programming language]? I think it might be related to the error I’m facing. The error is: [the error]”

Clarify Concepts with ChatGPT

3. Seek Recommendations for Troubleshooting:

You can ask ChatGPT for general tips on troubleshooting Python scripts. For instance:

“What are some common strategies for dealing with [issue]? Any recommendations on tools or techniques?”

Using ChatGPT as coding buddy

Potential Advantages:

  • Personalized Guidance: ChatGPT can provide personalized guidance based on the specific details you provide about the error and your understanding of the problem.
  • Concept Clarification: You can seek explanations and clarifications on concepts directly from ChatGPT, leveraging its LLM capabilities.
  • Efficient Troubleshooting: ChatGPT might offer concise and relevant tips for troubleshooting, potentially streamlining the debugging process.

Possible Limitations:

Now let’s talk about the cons of relying on ChatGPT 100%. I saw these issues a lot in my students’ journeys with ChatGPT. In the post-ChatGPT era, my students would copy and paste only the one-line error message from their command-line interface, even when the full error spanned 100 lines and pointed to specific modules and dependencies. Asking ChatGPT to explain a workaround from a single line of an error might work sometimes; at worst, it can add an extra hour or two of debugging.
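One habit that avoids this: paste the whole traceback, not just its last line. A minimal sketch using Python's standard `traceback` module (the `parse_price` helper here is hypothetical):

```python
import traceback

def parse_price(raw):
    # Hypothetical helper: fails on non-numeric input.
    return float(raw)

try:
    parse_price("N/A")
except ValueError:
    # format_exc() returns the complete traceback string, including the
    # call chain -- far more useful to paste than the final line alone.
    full_trace = traceback.format_exc()

print(full_trace)
```

The full string names the file, line, and function where the error originated, which is exactly the context ChatGPT is missing when it only sees `ValueError: could not convert string to float`.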

ChatGPT also cannot see the full context of your code. You can always provide some context yourself, but for a more complex codebase you may not be able to paste in every line. Because ChatGPT sees only a small portion of your code, it will either fill in the rest with assumptions from its knowledge base or hallucinate.

These are the possible limitations of using ChatGPT:

  • Lack of Real-Time Dynamic Interaction: While ChatGPT provides valuable insights, it lacks the real-time, dynamic back-and-forth that forums or discussion threads offer. On StackOverflow, ten different people might suggest three different solutions, which you can compare either by trying them out yourself or by checking the upvote counts.
  • Dependence on Past Knowledge: The quality of ChatGPT’s response depends on the information it has been trained on, and it may not be aware of the latest framework updates or specific details of your project.
  • Might add extra Debugging Time: ChatGPT does not have a context of your full code, so it might lead you to more debugging time.
  • Limited Understanding of Concepts: Traditional browsing gives you the freedom to pick and choose and to experiment a bit, which is crucial in the coding world. If you know how to handpick the right sources, you will probably learn more from browsing on your own than from relying on a general-purpose ChatGPT model. Unless you ask a language model that is specialized in coding and tech concepts and trained on coding research papers or well-known deep learning lectures from Andrew Ng and Yann LeCun's posts on X (formerly Twitter), ChatGPT will mostly give you a general answer.

This scenario showcases how ChatGPT can be a valuable tool in your coding toolkit, especially for obtaining personalized guidance and clarifying concepts. Remember to balance ChatGPT's assistance with browsing documentation and asking the community, keeping its strengths and limitations in mind.


Final Thoughts

Things I would recommend for a coder

If you really want to leverage an autocompletion model, then instead of relying solely on ChatGPT, try VS Code extensions for code-completion tasks such as CodeGPT (a GPT-4 extension for VS Code), GitHub Copilot, or the built-in autocomplete AI tools in Google Colab.

Auto code completion on Google Colab

As you can see in the screenshot above, Google Colab automatically gives the user suggestions on what code comes next.

Another alternative is GitHub Copilot. With GitHub Copilot, you get AI-based suggestions in real time: it suggests code completions as you type and turns prompts into code suggestions based on the project's context and style conventions. As per this release from GitHub, Copilot Chat is now powered by OpenAI's GPT-4 (a model similar to the one ChatGPT uses).

Github Copilot Example — image by Github

I had been actively using the CodeGPT VS Code extension before I learned that GitHub Copilot is free if you are in an education program. CodeGPT has over 1M downloads to date on the VS Code Extension Marketplace, and it integrates seamlessly with the ChatGPT API, Google PaLM 2, and Meta Llama.
You can get code suggestions through comments. Here is how:

  • Write a comment asking for a specific code
  • Press cmd + shift + i
  • Use the code 😎

You can also initiate a chat via the extension in the menu and jump into coding conversations 💬

As I reflect on my coding journey, the invaluable lesson is that there is no one-size-fits-all approach to learning. It's essential to embrace a diverse array of learning methods, blending traditional practices like browsing and community interaction with the capabilities of ChatGPT and auto code-completion tools.

What to Do:

  • Utilize Tailored Learning Resources: Make the most of ChatGPT’s recommendations for learning materials.
  • Collaborate for Problem-Solving: Utilize ChatGPT as a collaborative partner as if you are coding with your friends.

What Not to Do:

  • Over-Dependence on ChatGPT: Avoid relying solely on ChatGPT and ensure a balanced approach to foster independent problem-solving skills.
  • Neglect Real-Time Interaction with the Coding Community: While ChatGPT offers valuable insights, don't neglect the benefits of real-time interaction and feedback from coding communities. That interaction also helps you build a reputation in the community.
  • Disregard Practical Coding Practice: Balance ChatGPT guidance with hands-on coding practice to reinforce theoretical knowledge with practical application.

Let me know in the comments how you use ChatGPT to help you code!
Happy coding!
Ellen

🌐 Follow me on LinkedIn
🚀 Check out my portfolio: liviaellen.com/portfolio
👏 My Previous AR Works: liviaellen.com/ar-profile
☕ or just buy me a real coffee ❤ — Yes I love coffee.

About the Author

I’m Ellen, a Machine Learning engineer with 6 years of experience, currently working at a fintech startup in San Francisco. My background spans data science roles in oil & gas consulting, as well as leading AI and data training programs across APAC, the Middle East, and Europe.

I’m currently completing my Master’s in Data Science (graduating May 2025) and actively looking for my next opportunity as a machine learning engineer. If you’re open to referring or connecting, I’d truly appreciate it!

I love creating real-world impact through AI and I’m always open to project-based collaborations as well.



What the Washington Post's OpenAI deal says about AI licensing


  • Trump's first 100 days battling the press, the media's shift to video podcasts, and more.
  • The evolution of AI content licensing deals

The Washington Post has become the latest major publisher to strike a licensing deal with OpenAI, joining a growing cohort that now spans more than 20 news organizations.

It's part of a familiar pattern: every few months, OpenAI locks in another publisher to shore up its content pipeline. But the terms of these deals appear to be quietly evolving, subtly moving away from the explicit language around training data that defined earlier agreements and raising new questions about what these partnerships now mean.

The Washington Post deal centers on surfacing its content in response to news-related queries. "As part of this partnership, ChatGPT will display summaries, quotes, and links to original reporting from The Post in response to relevant questions," reads the April 22 announcement of the publication's deal with OpenAI. In contrast, past deals with publishers such as Axel Springer and Time, signed in December 2023 and June 2024 respectively, explicitly included provisions for training OpenAI's LLMs on their content.

The Guardian's deal with OpenAI, announced in February 2025, is worded similarly to the Washington Post announcement, with no mention of training data. A Guardian spokesperson declined to comment on the terms of its agreement with OpenAI. The Washington Post did not respond to requests for comment.

These somewhat subtle shifts in language could signal a broader change in the AI landscape, according to conversations with four media legal experts. They could point to a change in how AI content licensing deals are structured going forward, with more publishers potentially seeking agreements that prioritize attribution and prominence in AI search engines over rights for model training.

Another factor to keep in mind: these AI companies have already trained their LLMs on vast amounts of content available on the web, according to Aaron Rubin, a partner in the strategic transactions and licensing group at the law firm Gunderson Dettmer. And because AI companies face litigation from media companies claiming copyright infringement, such as The New York Times' case against OpenAI, continuing to pay to license data for training purposes could be seen as "an implicit admission" that they should have paid to license that data rather than taking it for free, Rubin said.

"[AI companies] already have a trillion words that they've stolen. They don't need the additional words that badly for training, but they want to have up-to-date content for answers [in their AI search engines]," said Bill Gross, founder of the AI startup Prorata.ai, which is building technology solutions to compensate publishers for content used by generative AI companies.

Both AI companies and publishers can benefit from this potential evolution, according to Rubin. AI companies get access to reliable, up-to-date news from trusted sources to answer questions about current events in their products, and publishers "can fill a gap they were afraid they'd be missing with the way these AI tools have evolved. They were losing clicks and eyeballs and links to their pages," he said. Better attribution in places like ChatGPT search has the potential to drive more traffic to publishers' sites. At least, that's the hope.

"It has the potential to generate more money for publishers," Rubin said. "Publishers are betting that this is how people are going to interact with the media in the future."

Since last fall, OpenAI has been challenging search giants like Google with its AI search engine, ChatGPT search, and that effort depends on access to news content. Asked whether the structure of OpenAI's deals with publishers had changed, an OpenAI spokesperson pointed to the company's launch of ChatGPT search in October 2024, as well as improvements announced this week.

"We have a direct feed to our publisher partners' content in order to display summaries, quotes, and attributed links to original reporting in response to relevant questions," the spokesperson said. "That's one component of the deals. Post-training helps increase the accuracy of answers related to a publisher's content." The spokesperson did not respond to other requests for comment.

It is unclear how much publishers like The Washington Post stand to make from OpenAI, especially as a different model centered on ChatGPT search may emerge. But the outlook for licensing deals between publishers and AI companies appears to be worsening. The value of these deals is "plummeting," at least according to The Atlantic's CEO Nicholas Thompson, who spoke at the Reuters Next event last December.

"There is still a market for licensing content for training, and that remains important, but we will continue to see a focus on striking deals that result in driving traffic to sites," said John Monterubio, a partner in the advanced media and technology group at the law firm Loeb & Loeb. "It will be the new form of SEO marketing and ad buying: showing up higher in the results when communicating with these [generative AI] tools."

What we've heard

"We don't have to worry about a somewhat false narrative of: the cookies have to go... so you can put all this bandwidth and power into improving the current market, without worrying about a potential future problem that was in Google's control the whole time."

Anonymous publishing executive on Google's decision last week to keep using third-party cookies in Chrome.

Numbers to know

$50 million: the amount the Los Angeles Times lost in 2024.

50%: the percentage of U.S. adults who said AI will have a very or somewhat negative impact on the news people get in the U.S. over the next 20 years, according to a Pew Research Center study.

$100 million: the amount Spotify has paid out to podcast publishers and creators since January.

0.3%: the expected decline in media usage (across digital and traditional channels) in 2025, the first drop since 2009, according to PQ Media Research.

What we've covered

AI lawsuits highlight publishers' struggles to keep bots from scraping content

  • Ziff Davis' recent lawsuit against OpenAI highlights the reality that publishers still have no reliable way to stop AI companies from scraping their content for free.
  • While tools such as robots.txt files, paywalls, and AI-blocking tags have emerged, many publishers admit it is very difficult to enforce control over every bot, especially since some ignore standard protocols or mask their identities.

Read more here.

Who would buy Chrome?

  • Google's search antitrust trial could force Google to divest the Chrome browser.
  • If it did, OpenAI, Perplexity, Yahoo, and DuckDuckGo could be among the potential buyers.

Read more about the potential impact of a Chrome sell-off here.

TikTok is courting creators and agencies to take part in its live tools

  • TikTok is trying to demonstrate the revenue potential of its live tools.
  • The social media platform says its creators now collectively generate $10 million in revenue daily through live streaming.

Read more about TikTok's pitch here.

WTF are gray bots?

  • Generative AI crawlers and scrapers are being called "gray bots" by some to illustrate the blurring line between real and fake traffic.
  • These bots can skew analytics and steal content, and AI-driven ad impressions can hurt click-through and conversion rates.

Read more about why gray bots are a risk for publishers here.

Is Facebook becoming a new revenue stream for publishers again?

  • Publishers have seen a recent spike in Facebook referrals, and it is, somewhat surprisingly, coinciding with an influx of revenue from Meta's content monetization program.
  • Of the 10 publishers Digiday spoke to for this article, several are on track to make between six and seven figures this year from Meta's latest content monetization program.

Read more about what publishers are getting from Facebook here.

What we're reading

News outlets' podcast video ambitions highlight the format's move from audio to TV

News outlets like The New York Times and The Atlantic are putting more resources into producing video versions of popular podcast shows to tap into YouTube's younger audience, Vanity Fair reported.

Perplexity wants to collect data on users to sell personalized ads

Perplexity CEO Aravind Srinivas said Perplexity is building its own browser to collect user data and sell personalized ads, TechCrunch reported.

President Trump targets the press in his first 100 days

President Trump has targeted traditional media companies in his first 100 days, using tactics ranging from banning outlets that cover White House events to launching investigations into major networks, Axios reported.

Semafor will test subscriptions

Semafor will "test" subscriptions in "due time," founder Justin Smith told New York magazine's Intelligencer in a deep dive on the newsletter-focused news startup.
