
Rampant AI Cheating Is Ruining Education Alarmingly Fast


Illustration: New York Magazine

Chungin “Roy” Lee stepped onto Columbia University’s campus this past fall and, by his own admission, proceeded to use generative artificial intelligence to cheat on nearly every assignment. As a computer-science major, he depended on AI for his introductory programming classes: “I’d just dump the prompt into ChatGPT and hand in whatever it spat out.” By his rough math, AI wrote 80 percent of every essay he turned in. “At the end, I’d put on the finishing touches. I’d just insert 20 percent of my humanity, my voice, into it,” Lee told me recently.

Lee was born in South Korea and grew up outside Atlanta, where his parents run a college-prep consulting business. He said he was admitted to Harvard early in his senior year of high school, but the university rescinded its offer after he was suspended for sneaking out during an overnight field trip before graduation. A year later, he applied to 26 schools; he didn’t get into any of them. So he spent the next year at a community college, before transferring to Columbia. (His personal essay, which turned his winding road to higher education into a parable for his ambition to build companies, was written with help from ChatGPT.) When he started at Columbia as a sophomore this past September, he didn’t worry much about academics or his GPA. “Most assignments in college are not relevant,” he told me. “They’re hackable by AI, and I just had no interest in doing them.” While other new students fretted over the university’s rigorous core curriculum, described by the school as “intellectually expansive” and “personally transformative,” Lee used AI to breeze through with minimal effort. When I asked him why he had gone through so much trouble to get to an Ivy League university only to off-load all of the learning to a robot, he said, “It’s the best place to meet your co-founder and your wife.”

By the end of his first semester, Lee checked off one of those boxes. He met a co-founder, Neel Shanmugam, a junior in the school of engineering, and together they developed a series of potential start-ups: a dating app just for Columbia students, a sales tool for liquor distributors, and a note-taking app. None of them took off. Then Lee had an idea. As a coder, he had spent some 600 miserable hours on LeetCode, a training platform that prepares coders to answer the algorithmic riddles tech companies ask job and internship candidates during interviews. Lee, like many young developers, found the riddles tedious and mostly irrelevant to the work coders might actually do on the job. What was the point? What if they built a program that hid AI from browsers during remote job interviews so that interviewees could cheat their way through instead?

In February, Lee and Shanmugam launched a tool that did just that. Interview Coder’s website featured a banner that read F*CK LEETCODE. Lee posted a video of himself on YouTube using it to cheat his way through an internship interview with Amazon. (He actually got the internship, but turned it down.) A month later, Lee was called into Columbia’s academic-integrity office. The school put him on disciplinary probation after a committee found him guilty of “advertising a link to a cheating tool” and “providing students with the knowledge to access this tool and use it how they see fit,” according to the committee’s report.

Lee thought it absurd that Columbia, which had a partnership with ChatGPT’s parent company, OpenAI, would punish him for innovating with AI. Although Columbia’s policy on AI is similar to those of many other universities — students are prohibited from using it unless their professor explicitly permits them to do so, either on a class-by-class or case-by-case basis — Lee said he doesn’t know a single student at the school who isn’t using AI to cheat. To be clear, Lee doesn’t think this is a bad thing. “I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating,” he said.

In January 2023, just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments. In its first year of existence, ChatGPT’s total monthly visits steadily increased month-over-month until June, when schools let out for the summer. (That wasn’t an anomaly: Traffic dipped again over the summer in 2024.) Professors and teaching assistants increasingly found themselves staring at essays filled with clunky, robotic phrasing that, though grammatically flawless, didn’t sound quite like a college student — or even a human. Two and a half years later, students at large state schools, the Ivies, liberal-arts schools in New England, universities abroad, professional schools, and community colleges are relying on AI to ease their way through every facet of their education. Generative-AI chatbots — ChatGPT but also Google’s Gemini, Anthropic’s Claude, Microsoft’s Copilot, and others — take their notes during class, devise their study guides and practice tests, summarize novels and textbooks, and brainstorm, outline, and draft their essays. STEM students are using AI to automate their research and data analyses and to sail through dense coding and debugging assignments. “College is just how well I can use ChatGPT at this point,” a student in Utah recently captioned a video of herself copy-and-pasting a chapter from her Genocide and Mass Atrocity textbook into ChatGPT.

Sarah, a freshman at Wilfrid Laurier University in Ontario, said she first used ChatGPT to cheat during the spring semester of her final year of high school. (Sarah’s name, like those of other current students in this article, has been changed for privacy.) After getting acquainted with the chatbot, Sarah used it for all her classes: Indigenous studies, law, English, and a “hippie farming class” called Green Industries. “My grades were amazing,” she said. “It changed my life.” Sarah continued to use AI when she started college this past fall. Why wouldn’t she? Rarely did she sit in class and not see other students’ laptops open to ChatGPT. Toward the end of the semester, she began to think she might be dependent on the website. She already considered herself addicted to TikTok, Instagram, Snapchat, and Reddit, where she writes under the username maybeimnotsmart. “I spend so much time on TikTok,” she said. “Hours and hours, until my eyes start hurting, which makes it hard to plan and do my schoolwork. With ChatGPT, I can write an essay in two hours that normally takes 12.”

Teachers have tried AI-proofing assignments, returning to Blue Books or switching to oral exams. Brian Patrick Green, a tech-ethics scholar at Santa Clara University, immediately stopped assigning essays after he tried ChatGPT for the first time. Less than three months later, teaching a course called Ethics and Artificial Intelligence, he figured a low-stakes reading reflection would be safe — surely no one would dare use ChatGPT to write something personal. But one of his students turned in a reflection with robotic language and awkward phrasing that Green knew was AI-generated. A philosophy professor across the country at the University of Arkansas at Little Rock caught students in her Ethics and Technology class using AI to respond to the prompt “Briefly introduce yourself and say what you’re hoping to get out of this class.”

It isn’t as if cheating is new. But now, as one student put it, “the ceiling has been blown off.” Who could resist a tool that makes every assignment easier with seemingly no consequences? After spending the better part of the past two years grading AI-generated papers, Troy Jollimore, a poet, philosopher, and Cal State Chico ethics professor, has concerns. “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate,” he said. “Both in the literal sense and in the sense of being historically illiterate and having no knowledge of their own culture, much less anyone else’s.” That future may arrive sooner than expected when you consider what a short window college really is. Already, roughly half of all undergrads have never experienced college without easy access to generative AI. “We’re talking about an entire generation of learning perhaps significantly undermined here,” said Green, the Santa Clara tech ethicist. “It’s short-circuiting the learning process, and it’s happening fast.”

Before OpenAI released ChatGPT in November 2022, cheating had already reached a sort of zenith. At the time, many college students had finished high school remotely, largely unsupervised, and with access to tools like Chegg and Course Hero. These companies advertised themselves as vast online libraries of textbooks and course materials but, in reality, were cheating multi-tools. For $15.95 a month, Chegg promised answers to homework questions in as little as 30 minutes, 24/7, from the 150,000 experts with advanced degrees it employed, mostly in India. When ChatGPT launched, students were primed for a tool that was faster, more capable.

But school administrators were stymied. There would be no way to enforce an all-out ChatGPT ban, so most adopted an ad hoc approach, leaving it up to professors to decide whether to allow students to use AI. Some universities welcomed it, partnering with developers, rolling out their own chatbots to help students register for classes, or launching new classes, certificate programs, and majors focused on generative AI. But regulation remained difficult. How much AI help was acceptable? Should students be able to have a dialogue with AI to get ideas but not ask it to write the actual sentences?

These days, professors will often state their policy on their syllabi — allowing AI, for example, as long as students cite it as if it were any other source, or permitting it for conceptual help only, or requiring students to provide receipts of their dialogue with a chatbot. Students often interpret those instructions as guidelines rather than hard rules. Sometimes they will cheat on their homework without even knowing — or knowing exactly how much — they are violating university policy when they ask a chatbot to clean up a draft or find a relevant study to cite. Wendy, a freshman finance major at one of the city’s top universities, told me that she is against using AI. Or, she clarified, “I’m against copy-and-pasting. I’m against cheating and plagiarism. All of that. It’s against the student handbook.” Then she described, step-by-step, how on a recent Friday at 8 a.m., she called up an AI platform to help her write a four-to-five-page essay due two hours later.

Whenever Wendy uses AI to write an essay (which is to say, whenever she writes an essay), she follows three steps. Step one: “I say, ‘I’m a first-year college student. I’m taking this English class.’” Otherwise, Wendy said, “it will give you a very advanced, very complicated writing style, and you don’t want that.” Step two: Wendy provides some background on the class she’s taking before copy-and-pasting her professor’s instructions into the chatbot. Step three: “Then I ask, ‘According to the prompt, can you please provide me an outline or an organization to give me a structure so that I can follow and write my essay?’ It then gives me an outline, introduction, topic sentences, paragraph one, paragraph two, paragraph three.” Sometimes, Wendy asks for a bullet list of ideas to support or refute a given argument: “I have difficulty with organization, and this makes it really easy for me to follow.”
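Wendy’s three steps map almost one-to-one onto a chat-API conversation. Here is a minimal sketch of that persona-context-structure sequence, assuming OpenAI’s current Python SDK and an illustrative model name; the article doesn’t say which platform or model Wendy actually uses, and the pasted assignment prompt is a stand-in.

```python
# A minimal sketch of the three-step routine described above. The SDK usage
# is OpenAI's current Python client; the model name and the pasted prompt
# are illustrative stand-ins, not details from the article.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

messages = [
    # Step one: establish the persona so the style isn't suspiciously advanced.
    {"role": "user", "content": "I'm a first-year college student. I'm taking this English class."},
    # Step two: background on the class, then the professor's pasted instructions.
    {"role": "user", "content": "Here is the assignment prompt from my professor: <pasted instructions>"},
    # Step three: ask for structure (outline and topic sentences), not finished prose.
    {"role": "user", "content": "According to the prompt, can you please provide me an outline or an organization to give me a structure so that I can follow and write my essay?"},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```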

Once the chatbot had outlined Wendy’s essay, providing her with a list of topic sentences and bullet points of ideas, all she had to do was fill it in. Wendy delivered a tidy five-page paper at an acceptably tardy 10:17 a.m. When I asked her how she did on the assignment, she said she got a good grade. “I really like writing,” she said, sounding strangely nostalgic for her high-school English class — the last time she wrote an essay unassisted. “Honestly,” she continued, “I think there is beauty in trying to plan your essay. You learn a lot. You have to think, Oh, what can I write in this paragraph? Or What should my thesis be? ” But she’d rather get good grades. “An essay with ChatGPT, it’s like it just gives you straight up what you have to follow. You just don’t really have to think that much.”

I asked Wendy if I could read the paper she turned in, and when I opened the document, I was surprised to see the topic: critical pedagogy, the philosophy of education pioneered by Paulo Freire. The philosophy examines the influence of social and political forces on learning and classroom dynamics. Her opening line: “To what extent is schooling hindering students’ cognitive ability to think critically?” Later, I asked Wendy if she recognized the irony in using AI to write not just a paper on critical pedagogy but one that argues learning is what “makes us truly human.” She wasn’t sure what to make of the question. “I use AI a lot. Like, every day,” she said. “And I do believe it could take away that critical-thinking part. But it’s just — now that we rely on it, we can’t really imagine living without it.”

Most of the writing professors I spoke to told me that it’s abundantly clear when their students use AI. Sometimes there’s a smoothness to the language, a flattened syntax; other times, it’s clumsy and mechanical. The arguments are too evenhanded — counterpoints tend to be presented just as rigorously as the paper’s central thesis. Words like multifaceted and context pop up more than they might normally. On occasion, the evidence is more obvious, as when last year a teacher reported reading a paper that opened with “As an AI, I have been programmed …” Usually, though, the evidence is more subtle, which makes nailing an AI plagiarist harder than identifying the deed. Some professors have resorted to deploying so-called Trojan horses, sticking strange phrases, in small white text, in between the paragraphs of an essay prompt. (The idea is that this would theoretically prompt ChatGPT to insert a non sequitur into the essay.) Students at Santa Clara recently found the word broccoli hidden in a professor’s assignment. Last fall, a professor at the University of Oklahoma sneaked the phrases “mention Finland” and “mention Dua Lipa” in his. A student discovered his trap and warned her classmates about it on TikTok. “It does work sometimes,” said Jollimore, the Cal State Chico professor. “I’ve used ‘How would Aristotle answer this?’ when we hadn’t read Aristotle. But I’ve also used absurd ones and they didn’t notice that there was this crazy thing in their paper, meaning these are people who not only didn’t write the paper but also didn’t read their own paper before submitting it.”
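The grading-side half of the Trojan-horse trick is simple enough to mechanize. Below is a minimal sketch of a check for planted phrases, assuming a hypothetical phrase list and filename; nothing in the article suggests professors actually automate this step.

```python
# A minimal sketch: scan a submission for phrases that were hidden in small
# white text inside the essay prompt. The phrase list and the filename are
# hypothetical examples, not any professor's actual setup.
TRAP_PHRASES = ["broccoli", "mention Finland", "mention Dua Lipa", "Aristotle"]

def find_planted_phrases(essay_text: str) -> list[str]:
    """Return every planted phrase that appears in the submitted essay."""
    lowered = essay_text.lower()
    return [phrase for phrase in TRAP_PHRASES if phrase.lower() in lowered]

with open("submission.txt") as f:
    hits = find_planted_phrases(f.read())

if hits:
    print(f"Possible unedited AI output; planted phrases found: {hits}")
```

A hit isn’t proof by itself, but, as Jollimore notes, it strongly suggests the student neither wrote the paper nor reread it before submitting.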

Still, while professors may think they are good at detecting AI-generated writing, studies have found they’re actually not. One, published in June 2024, used fake student profiles to slip 100 percent AI-generated work into professors’ grading piles at a U.K. university. The professors failed to flag 97 percent. It doesn’t help that since ChatGPT’s launch, AI’s capacity to write human-sounding essays has only gotten better. Which is why universities have enlisted AI detectors like Turnitin, which uses AI to recognize patterns in AI-generated text. After evaluating a block of text, detectors provide a percentage score that indicates the alleged likelihood it was AI-generated. Students talk about professors who are rumored to have certain thresholds (25 percent, say) above which an essay might be flagged as an honor-code violation. But I couldn’t find a single professor — at large state schools or small private schools, elite or otherwise — who admitted to enforcing such a policy. Most seemed resigned to the belief that AI detectors don’t work. It’s true that different AI detectors have vastly different success rates, and there is a lot of conflicting data. While some claim to have less than a one percent false-positive rate, studies have shown they trigger more false positives for essays written by neurodivergent students and students who speak English as a second language. Turnitin’s chief product officer, Annie Chechitelli, told me that the product is tuned to err on the side of caution, more inclined to trigger a false negative than a false positive so that teachers don’t wrongly accuse students of plagiarism. I fed Wendy’s essay through a free AI detector, ZeroGPT, and it came back as 11.74 percent AI-generated, which seemed low given that AI, at the very least, had generated her central arguments. I then fed a chunk of text from the Book of Genesis into ZeroGPT and it came back as 93.33 percent AI-generated.
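For a sense of how the rumored threshold policy would behave on those two scores, here is a minimal sketch; the 25 percent cutoff is the figure students gossip about, not any school’s confirmed rule, and the function is a stand-in for a real detector’s output.

```python
# A minimal sketch of a rumored threshold policy applied to the two ZeroGPT
# scores reported above. The 25 percent cutoff is the rumored figure, not a
# confirmed policy anywhere.
def flagged(ai_likelihood_pct: float, threshold: float = 25.0) -> bool:
    """Flag an essay when the detector's AI-likelihood score meets the threshold."""
    return ai_likelihood_pct >= threshold

scores = {
    "Wendy's AI-assisted essay": 11.74,  # ZeroGPT score reported above
    "Book of Genesis excerpt": 93.33,    # ZeroGPT score reported above
}
for label, pct in scores.items():
    print(f"{label}: {pct}% -> {'flagged' if flagged(pct) else 'passes'}")
```

Under that rule, the essay AI actually helped write sails through while Genesis gets flagged, which is the detection problem in miniature.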

There are, of course, plenty of simple ways to fool both professors and detectors. After using AI to produce an essay, students can always rewrite it in their own voice or add typos. Or they can ask AI to do that for them: One student on TikTok said her preferred prompt is “Write it as a college freshman who is a li’l dumb.” Students can also launder AI-generated paragraphs through other AIs, some of which advertise the “authenticity” of their outputs or allow students to upload their past essays to train the AI in their voice. “They’re really good at manipulating the systems. You put a prompt in ChatGPT, then put the output into another AI system, then put it into another AI system. At that point, if you put it into an AI-detection system, it decreases the percentage of AI used every time,” said Eric, a sophomore at Stanford.

Most professors have come to the conclusion that stopping rampant AI abuse would require more than simply policing individual cases and would likely mean overhauling the education system to consider students more holistically. “Cheating correlates with mental health, well-being, sleep exhaustion, anxiety, depression, belonging,” said Denise Pope, a senior lecturer at Stanford and one of the world’s leading student-engagement researchers.

Many teachers now seem to be in a state of despair. In the fall, Sam Williams was a teaching assistant for a writing-intensive class on music and social change at the University of Iowa that, officially, didn’t allow students to use AI at all. Williams enjoyed reading and grading the class’s first assignment: a personal essay that asked the students to write about their own music tastes. Then, on the second assignment, an essay on the New Orleans jazz era (1890 to 1920), many of his students’ writing styles changed drastically. Worse were the ridiculous factual errors. Multiple essays contained entire paragraphs on Elvis Presley (born in 1935). “I literally told my class, ‘Hey, don’t use AI. But if you’re going to cheat, you have to cheat in a way that’s intelligent. You can’t just copy exactly what it spits out,’” Williams said.

Williams knew most of the students in this general-education class were not destined to be writers, but he thought the work of getting from a blank page to a few semi-coherent pages was, above all else, a lesson in effort. In that sense, most of his students utterly failed. “They’re using AI because it’s a simple solution and it’s an easy way for them not to put in time writing essays. And I get it, because I hated writing essays when I was in school,” Williams said. “But now, whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them.”

By November, Williams estimated that at least half of his students were using AI to write their papers. Attempts at accountability were pointless. Williams had no faith in AI detectors, and the professor teaching the class instructed him not to fail individual papers, even the clearly AI-smoothed ones. “Every time I brought it up with the professor, I got the sense he was underestimating the power of ChatGPT, and the departmental stance was, ‘Well, it’s a slippery slope, and we can’t really prove they’re using AI,’” Williams said. “I was told to grade based on what the essay would’ve gotten if it were a ‘true attempt at a paper.’ So I was grading people on their ability to use ChatGPT.”

The “true attempt at a paper” policy ruined Williams’s grading scale. If he gave a solid paper that was obviously written with AI a B, what should he give a paper written by someone who actually wrote their own paper but submitted, in his words, “a barely literate essay”? The confusion was enough to sour Williams on education as a whole. By the end of the semester, he was so disillusioned that he decided to drop out of graduate school altogether. “We’re in a new generation, a new time, and I just don’t think that’s what I want to do,” he said.

Jollimore, who has been teaching writing for more than two decades, is now convinced that the humanities, and writing in particular, are quickly becoming an anachronistic art elective like basket-weaving. “Every time I talk to a colleague about this, the same thing comes up: retirement. When can I retire? When can I get out of this? That’s what we’re all thinking now,” he said. “This is not what we signed up for.” Williams, and other educators I spoke to, described AI’s takeover as a full-blown existential crisis. “The students kind of recognize that the system is broken and that there’s not really a point in doing this. Maybe the original meaning of these assignments has been lost or is not being communicated to them well.”

He worries about the long-term consequences of passively allowing 18-year-olds to decide whether to actively engage with their assignments. Would it accelerate the widening soft-skills gap in the workplace? If students rely on AI for their education, what skills would they even bring to the workplace? Lakshya Jain, a computer-science lecturer at the University of California, Berkeley, has been using those questions in an attempt to reason with his students. “If you’re handing in AI work,” he tells them, “you’re not actually anything different than a human assistant to an artificial-intelligence engine, and that makes you very easily replaceable. Why would anyone keep you around?” That’s not theoretical: The COO of a tech research firm recently asked Jain why he needed programmers any longer.

The ideal of college as a place of intellectual growth, where students engage with deep, profound ideas, was gone long before ChatGPT. The combination of high costs and a winner-takes-all economy had already made it feel transactional, a means to an end. (In a recent survey, Deloitte found that just over half of college graduates believe their education was worth the tens of thousands of dollars it costs a year, compared with 76 percent of trade-school graduates.) In a way, the speed and ease with which AI proved itself able to do college-level work simply exposed the rot at the core. “How can we expect them to grasp what education means when we, as educators, haven’t begun to undo the years of cognitive and spiritual damage inflicted by a society that treats schooling as a means to a high-paying job, maybe some social status, but nothing more?” Jollimore wrote in a recent essay. “Or, worse, to see it as bearing no value at all, as if it were a kind of confidence trick, an elaborate sham?”

It’s not just the students: Multiple AI platforms now offer tools to leave AI-generated feedback on students’ essays. Which raises the possibility that AIs are now evaluating AI-generated papers, reducing the entire academic exercise to a conversation between two robots — or maybe even just one.

It’ll be years before we can fully account for what all of this is doing to students’ brains. Some early research shows that when students off-load cognitive duties onto chatbots, their capacity for memory, problem-solving, and creativity could suffer. Multiple studies published within the past year have linked AI usage with a deterioration in critical-thinking skills; one found the effect to be more pronounced in younger participants. In February, Microsoft and Carnegie Mellon University published a study that found a person’s confidence in generative AI correlates with reduced critical-thinking effort. The net effect seems, if not quite WALL-E, at least a dramatic reorganization of a person’s efforts and abilities, away from high-effort inquiry and fact-gathering and toward integration and verification. This is all especially unnerving if you combine the reality that AI is imperfect (it might rely on something factually inaccurate or just make something up entirely) with the ruinous effect social media has had on Gen Z’s ability to tell fact from fiction. The problem may be much larger than generative AI. The so-called Flynn effect refers to the consistent rise in IQ scores from generation to generation going back to at least the 1930s. That rise started to slow, and in some cases reverse, around 2006. “The greatest worry in these times of generative AI is not that it may compromise human creativity or intelligence,” Robert Sternberg, a psychology professor at Cornell University, told The Guardian, “but that it already has.”

Students are worrying about this, even if they’re not willing or able to give up the chatbots that are making their lives exponentially easier. Daniel, a computer-science major at the University of Florida, told me he remembers the first time he tried ChatGPT vividly. He marched down the hall to his high-school computer-science teacher’s classroom, he said, and whipped out his Chromebook to show him. “I was like, ‘Dude, you have to see this!’ My dad can look back on Steve Jobs’s iPhone keynote and think, Yeah, that was a big moment. That’s what it was like for me, looking at something that I would go on to use every day for the rest of my life.”

AI has made Daniel more curious; he likes that whenever he has a question, he can quickly access a thorough answer. But when he uses AI for homework, he often wonders, If I took the time to learn that, instead of just finding it out, would I have learned a lot more? At school, he asks ChatGPT to make sure his essays are polished and grammatically correct, to write the first few paragraphs of his essays when he’s short on time, to handle the grunt work in his coding classes, to cut basically all cuttable corners. Sometimes, he knows his use of AI is a clear violation of student conduct, but most of the time it feels like he’s in a gray area. “I don’t think anyone calls seeing a tutor cheating, right? But what happens when a tutor starts writing lines of your paper for you?” he said.

Recently, Mark, a freshman math major at the University of Chicago, admitted to a friend that he had used ChatGPT more than usual to help him code one of his assignments. His friend offered a somewhat comforting metaphor: “You can be a contractor building a house and use all these power tools, but at the end of the day, the house won’t be there without you.” Still, Mark said, “it’s just really hard to judge. Is this my work?” I asked Daniel a hypothetical to try to understand where he thought his work began and AI’s ended: Would he be upset if he caught a romantic partner sending him an AI-generated poem? “I guess the question is what is the value proposition of the thing you’re given? Is it that they created it? Or is the value of the thing itself?” he said. “In the past, giving someone a letter usually did both things.” These days, he sends handwritten notes — after he has drafted them using ChatGPT.

“Language is the mother, not the handmaiden, of thought,” wrote Duke professor Orin Starn in a recent column titled “My Losing Battle Against AI Cheating,” citing a quote often attributed to W. H. Auden. But it’s not just writing that develops critical thinking. “Learning math is working on your ability to systematically go through a process to solve a problem. Even if you’re not going to use algebra or trigonometry or calculus in your career, you’re going to use those skills to keep track of what’s up and what’s down when things don’t make sense,” said Michael Johnson, an associate provost at Texas A&M University. Adolescents benefit from structured adversity, whether it’s algebra or chores. They build self-esteem and work ethic. It’s why the social psychologist Jonathan Haidt has argued for the importance of children learning to do hard things, something that technology is making infinitely easier to avoid. Sam Altman, OpenAI’s CEO, has tended to brush off concerns about AI use in academia as shortsighted, describing ChatGPT as merely “a calculator for words” and saying the definition of cheating needs to evolve. “Writing a paper the old-fashioned way is not going to be the thing,” Altman, a Stanford dropout, said last year. But speaking before the Senate’s oversight committee on technology in 2023, he confessed his own reservations: “I worry that as the models get better and better, the users can have sort of less and less of their own discriminating process.” OpenAI hasn’t been shy about marketing to college students. It recently made ChatGPT Plus, normally a $20-per-month subscription, free to them during finals. (OpenAI contends that students and teachers need to be taught how to use it responsibly, pointing to the ChatGPT Edu product it sells to academic institutions.)

In late March, Columbia suspended Lee after he posted details about his disciplinary hearing on X. He has no plans to go back to school and has no desire to work for a big-tech company, either. Lee explained to me that by showing the world AI could be used to cheat during a remote job interview, he had pushed the tech industry to evolve the same way AI was forcing higher education to evolve. “Every technological innovation has caused humanity to sit back and think about what work is actually useful,” he said. “There might have been people complaining about machinery replacing blacksmiths in, like, the 1600s or 1800s, but now it’s just accepted that it’s useless to learn how to blacksmith.”

Lee has already moved on from hacking interviews. In April, he and Shanmugam launched Cluely, which scans a user’s computer screen and listens to its audio in order to provide AI feedback and answers to questions in real time without prompting. “We built Cluely so you never have to think alone again,” the company’s manifesto reads. This time, Lee attempted a viral launch with a $140,000 scripted advertisement in which a young software engineer, played by Lee, uses Cluely installed on his glasses to lie his way through a first date with an older woman. When the date starts going south, Cluely suggests Lee “reference her art” and provides a script for him to follow. “I saw your profile and the painting with the tulips. You are the most gorgeous girl ever,” Lee reads off his glasses, which rescues his chances with her.

Before launching Cluely, Lee and Shanmugam raised $5.3 million from investors, which allowed them to hire two coders, friends Lee met in community college (no job interviews or LeetCode riddles were necessary), and move to San Francisco. When we spoke a few days after Cluely’s launch, Lee was at his Realtor’s office and about to get the keys to his new workspace. He was running Cluely on his computer as we spoke. While Cluely can’t yet deliver real-time answers through people’s glasses, the idea is that someday soon it’ll run on a wearable device, seeing, hearing, and reacting to everything in your environment. “Then, eventually, it’s just in your brain,” Lee said matter-of-factly. For now, Lee hopes people will use Cluely to continue AI’s siege on education. “We’re going to target the digital LSATs; digital GREs; all campus assignments, quizzes, and tests,” he said. “It will enable you to cheat on pretty much everything.”


5 ChatGPT Image-Generation Results That Blew Me Away


ChatGPT has quietly become a formidable force in AI image generation, and most people haven’t noticed. While everyone is still debating Midjourney vs. DALL-E, OpenAI has turned ChatGPT into a creative powerhouse that rivals, and often surpasses, Gemini, Leonardo, and Ideogram.

I was genuinely surprised by how good ChatGPT’s image generation is. What began as casual experimentation quickly turned into astonishment when the results were practically indistinguishable from real photos. The real draw is that there’s no need for the technical jargon that prompting other AI image tools requires.


Gemini’s Veo 3 AI Video Generator Is Just One Step Away From Decimating Truth on the Internet


I recently tested Veo 3, Google Gemini’s newest and much-hyped video-generation model. Part of Gemini’s extremely expensive $250-per-month AI Ultra plan, Veo 3 can render small, finely detailed objects in motion, like chopped onions, and create accompanying, realistic audio. It isn’t perfect, but with careful prompt calibration and enough generations, it can create something indistinguishable, at a glance, from reality.

Yes, this is cool, deeply impressive new technology. But it’s also much more than that. It could mean the final death of truth on the internet. Veo 3 already poses a major threat as it is, but just one minor update will revolutionize deepfake creation, online harassment, and the spread of misinformation.


Once Veo 3 Gets Image Uploads, It’s All Over

For all the upgrades the Veo 3 model has over its predecessor, Veo 2, it currently lacks one key feature: the ability to generate videos based on images you upload.

With Veo 2, I can upload a photo of myself, for example, and have it generate a video of me working at my computer. Considering that Veo 2 and Whisk, Google’s AI animation tool, both support this functionality, it seems inevitable that Veo 3 will eventually get it. (We asked Google whether it plans to add this feature and will update this article with its response.) That would mean anyone could generate realistic videos of people they know saying things they never said and probably never would.

The implications are obvious in an era when clips of dubious authenticity spread like wildfire on social media every day. Don’t like your boss? Send HR a clip of them doing something inappropriate. Want to spread fake news? Post a fake press conference on Facebook. Hate your ex? Generate them doing something unseemly and send it to their whole family. The only real limits are your imagination and your morality.

If generating a video, with audio, of a real person takes just a few clicks and costs little (or nothing), how many people will abuse that capability? Even if it’s only a small minority of users, that still adds up to a lot of potential for chaos.


Google Isn’t Serious About Moderation

As you’d expect, Google imposes some limits on what you can and can’t do with Gemini. But the company isn’t strict enough to keep the worst from happening.

Of all the chatbots I’ve tested from the major tech companies, Google’s offering, Gemini, has the weakest restrictions. Gemini isn’t supposed to engage in hate speech, but it will give you examples if you ask. It isn’t supposed to generate sexualized content, but it will produce an image of someone in beachwear or lingerie if you prompt it to. It isn’t supposed to enable illegal activity, but it will put together a list of top torrenting sites if you ask. The basic restrictions that keep Gemini from generating a video of a popular political figure aren’t enough when Google’s policies are this easy to get around.


The ChatGPTJailbreak subreddit, sorted by top posts (Credit: Reddit/PCMag)

What happens when Google’s lax restrictions meet an internet community intent on breaking them? Take ChatGPTJailbreak, for example, which sits in the top 2% of subreddits by size. This community is dedicated to “unlocking an AI in conversation so that it behaves in ways it normally wouldn’t because of its built-in guardrails.” What will like-minded people do with Veo 3?


I don’t mind if someone wants to have fun getting a chatbot to generate adult content, or relies on one to find torrenting sites. But I do worry about what easy-to-generate photorealistic videos, complete with audio, mean for harassment, misinformation, and public discourse.


How to Deal With the New Normal of Veo 3

For every SynthID AI content watermarking system Google introduces, third-party watermark-removal sites and online how-to guides appear. For every chatbot with restrictions and safeguards, there’s a FreedomGPT without them. Even if Google locks Gemini down with so many filters that you can’t even generate a cute cat video, there’s very little in place to stop jailbreakers and uncensored imitators once Veo 3-level video generation goes mainstream.

For decades, sketchy Photoshopped images depicting real people doing things they never did have made the rounds on the internet; they’re simply part of life in the digital age. Accordingly, you should verify anything you see online that seems too awful, or too good, to be true. This is the new normal with Veo 3 video generation: you can’t treat any video clip you see as real unless it comes from a reputable news organization or another third party you know you can trust.

Gemini’s Veo 3 is just the first skip of a stone across the pond of widely accessible, truly realistic AI video generation. AI video-generation models will only get more realistic, offer more features, and proliferate further. Gone are the days when video evidence of something was the smoking gun. If truth isn’t dead, it’s now different, and it demands careful verification.



ChatGPT Useful for Learning Languages, but Students’ Critical Thinking Must Be Fostered When Using It, Study Says


Credit: George Pak via Pexels

Given the growing number of people turning to ChatGPT when studying a foreign language, pioneering research from UPF reveals both the potential and the shortcomings of learning a second language this way.

According to the study, which analyzes the use of ChatGPT by Chinese students learning Spanish, the platform helps them resolve specific queries, especially about vocabulary, writing, and reading comprehension. By contrast, their use of it is not part of a coherent, structured learning process, and it lacks a critical view of the answers the tool provides. The study therefore urges foreign-language teachers to advise students so that they can make more reflective and critical use of ChatGPT.

These findings come from the first qualitative study in the world to examine how Chinese students use ChatGPT to learn Spanish, developed by the Research Group on Language Learning and Teaching (Gr@EL) of UPF’s Department of Translation and Language Sciences. The study was conducted by Shanshan Huang, a Gr@EL researcher, under the supervision of the group’s coordinator, Daniel Cassany. The two recently published an article on the subject in the Journal of China Computer-Assisted Language Learning.

To carry out the research, the ChatGPT use of 10 Chinese students learning Spanish was examined qualitatively over the course of a week. Specifically, a total of 370 prompts (the inputs each user types into ChatGPT to obtain the desired information) were analyzed in depth, along with the platform’s corresponding responses. The study was supplemented by questionnaires administered to the students and by comments from the students’ own learning diaries.

The advantages of ChatGPT

The tool served as a single window for resolving every linguistic query, adapting to each student’s needs. Regarding ChatGPT’s potential for language learning, the study reveals that it lets students get answers to a range of queries about the foreign language they are studying, in this case Spanish, from a single technological platform.

For example, they can ask ChatGPT about vocabulary and spelling, rather than first consulting a digital dictionary and then a spell-checker. In addition, the platform adapts to each student’s profile and needs, based on the kinds of interactions each user proposes.

In 9 out of 10 cases, students ask no follow-up questions after receiving their first answer from ChatGPT. The study warns, however, that most students use ChatGPT uncritically, since they generally do not ask follow-up questions after getting an initial answer to their specific queries about the Spanish language.

Of the 370 interactions analyzed, 331 (89.45%) involved a single prompt-response exchange. The remaining interactions correspond to 31 successive prompt-response loops in which the student asked the tool for greater clarity and precision after receiving the initial answer.

Most queries deal with vocabulary, reading comprehension, and writing; queries about oral communication and grammar are residual.

The study also shows which specific topics students raise in the chat. Almost 90% concern vocabulary (36.22%), reading comprehension (26.76%), and writing in Spanish (26.49%). By contrast, only about one in ten concerns grammar, especially complex concepts, or oral expression.

The researchers note that this distribution of query topics could be explained by cultural and technological factors. On the one hand, the model for learning Spanish in China places less emphasis on oral communication than on writing and reading-comprehension skills. On the other, ChatGPT version 3.5, the version used by the students who took part in the study, is better at generating and interpreting written texts than at interacting with users in conversation.

Follow-up studies will need to analyze, however, whether foreign-language students take advantage of the next version of ChatGPT (GPT-4) to improve their oral communication skills.

Fostering a new model of the student-teacher-AI relationship

In view of the study’s results, the researchers emphasize that, beyond promoting students’ digital literacy, it is even more important to strengthen their critical thinking and independent-learning skills. Foreign-language teachers can play a fundamental role in guiding students on how to organize their learning step by step with the support of AI tools like ChatGPT, approached with a critical eye.

The UPF study recommends that teachers help students develop more effective prompts and encourage richer dialogue with ChatGPT to better exploit its capabilities. In short, the study supports a new model of the relationship among teachers, AI tools, and students that can strengthen and improve the learning process.

More information:
Shanshan Huang et al., Spanish learning in the era of AI: AI as a scaffolding tool, Journal of China Computer-Assisted Language Learning (2025). DOI: 10.1515/jccall-2024-0026

Provided by Universitat Pompeu Fabra – Barcelona

Citation: ChatGPT useful for learning languages, but students’ critical thinking must be fostered when using it, study says (2025, June 3), retrieved June 3, 2025 from https://phys.org/news/2025-06-chatgpt-languages-students-critical-vision.html

