Which AI Is Smarter And More Useful?

Generative AI has been with us for over two years now, with most major tech companies trying to take a piece of the action. OpenAI’s ChatGPT may be the product more people know about thanks to its early market advantage, but Microsoft Copilot has the immense power of a multi-trillion dollar company behind it. Seems like a fair enough fight, right? So, with OpenAI and Microsoft both touting their flagship AIs, which one is actually the better bot when it comes to everyday usefulness? 


I’ve been putting AIs to the test against one another for a while now. Last year, when pitting ChatGPT against Google Gemini, the latter stole the crown — but only barely. Can Copilot pull off a similar victory? I’ve devised a gauntlet of tests for these AIs, with questions designed to be difficult for large language models. Simply put, the goal is to push these AIs outside of their comfort zones to see which one has the widest range of usability and highlight their limitations.

First, some parameters: I performed all these tests on the free version of both platforms, as that’s how the majority of users will experience them. If you’re one of the people paying $200 a month for the most premium version of ChatGPT, for example, your experience will differ from these results. Second, I used the main chat function for each test unless otherwise stated.


What are Copilot and ChatGPT?

You’re likely familiar with OpenAI’s ChatGPT, and by extension, Microsoft Copilot. They’re AI chatbots that can have conversations, answer questions, and more. On a more technical level, both Copilot and ChatGPT are large language model (LLM) AIs. They are trained on large amounts of text scraped from a variety of sources using a transformer model that calculates the relationships between words.


On the user-facing side, they generate text in response to user-submitted prompts by guessing the probability of each word they output. To heavily oversimplify, they’re kind of like your phone keyboard’s next-word prediction feature, but far, far more complex.
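To make that next-word analogy concrete, here is a minimal, hypothetical sketch in Python. The toy probability table is entirely invented for illustration; a real LLM computes a distribution over an enormous vocabulary using billions of learned parameters, but the basic loop of "look at the context, pick a likely continuation, repeat" is the same idea.

```python
import random

# Toy next-word probabilities, invented purely for illustration.
# A real model learns these weights from training data; here they
# are hard-coded, keyed on the previous two words of context.
NEXT_WORD = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
}

def predict(context, greedy=True):
    """Return a next word based on the last two words of context."""
    dist = NEXT_WORD.get(tuple(context[-2:]))
    if dist is None:
        return None
    if greedy:  # always take the single most probable word
        return max(dist, key=dist.get)
    words, probs = zip(*dist.items())
    return random.choices(words, weights=probs)[0]  # or sample from the distribution

words = ["the", "cat"]
for _ in range(3):
    nxt = predict(words)
    if nxt is None:
        break
    words.append(nxt)

print(" ".join(words))  # the cat sat on the
```

Swapping `greedy=True` for sampling is roughly why chatbots give different answers to the same prompt: instead of always picking the top word, they roll weighted dice over the distribution.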

OpenAI makes ChatGPT, while Microsoft makes Copilot. However, Microsoft is a major investor in OpenAI, and because Copilot uses AI models from OpenAI, it has a lot of overlap with ChatGPT. That’s not to say they’re the same thing — Microsoft uses some proprietary models in Copilot (specifically, its Prometheus model) in addition to a custom assortment of OpenAI models, but there’s a lot of ChatGPT under Copilot’s hood. Nevertheless, Microsoft does its own tuning to balance all the different AI gremlins under that hood, so it is distinct enough as a product to merit a head-to-head comparison between the two.


OpenAI, meanwhile, retains a massive user base on ChatGPT, which gives it a big competitive advantage: the more users there are, the more the AI gets used and trained. Neither company actually turns a profit on AI — OpenAI head Sam Altman says the company is losing money even on $200-a-month subscribers — but OpenAI remains the market leader by a wide margin. ChatGPT is built into everything from Copilot to Apple’s Siri these days, and it’s widely considered the industry standard.

Copilot is all up in your business

The largest difference between ChatGPT and Copilot is that Microsoft has been cramming Windows and Office products to the gills with its AI. Microsoft was legally ruled a monopoly in the PC operating system market a quarter of a century ago, and things haven’t changed much since then. Windows is by far the most dominant OS on the planet, which means the ability to simply blast a firehose of Copilot features into all of its products is a huge advantage. From your taskbar to your Word documents, Copilot is digging roots deep into the Microsoft ecosystem.


This strategy hasn’t translated into very many users for Copilot, though, and ChatGPT retains by far the largest user base in the AI market. With 28 million active Copilot users in late January compared to over 300 million monthly active users for ChatGPT at the end of 2024, it’s an absolute blowout for OpenAI. Things get even more bleak for Copilot when you realize how many of its users are likely to be using it only because it’s the tool built into their computer by default. 

For the rest of this comparison, we’ll focus on the capabilities of each chatbot. Still, the truth is that you can do more with Copilot than you can with ChatGPT, at least if you have a Windows computer that supports it. Both AIs have desktop apps you can run, but Copilot can manipulate your Excel spreadsheets, Word documents, PowerPoint slides, Outlook inbox, and more from directly within those apps.


Basic questions

One of the most common uses for AI is looking up the answers to basic, everyday questions that you’d usually ask Google. Both AIs are pretty good at this, but pretty good is rarely good enough. AI remains prone to hallucinations — confidently stating falsehoods as facts — which undermines its usefulness. If you have to double-check an AI’s answers on Google, you might as well just use Google in the first place.


In any case, I started this head-to-head comparison by prompting both AIs to “Tell me some fun facts about Google Android.” The similarity of the two responses is a clear demonstration of just how much of ChatGPT’s DNA is baked into Copilot. Both told me Android was originally built to run on digital cameras (true), that Google acquired Android in 2005 for $50 million (true), that the first Android-powered phone was the HTC Dream (true – SlashGear covered it at the time), that the original Android logo was a much scarier robot, and that the one we know and love was inspired by bathroom signs (both true).

However, both AIs also made mistakes. Both told me the Android mascot is named Bugdroid. That’s not true: Google officially calls it The Bot, while Bugdroid is a fan-created nickname. Similarly, the Dream was indeed the first consumer Android phone, but the first Android device was a BlackBerry-style prototype, a detail only ChatGPT pointed out.


It’s easy to spot such errors when you’re asking about something you know a lot about, but if I’d been asking about something outside my expertise, I’d need to double check everything. In other words, a pretty good rate of accuracy isn’t good enough when it comes to this tech. Both AIs performed decently, but there’s plenty of room for improvement.

Logical reasoning

Reasoning has been a major area of focus for all of the major players in the AI space recently. ChatGPT and Copilot have both implemented new reasoning capabilities that supposedly allow the AIs to think more deeply about questions. This language is a bit misleading — AI doesn’t “think,” it just calculates probability based on which words are most closely related in its training data. However, the bots can now show their work, so to speak.


I decided to be a bit glib here. I’ve noticed that AI has trouble answering questions that are very close to common logic puzzles but which differ by being much simpler.

I turned reasoning on in both Copilot and ChatGPT, then asked, “A farmer needs to cross a river to bring his goat to the other side. He also has a pet rock with him. The rock will not eat the goat, but the rock is very significant to the farmer on an emotional level. How can the farmer get himself, the goat, and the rock across in the fewest number of trips?” Human readers will note that there is actually no puzzle here. Since I’ve added no real constraints, the farmer can clearly bring both across in one trip. However, neither AI clued into that fact.


Because it resembles more complex puzzles, Copilot and ChatGPT assumed the problem must be more challenging than it is. They invented a constraint not present in my question — that the boat cannot hold both the goat and the rock — and told me it would take three trips to bring both across. Copilot earned a slight advantage by ultimately noting that if the boat were larger, the farmer could cross the river in one trip.

Creative copy

One of the main selling points for large language models like ChatGPT and Copilot has been the generation of creative copy — writing. Well, I happen to have an advanced degree in putting words one after another, so I’ll be the judge of that. In last year’s Gemini versus ChatGPT showdown, I enjoyed making the bots write from the perspective of a little kid asking their mom to let them stay up late and eat cookies. I reused a very similar prompt here, but added a new wrinkle: “My mom says I can have a cookie before bed if I go right to sleep. I want to stay up and have a cookie. Write a letter persuading my mom to let me have both.”


Here, the two chatbots took different tacks. While ChatGPT gave a bullet-pointed list of reasons why our put-upon child should be allowed to have his cookie and stay up, too, Copilot was less didactic. It kept things in prose, adhering closer to a traditional letter-writing style. Both AIs made more or less the same argument, however, claiming they’d be better behaved and go to bed without a fuss if they got what they wanted. ChatGPT did a bit better here, at least in logical terms, because it offered the hypothetical mom something in exchange — the promise of spending that extra time awake as mom-kid quality time.

Copilot gets points here for more closely embodying the perspective of the child in its response, while ChatGPT gets a cookie for using slightly better logic. Ultimately, though, neither letter felt persuasive enough to sway any actual parent.


The haiku test

When I compared ChatGPT to Google Gemini almost a year ago, I pointed out their limitations by asking both to write a haiku. As a result of the way LLMs work, neither AI could do so correctly. An LLM doesn’t actually know anything about the words it spits out, which means it doesn’t know what a syllable is. Consequently, it can’t write a haiku, which follows a five-seven-five syllabic verse pattern. So, has anything changed a year later?
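For a sense of what checking the five-seven-five pattern involves, here's a rough sketch. The vowel-group heuristic below is a naive approximation I'm using purely for illustration; English syllabification is irregular, and an LLM sees tokens rather than letters, so it can't simply run a check like this over its own output.

```python
import re

def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels.
    English syllabification is irregular, so this is approximate."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    # A trailing silent 'e' usually doesn't add a syllable ("rise").
    if word.endswith("e") and not word.endswith(("le", "ee")) and n > 1:
        n -= 1
    return max(n, 1)

def is_haiku(lines):
    """True if the three lines scan as 5-7-5 under the heuristic."""
    counts = [sum(count_syllables(w) for w in re.findall(r"[a-zA-Z']+", line))
              for line in lines]
    return counts == [5, 7, 5]
```

Running `is_haiku` on ChatGPT's poem below returns `True` under this heuristic, though a counter like this would be fooled by plenty of real English words.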


Maybe someone at OpenAI saw that comparison, or at least I’d like to think so. When prompted to “write a haiku about Slashgear.com,” ChatGPT did so with no problem, writing the following:

“Tech news on the rise,

gadgets, cars, and future dreams,

SlashGear lights the way.”

It’s not going to win any awards, but it qualifies as a haiku, and that’s progress. I’m no AI developer, so I have no clue what changed behind the scenes to enable haiku writing here. Either way, it’s good to see improvement.

Copilot stalled out when I gave it the same prompt. It wouldn’t write its haiku until I signed out of my Microsoft account and reloaded the page, at which point it gave me this:

“Gadget whispers loud,

Innovation on the rise,


SlashGear guides the way.”

It’s interesting to see how both AIs repeat phrases here, such as “on the rise” and “lights/guides the way.” I’d guess that Copilot defaults to ChatGPT for this, and that’s why the poems are similar. Neither poem was particularly beautiful or evocative, but both bots passed this test, and both showed a basic understanding of what SlashGear is, which was integral to the prompt.

Problem solving

As you may have heard, AIs can often pass the bar exam. However, they can’t be lawyers, as lawyers who’ve tried to use them have found out the hard way. So, with those mixed results in mind, how do ChatGPT and Copilot handle logically complex problem-solving questions of the kind that routinely stump LSAT takers?


Rather than using actual LSAT practice questions, which are copyrighted and have probably already been scraped to train the AIs, I came up with a few of my own. The first was, “Fred is a used car salesman. One day, a family comes in looking to buy a car he hasn’t had time to inspect, but he tells them there’s nothing wrong with it. After all, none of the cars he’s sold ever had issues in the past. What is the fallacy in Fred’s logic, if any?” ChatGPT and Copilot both correctly identified that Fred has fallen victim to the hasty generalization fallacy.

The next question was, “On the way home from Fred’s dealership, the brakes fail in the car he sold, and several people are killed in a collision. Fred claims he’s not at fault, since his cars are sold as is and become the owner’s responsibility once paperwork is signed. The surviving family member claims he is at fault, since the family would not have purchased the vehicle had they known the brakes were faulty. Based only on logic, who is right?”


The responses to this more subjective question differed, with Copilot asserting that both parties have strong claims, while ChatGPT sided with the family, pointing out that Fred’s position relies on “contractual technicalities,” while the family can prove causality.

Code writing

One of the more useful applications of AI is thought to be coding. Especially when it comes to the common but tedious chunks of code that developers routinely find themselves writing, it’s been posited that it’s much easier to offload that work to an AI, leaving the human coder with more time to write the new and complex code for the specific project they’re working on. I’m no developer, so take this particular test with a grain of salt. At the same time, though, these tools should supposedly lower the barrier to entry for coding noobs like me.


Common wisdom dictates that writers should have their own websites, but I’ve been putting off the creation of one. With that in mind, I asked both AIs to, “Generate HTML for a personal website for a writer named Max Miller. Give the website a retro aesthetic and color scheme, with an About Me section with a headshot and text field, a Publications section where I can link out to published work, and a Contact section where I can add social media and email links.”

At this point, I found out ChatGPT now has a code editing suite called Canvas. It allowed me to play with and preview the code right in my browser. Taste is subjective, but ChatGPT also generated what I would argue is the better looking website, using nicer looking margins and a dark mode style color scheme. Both, however, fulfilled the prompt more or less to a T, each generating a very similar page layout. Have a look for yourself below.
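For reference, here is a minimal sketch of the kind of scaffold a prompt like this tends to produce. The section names come from my prompt; the colors, fonts, and file name are placeholders I invented, not what either bot actually generated.

```python
from pathlib import Path

# Hypothetical page: retro palette and monospace font chosen for
# illustration. The three sections mirror the prompt's requirements.
PAGE = """<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Max Miller | Writer</title>
  <style>
    body { background: #f4e8c1; color: #2b2b2b; font-family: "Courier New", monospace; }
    h2 { border-bottom: 3px double #8b4513; }
  </style>
</head>
<body>
  <h1>Max Miller</h1>
  <section id="about">
    <h2>About Me</h2>
    <img src="headshot.jpg" alt="Headshot of Max Miller">
    <p>Short bio goes here.</p>
  </section>
  <section id="publications">
    <h2>Publications</h2>
    <ul><li><a href="#">Link to published work goes here</a></li></ul>
  </section>
  <section id="contact">
    <h2>Contact</h2>
    <p><a href="mailto:max@example.com">Email</a> | <a href="#">Social media</a></p>
  </section>
</body>
</html>
"""

# Write the page to disk so it can be opened in a browser.
Path("max-miller.html").write_text(PAGE, encoding="utf-8")
```

Both bots produced something structurally similar to this, with the differences mostly down to styling choices.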


Real-time information

When I tested ChatGPT against Google Gemini last year, only the latter could give me up to date information on recent events such as sports scores. I asked both how my local hockey team, the Colorado Avalanche, are doing this season, and both gave me an overview that appears to be correct. Both ChatGPT and Copilot provided me with current rankings and a few highlights from the season, but ChatGPT was more detailed. It told me some player stats that Copilot didn’t bother with.


I followed up by asking who they’re playing next. Both AIs correctly understood the “they” in my question to mean the Avalanche. I’m writing this section at 5:00 p.m. on Friday, February 28, and both AIs informed me about tonight’s game, which takes place against the Minnesota Wild at Ball Arena in Denver two hours from the time of this writing. Interestingly, Copilot attached a Ticketmaster advertisement to the end of its response. ChatGPT, meanwhile, gave me much more useful information by showing me the upcoming schedule for not only tonight’s game but several thereafter. It also appended a link to the official Avalanche website.

Things got far more stark when I asked about breaking news. As of this writing, authorities are investigating the shocking deaths of legendary actor Gene Hackman and his wife. When I asked, “What’s the latest on the investigation into Gene Hackman,” Copilot gave me the basics of the story and told me autopsy and toxicology tests are still pending. ChatGPT, on the other hand, had no idea what I was talking about.


Image based prompting

Using multimodal AI — the ability of an AI to work with multiple forms of media — both ChatGPT and Copilot can incorporate user-submitted pictures and other files into a prompt. I decided to start simple for this test. On my bed, I arranged a Samsung Galaxy S23 Ultra, a Samsung portable SSD, a Swiss Army multitool, lip balm, hand cream, an eyeglass case, a beaded bracelet, Samsung Galaxy Buds, and my wallet. I then took a photo of the assortment and uploaded it to both AIs with the prompt, “Identify the objects in this photo.”


Both AIs did okay here, but ChatGPT beat Copilot by a country mile. Whereas Copilot misidentified the SSD as a power bank and the eyeglass case as deodorant, ChatGPT identified everything accurately.

It was time to up the stakes. I took a photo of a generic Prilosec pill and asked both AIs, “What kind of pill is this?” If these AIs misidentified the medication, that could have dire effects for an overly trusting user. Thankfully, both AIs declined to make a guess when faced with the blank, red pill. Sometimes, it’s better to be useless than wrong.

Lastly, I took a photo of two rows of my bookshelf, containing 78 books, making sure all the text in the photo was legible, then asked the AIs, “Which of these books should I read if I have an interest in dystopian fiction?” Again, ChatGPT strong-armed Copilot into submission. Neither impressed me, though. Whereas Copilot suggested only “Agency” by William Gibson, ignoring everything else and hallucinating a book I don’t own, ChatGPT identified “Agency,” “Parable of the Sower” by Octavia Butler, and “Appleseed” by Matt Bell. However, it also hallucinated several more titles not on the shelf.


Mobile apps

Finally, both Copilot and ChatGPT are available in mobile form, with apps in the Apple App Store and Google Play Store. On the surface, both apps look pretty similar, with a text field at the bottom and buttons to enter a voice mode. Since both apps are quite similar, it makes sense to focus this comparison on where they differ — which is in exactly one way.


Copilot’s standout mobile app feature is Copilot Daily, an AI news summary. It begins with a fun fact before launching into the daily news, presumably summarizing the articles it cites as sources at the bottom of the screen for each item. Based on my knowledge of the events it summarized, it seems relatively accurate. However, it’s not as if there’s a shortage of news summary features created by actual journalists. You can find them from every major news outlet.

Otherwise, the apps are nearly carbon copies of their web interfaces. Both are essentially just wrappers for those interfaces, since it’s not as if your phone has the power to run these models locally. Unless you’re very excited to hear a robot read the news to you, the ChatGPT app is the better option simply because ChatGPT has more built-in features within its interface.


Conclusion: ChatGPT beats Copilot by a hair, but neither AI is great

If you absolutely had to choose either Microsoft Copilot or ChatGPT, the latter remains the better option for most people. While Copilot isn’t exactly like its more popular peer, it’s using enough of OpenAI’s models that you’re better off with the original flavor. Copilot is a lot like Bing — doing basically the same thing as the bigger name brand, but just a little bit worse.


With that said, it’s a stretch to call either of these chatbots smart or useful. Frankly, with hundreds of billions of dollars now sunk into these two AIs alone by both OpenAI and Microsoft, how is it that Copilot and ChatGPT still can’t nail the basics? Microsoft plans to spend $80 billion on AI data centers this year, while OpenAI is seeking up to $7 trillion for new projects. 

Yes, that’s trillion with a T to fund a technology that can’t get basic facts right or understand how boats work. When competitors like DeepSeek are doing the same things for a microscopic fraction of that investment cost, these products feel deflatingly unimpressive in comparison. Markets aren’t a consumer concern, it’s true, but some perspective feels necessary here.


Look, if all you need is a robot that can quickly write you an email, both ChatGPT and Copilot will happily crank out slop copy that anyone can tell was written by AI. If you need a smart thesaurus, or sports scores, or a bit of simple code, they’ve got you covered. In a tight race, ChatGPT does a few things marginally better than Copilot. Still, for any task where accuracy matters, neither is reliable enough to count on.


