10 players who are better than Stephen Curry according to ChatGPT

Stephen Curry is one of the most polarizing players in NBA history. He is clearly the best three-point shooter the league has ever seen, and he deserves to be discussed among the best players in NBA history because he has won four championships with the Golden State Warriors. However, not everyone views him as a top 10 player in league history. He has never been known as an elite defensive player, and he isn’t quite the playmaker of some of his point guard peers.

Personally, I view Curry as a top five to seven player ever and the best point guard to ever walk the planet, but there is certainly an argument against that. ChatGPT gave us 10 players who have a case for having had a better career than Curry, though, so check out the gallery to see why artificial intelligence ranks these all-time greats above the Warriors legend. We have also provided our own analysis on whether we agree or disagree with what ChatGPT had to say.

1. Michael Jordan


ChatGPT reasoning:

  • Why: Michael Jordan’s combination of scoring, defense, leadership, and dominance in critical moments makes him the consensus greatest of all time for many. He won six championships, five MVPs, and was a 10-time scoring champion. His mental toughness, work ethic, and ability to perform in clutch moments (particularly in the NBA Finals) are unmatched.
  • Better Than Curry: Jordan’s defensive prowess, complete offensive game, and six titles outshine Curry’s individual accolades.

Did ChatGPT get it right? Michael Jordan is the greatest player in NBA history, so ChatGPT was correct in justifying him as being better than Stephen Curry. Jordan has an edge on Curry in championships (six to Curry’s four), defense (Jordan was a Defensive Player of the Year and the all-time leader in First-Team All-Defensive nods), and as a scorer in every aspect except for three-point shooting.

While you may see arguments on the internet for Curry ranking above most of the rest of the players on this list, you won’t see many taking Curry above Jordan. Jordan did it all, ranking as both one of the best defensive and offensive players ever. His athleticism was unmatched, and he is the greatest scorer to ever walk the planet.

2. LeBron James


ChatGPT reasoning:

  • Why: LeBron is often viewed as the most complete player to ever play the game. His combination of size, athleticism, basketball IQ, and versatility allows him to impact every aspect of the game. He’s won four championships, four MVPs, and has been a dominant force for nearly two decades.
  • Better Than Curry: LeBron’s ability to affect the game in all areas — scoring, passing, rebounding, and defense — gives him a broader influence than Curry, despite Curry’s shooting dominance.

Did ChatGPT get it right? LeBron James and Stephen Curry often clashed in the NBA Finals, with Curry actually often ending up on top in the iconic rivalry. Curry’s Warriors bested James’ Cleveland Cavaliers in three of the four championship series that they matched up in.

Even so, James still has had the better overall career. Both players are still going strong, but James has double the All-Star nods (20) and MVP trophies (four). He even has the same number of championship rings (four). Not to mention, when he did beat Curry in the Finals, it was in iconic fashion. James’ 3-1 comeback will go down as arguably the greatest NBA Finals series ever.

Curry is best known for shooting, but James happens to be in the top 10 in three-point makes in his own right. He has Curry bested in almost every other facet of basketball, too. Thanks to his incredible versatility, James is also a great rebounder and an elite playmaker, and he has unmatched basketball IQ, too.

3. Kareem Abdul-Jabbar


ChatGPT reasoning:

  • Why: Kareem is the NBA’s all-time leading scorer, with six MVPs (most in NBA history) and six championships. His unstoppable skyhook, elite defense, and longevity make him one of the greatest players ever.
  • Better Than Curry: Kareem’s dominance as a center and his impact on both ends of the floor, combined with his long career, places him ahead of Curry.

Did ChatGPT get it right? Stephen Curry certainly revolutionized the point guard position and the game of basketball as a whole, but back in Kareem Abdul-Jabbar’s day, centers were the focal points of NBA organizations. The big man formerly known as Lew Alcindor was the best of the best, too.

After completing arguably the best college basketball career ever, Abdul-Jabbar went on to win the most MVP trophies (six) in league history. He won six championships as well, in addition to earning 19 All-Star appearances. Abdul-Jabbar’s skyhook was arguably the most unguardable signature shot in league history, and he was a force on the defensive end, too. Long before deep Curry heaves filled basketball highlight reels, Abdul-Jabbar was the NBA’s biggest source of entertainment, and he deserves to be credited as a top three or four NBA player ever. In fact, a select few even call him the G.O.A.T.

Note: ChatGPT was wrong about Abdul-Jabbar being the NBA’s all-time leading scorer. While he owned that record for decades, LeBron James has since surpassed him.

4. Magic Johnson


ChatGPT reasoning:

  • Why: Magic is widely regarded as the greatest point guard in NBA history. His ability to run an offense, vision, and leadership helped the Lakers win five championships in the 1980s. He revolutionized the point guard position with his 6’9″ frame and playmaking skills.
  • Better Than Curry: Magic’s playmaking and leadership were more impactful in terms of team success, especially in the 1980s. Curry’s shooting revolutionized the game, but Magic’s leadership and ability to orchestrate a dynasty give him the edge.

Did ChatGPT get it right? Debates about who the best point guard ever is come up even more often than debates about where Stephen Curry ranks all-time regardless of position. Of all five NBA positions, the Stephen Curry vs. Magic Johnson debate at point guard is the most controversial one.

Luckily, you really can’t go wrong with either. Johnson is arguably the best playmaker/passer ever, whereas Curry is clearly the best shooter we’ve ever seen. They both dominated their respective eras with a flashy, never-before-seen style of play, and they won a lot of championships in the process.

Curry gets the slight edge over Johnson in our opinion, but that might even be considered a hot take. The Showtime Lakers were vital for popularizing basketball at the professional level, but Curry really changed the way the sport is played. He hasn’t shown any signs of slowing down, either, whereas Johnson’s career was prematurely ended because of an HIV diagnosis.

5. Bill Russell


ChatGPT reasoning:

  • Why: Bill Russell’s 11 championships in 13 seasons is a feat that has never been duplicated, and his defensive prowess, shot-blocking, and rebounding were key to the Boston Celtics’ dominance in the 1950s and 60s.
  • Better Than Curry: Russell’s impact on defense and winning championships is unparalleled. While Curry is more dynamic offensively, Russell’s defensive legacy and leadership in winning titles place him above Curry in this comparison.

Did ChatGPT get it right? Comparing old school players to modern day superstars is never easy. Bill Russell’s resume is much greater than Stephen Curry’s, as the Boston Celtics legend has 11 championships and five MVPs to his name. However, he played during a time when there were only eight other teams in the league.

Curry, meanwhile, was going toe-to-toe with seven-footers and freakish athletes like LeBron James, and he usually got the best of them despite standing 6-foot-2 and weighing 185 pounds. Russell’s impact on the league and dominance during his time cannot be overstated. However, no one during his era would dream of doing the things that Curry can do on a basketball court. From dribble combinations that leave defenders on the floor to shots from nearly half-court that drop fans’ jaws, Curry’s skill level is so far beyond what Russell’s was that we have to give the point guard the edge and disagree with ChatGPT here.

6. Wilt Chamberlain


ChatGPT reasoning:

  • Why: Wilt Chamberlain is one of the most dominant forces the game has ever seen. He holds the record for the most points scored in a single game (100) and led the league in scoring and rebounding multiple seasons. He won two championships and was a four-time MVP.
  • Better Than Curry: Wilt’s physical dominance and statistical achievements dwarf Curry’s individual records. Though Curry revolutionized the game with his shooting, Chamberlain’s raw impact and efficiency on both ends place him ahead.

Did ChatGPT get it right? Like Bill Russell, Wilt Chamberlain played in the NBA’s early days, so a lot of the reasons we have Curry ahead of Russell are valid for having the Davidson product above Chamberlain, too. Chamberlain didn’t have the championship success of Russell (or Curry), but no one put up numbers like he did.


Even so, Curry’s statistics are more than impressive in his own right. His 3,878 three-point makes are the most ever, and his 91% efficiency from the charity stripe is the best ever, too. All in all, Curry has at least 15 different shooting records to call his own.

7. Larry Bird


ChatGPT reasoning:

  • Why: Larry Bird was one of the most complete players of all time. A three-time champion and two-time Finals MVP, Bird combined elite shooting with incredible basketball IQ, passing, and defense. His rivalry with Magic Johnson helped elevate the NBA to new heights in the 1980s.
  • Better Than Curry: Bird’s all-around game, versatility, and mental toughness give him an edge over Curry. Bird was a more complete player in terms of scoring, rebounding, and playmaking, which gave him a broader impact on games.

Did ChatGPT get it right? Before there was Curry, Larry Bird was the uber-efficient shooter with unmatched touch on his jumper. Because of that, these two should rank close together in all-time rankings. Bird was the first member of the 50-40-90 club, and Curry followed suit in 2016.

They are also both multi-time winners of the three-point contest. Additionally, despite both being known for their shooting, neither gets nearly the credit he deserves as a passer or defender. Curry has led the league in steals, and Bird was an All-Defensive Second Team selection.

Curry has four championships to Bird’s three, but Bird has a one-trophy lead with three MVPs. The lifelong Warrior has only made the All-NBA First-Team four times in comparison to Bird’s nine, though, which means the small forward might have an edge on Curry in all-time discussions.

8. Shaquille O’Neal


ChatGPT reasoning:

  • Why: Shaq was arguably the most dominant physical force the NBA has ever seen. His combination of size, skill, and power was unparalleled, leading to four NBA championships and one of the most dominant stretches in NBA history during the early 2000s.
  • Better Than Curry: Shaq’s dominance in the paint and his ability to take over games on both ends of the floor gives him a greater all-around impact compared to Curry’s outside shooting.

Did ChatGPT get it right? When you think of what a basketball player should look like, a physically imposing player of Shaquille O’Neal’s stature might come to mind. That is what makes Stephen Curry so special, though, as he dominated despite being below average size for an NBA player.

Both O’Neal and Curry won four championships, but Curry won the MVP award twice compared to O’Neal’s one. Additionally, a lot of people respect that Curry has never left the team that drafted him, whereas O’Neal suited up for six different teams during his career.

Then again, O’Neal, like Bird, has Curry beat in a big way with First-Team All-NBA nods, as The Diesel was elected to that team eight times. Again, this battle is close, so we can see why ChatGPT likes O’Neal more than Curry. Curry certainly has a case over O’Neal, though.

9. Tim Duncan


ChatGPT reasoning:

  • Why: Tim Duncan was the cornerstone of the San Antonio Spurs’ dynasty, winning five championships and two MVPs. His ability to dominate both offensively and defensively in a quiet, efficient manner made him one of the most complete players in NBA history.
  • Better Than Curry: Duncan’s consistency, leadership, and defensive presence make him a more well-rounded player than Curry, whose game is more limited to perimeter shooting.

Did ChatGPT get it right? Stephen Curry is the ultimate flashy player, while Tim Duncan was known as The Big Fundamental. While their styles were completely different, they both found tons of success and led arguably the two best dynasties of the 21st century.

Duncan’s laid-back attitude and style of play probably led to him being underrated. After all, he certainly deserves to have at least one Defensive Player of the Year to his name. Curry did rack up impressive accolades, though, none more impressive than when he became the only unanimous MVP ever. Because of that, we think ChatGPT was wrong in ranking Duncan above Curry.

10. Kobe Bryant


ChatGPT reasoning:

  • Why: Kobe Bryant’s skillset and mentality were unmatched. He won five championships, two Finals MVPs, and was a 15-time All-NBA player. Bryant’s scoring ability, footwork, and competitive spirit made him one of the greatest of all time.
  • Better Than Curry: While Curry’s shooting is unparalleled, Kobe’s overall offensive game (including post moves, mid-range shooting, and ability to create shots) and his mentality put him ahead. Kobe also had stronger defensive credentials.

Did ChatGPT get it right? We did another article asking ChatGPT about 10 players who are better than Kobe Bryant, and Stephen Curry did not make that list. Bryant is here on Curry’s list, though. Bryant is the closest thing we’ve had to Michael Jordan, and he deserves to be discussed amongst the very best players in NBA history.

Kobe did whatever it took to win, as his Mamba Mentality made obvious. Bryant wasn’t the three-point shooter that Curry is, but he certainly expanded on that part of his game more than the player he modeled his game after (Jordan). Like Jordan, though, he stood above the rest of the NBA in terms of mid-range shooting, above-the-rim finishing, and point-of-attack defense.

Curry and Bryant find themselves in a weird place when discussing championships. Bryant was the second option to Shaquille O’Neal for three of his five championships, while Curry was arguably the number two to Kevin Durant for two of his four championships. Both players were clearly the top dog the two other times they won the NBA Finals. They should be close in all-time rankings, but we are okay with Bryant having a slight edge over Curry.


Why Google’s search engine trial is about AI : NPR

An illustration photograph taken on Feb. 20, 2025 shows Grok, DeepSeek and ChatGPT apps displayed on a phone screen. Michael M. Santiago/Getty Images

When the U.S. Department of Justice originally brought — and then won — its case against Google, arguing that the tech behemoth monopolized the search engine market, the focus was on, well … search.

Back then, in 2020, the government’s antitrust complaint against Google had few mentions of artificial intelligence or AI chatbots. But nearly five years later, as the remedy phase of the trial enters its second week of testimony, the focus has shifted to AI, underscoring just how quickly this emerging technology has expanded.

In the past few days, before a federal judge who will assess penalties against Google, the DOJ has argued that the company could use its artificial intelligence products to strengthen its monopoly in online search — and to use the data from its powerful search index to become the dominant player in AI.

In his opening statements last Monday, David Dahlquist, the acting deputy director of the DOJ’s antitrust civil litigation division, argued that the court should consider remedies that could nip a potential Google AI monopoly in the bud. “This court’s remedy should be forward-looking and not ignore what is on the horizon,” he said.

Dahlquist argued that Google has created a system in which its control of search helps improve its AI products, sending more users back to Google search — creating a cycle that maintains the tech company’s dominance and blocks competitors out of both marketplaces.

The integration of search and Gemini, the company’s AI chatbot — which the DOJ sees as powerful fuel for this cycle — is a big focus of the government’s proposed remedies. The DOJ is arguing that to be most effective, those remedies must address all ways users access Google search, so any penalties approved by the court that don’t include Gemini (or other Google AI products now or in the future) would undermine their broader efforts.

Department of Justice lawyer David Dahlquist leaves the Washington, D.C. federal courthouse on Sept. 20, 2023 during the original trial phase of the antitrust case against Google. Jose Luis Magana/AP

AI and search are connected like this: Search engine indices are essentially giant databases of pages and information on the web. Google has its own such index, which contains hundreds of billions of webpages and is over 100,000,000 gigabytes, according to court documents. This is the data Google’s search engine scans when responding to a user’s query.
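To make the idea concrete, here is a tiny, purely illustrative Python sketch of what an index is at its core — a mapping from terms to the pages that contain them. This is an assumption-laden toy, not how Google's index is actually built.

# Illustrative toy only: a search index maps terms to the pages containing them.
from collections import defaultdict

pages = {
    "page1": "google antitrust trial enters remedy phase",
    "page2": "ai chatbots trained on large web datasets",
}

index = defaultdict(set)
for url, text in pages.items():
    for term in text.split():
        index[term].add(url)

def search(query: str) -> set:
    """Return pages containing every term in the query."""
    terms = query.split()
    results = set(index[terms[0]]) if terms else set()
    for term in terms[1:]:
        results &= index[term]
    return results

print(search("remedy phase"))  # {'page1'}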

AI developers use these kinds of databases to build and train the models used to power chatbots. In court, attorneys for the DOJ have argued that Google’s Gemini pulls information from the company’s search index, including citing search links and results, extending what they say is a self-serving cycle. They argue that Google’s ability to monopolize the search market gives it user data, at a huge scale — an advantage over other AI developers.

The Justice Department argues Google’s monopoly over search could have a direct effect on the development of generative AI, a type of artificial intelligence that uses existing data to create new content like text, videos or photos, based on a user’s prompts or questions. Last week, the government called executives from several major AI companies, like OpenAI and Perplexity, in an attempt to argue that Google’s stranglehold on search is preventing some of those companies from truly growing.

The government argues that to level the playing field, Google should be forced to open its search data — like users’ search queries, clicks and results — and license it to other competitors at a cost.

This is on top of demands related to Google’s search engine business, most notably that it should be forced to sell off its Chrome browser.

Google flatly rejects the argument that it could monopolize the field of generative AI, saying competition in the AI race is healthy. In a recent blog post on Google’s website, Lee-Anne Mulholland, the company’s vice president of regulatory affairs, wrote that since the federal judge first ruled against Google over a year ago, “AI has already rapidly reshaped the industry, with new entrants and new ways of finding information, making it even more competitive.”

In court, Google’s lawyers have argued that there are a host of AI companies with chatbots — some of which are outperforming Gemini. OpenAI has ChatGPT, Meta has MetaAI and Perplexity has Perplexity AI.

“There is no shortage of competition in that market, and ChatGPT and Meta are way ahead of everybody in terms of the distribution and usage at this point,” said John E. Schmidtlein, a lawyer for Google, during his opening statement. “But don’t take my word for it. Look at the data. Hundreds and hundreds of millions of downloads by ChatGPT.”

Competing in a growing AI field

It should be no surprise that AI is coming up so much at this point in the trial, said Alissa Cooper, the executive director of the Knight-Georgetown Institute, a nonpartisan tech research and policy center at Georgetown University focusing on AI, disinformation and data privacy.

“If you look at search as a product today, you can’t really think about search without thinking about AI,” she said. “I think the case is a really great opportunity to try to … analyze how Google has benefited specifically from the monopoly that it has in search, and ensure that the behavior that led to that can’t be used to gain an unfair advantage in these other markets which are more nascent.”

Having access to Google’s data, she said, “would provide them with the ability to build better chatbots, build better search engines, and potentially build other products that we haven’t even thought of.”

To make that point, the DOJ called Nick Turley, OpenAI’s head of product for ChatGPT, to the stand last Tuesday. During a long day of testimony, Turley detailed how without access to Google’s search index and data, engineers for the growing company tried to build their own.

ChatGPT, a large language model that can generate human-like responses, engage in conversations and perform tasks like explaining a tough-to-understand math lesson, was never intended to be a product for OpenAI, Turley said. But once it launched and went viral, the company found that people were using it for a host of needs.

Though popular, ChatGPT had its drawbacks, like the bot’s limited “knowledge,” Turley said. Early on, ChatGPT was not connected to the internet and could only use information that it had been fed up to a certain point in its training. For example, Turley said, if a user asked “Who is the president?” the program would give a 2022 answer — from when its “knowledge” effectively stopped.

OpenAI couldn’t build their own index fast enough to address their problems; they found that process incredibly expensive, time consuming and potentially years from coming to fruition, Turley said.

So instead, they sought a partnership with a third party search provider. At one point, OpenAI tried to make a deal with Google to gain access to their search, but Google declined, seeing OpenAI as a direct competitor, Turley testified.

But Google says companies like OpenAI are doing just fine without gaining access to the tech giant’s own technology — which it spent decades developing. These companies just want “handouts,” said Schmidtlein.

On the third day of the remedy trial, internal Google documents shared in court by the company’s lawyers compared how many people are using Gemini versus its competitors. According to those documents, ChatGPT and MetaAI are the two leaders, with Gemini coming in third.

They showed that this March, Gemini saw 35 million active daily users and 350 million monthly active users worldwide. That was up from 9 million daily active users in October 2024. But according to those documents, Gemini was still lagging behind ChatGPT, which reached 160 million daily users and around 600 million active users in March.

These numbers show that competitors have no need to use Google’s search data, valuable intellectual property that the tech giant spent decades building and maintaining, the company argues.

“The notion that somehow ChatGPT can’t get distribution is absurd,” Schmidtlein said in court last week. “They have more distribution than anyone.”

Google’s exclusive deals 

In his ruling last year, U.S. District Judge Amit Mehta said Google’s exclusive agreements with device makers, like Apple and Samsung, to make its search engine the default on those companies’ phones helped maintain its monopoly. It remains a core issue for this remedy trial.

Now, the DOJ is arguing that Google’s deals with device manufacturers are also directly affecting AI companies and AI tech.

In court, the DOJ argued that Google has replicated this kind of distribution deal by agreeing to pay Samsung what Dahlquist called a monthly “enormous sum” for Gemini to be installed on smartphones and other devices.

Last Wednesday, the DOJ also called Dmitry Shevelenko, Perplexity’s chief business officer, to testify that Google has effectively cut his company out from making deals with manufacturers and mobile carriers.

Perplexity AI is not preloaded on any mobile devices in the U.S., despite many efforts to get phone companies to establish Perplexity as a default or exclusive app on devices, Shevelenko said. He compared Google’s control in that space to that of a “mob boss.”

But Google’s attorney, Christopher Yeager, noted in questioning Shevelenko that Perplexity has reached a valuation of over $9 billion — insinuating the company is doing just fine in the marketplace.

Despite testifying in court (for which he was subpoenaed, Shevelenko noted), he and other leaders at Perplexity are against the breakup of Google. In a statement on the company’s website, the Perplexity team wrote that neither forcing Google to sell off Chrome nor forcing it to license search data to competitors is the best solution. “Neither of these address the root issue: consumers deserve choice,” they wrote.

Google and Alphabet CEO Sundar Pichai departs federal court after testifying in October 2023 in Washington, DC. Pichai testified to defend his company in the original antitrust trial and is expected to testify again during the remedy phase of the legal proceedings. Drew Angerer/Getty Images

What to expect next

This week the trial continues, with the DOJ calling its final witnesses this morning to testify about the feasibility of a Chrome divestiture and how the government’s proposed remedies would help rivals compete. On Tuesday afternoon, Google will begin presenting its case, which is expected to feature the testimony of CEO Sundar Pichai, although the date of his appearance has not been specified.

Closing arguments are expected at the end of May, and then Mehta will make his ruling. Google says once this phase is settled the company will appeal Mehta’s ruling in the underlying case.

Whatever Mehta decides in this remedy phase, Cooper thinks it will have effects beyond just the business of search engines. No matter what it is, she said, “it will be having some kind of impact on AI.”

Google is a financial supporter of NPR.


Meta unleashes Llama API running 18x faster than OpenAI: Cerebras partnership delivers 2,600 tokens per second



Meta announced today a partnership with Cerebras Systems to power its new Llama API, offering developers access to inference speeds up to 18 times faster than traditional GPU-based solutions.

The announcement, made at Meta’s inaugural LlamaCon developer conference in Menlo Park, positions the company to compete directly with OpenAI, Anthropic and Google in the rapidly growing AI inference services market, where developers buy tokens by the billions to power their applications.

“Meta has selected Cerebras to collaborate to deliver the ultra-fast inference they need to serve developers through their new Llama API,” said Julie Shin Choi, chief marketing officer at Cerebras, during a press briefing. “At Cerebras we are very, very excited to announce our first CSP hyperscaler partnership to deliver ultra-fast inference to all developers.”

The partnership marks Meta’s formal entry into the business of selling AI computation, transforming its popular open-source Llama models into a commercial service. While Meta’s Llama models have racked up more than a billion downloads, until now the company had not offered first-party cloud infrastructure for developers to build applications with them.

“This is very exciting, even without talking about Cerebras specifically,” said James Wang, a senior executive at Cerebras. “OpenAI, Anthropic, Google: they have built an entirely new AI business from scratch, which is the AI inference business. Developers building AI apps will buy tokens by the millions, sometimes by the billions. And these are like the new compute instructions that people need to build AI applications.”

A benchmark chart shows Cerebras processing Llama 4 at 2,648 tokens per second, dramatically outpacing competitors SambaNova (747), Groq (600) and GPU-based services from Google and others, explaining Meta’s hardware choice for its new API. (Credit: Cerebras)

Breaking the Speed Barrier: How Cerebras Supercharges Llama Models

What sets Meta’s offering apart is the dramatic speed boost provided by Cerebras’ specialized AI chips. The Cerebras system delivers more than 2,600 tokens per second for Llama 4 Scout, compared with roughly 130 tokens per second for ChatGPT and around 25 tokens per second for DeepSeek, according to benchmarks from Artificial Analysis.

“If you just compare API to API, Gemini and GPT, they’re all great models, but they all run at GPU speeds, which is roughly 100 tokens per second,” Wang explained. “And 100 tokens per second is fine for chat, but it’s very slow for reasoning. It’s very slow for agents. And people are struggling with that today.”

This speed advantage enables entirely new categories of applications that were previously impractical, including real-time agents, conversational low-latency voice systems, interactive code generation and instant multi-step reasoning, all of which require chaining multiple large language model calls that can now be completed in seconds rather than minutes.
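To make the difference concrete, here is a rough back-of-the-envelope sketch using the throughput figures quoted above. The 10-step, 500-tokens-per-step agent workflow is a hypothetical assumption for illustration, not a benchmark published by Meta or Cerebras.

# Back-of-the-envelope sketch: time to run a hypothetical 10-step agent chain,
# each step generating ~500 output tokens, at the throughputs quoted above.
STEPS = 10
TOKENS_PER_STEP = 500

throughputs = {
    "GPU-based service (~100 tok/s)": 100,
    "ChatGPT (~130 tok/s)": 130,
    "Cerebras Llama 4 Scout (~2,600 tok/s)": 2600,
}

for name, tok_per_s in throughputs.items():
    seconds = STEPS * TOKENS_PER_STEP / tok_per_s
    print(f"{name}: {seconds:.0f} s for the full chain")

# GPU-based service (~100 tok/s): 50 s for the full chain
# ChatGPT (~130 tok/s): 38 s for the full chain
# Cerebras Llama 4 Scout (~2,600 tok/s): 2 s for the full chain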

The Llama API represents a significant shift in Meta’s AI strategy, transitioning the company from a model provider into a full-service AI infrastructure company. By offering an API service, Meta is creating a revenue stream from its AI investments while maintaining its commitment to open models.

“Meta is now in the business of selling tokens, and it’s great for the American AI ecosystem,” Wang noted during the press conference. “They bring a lot to the table.”

The API will offer tools for fine-tuning and evaluation, starting with the Llama 3.3 8B model, letting developers generate data, train and test the quality of their custom models. Meta emphasizes that it will not use customer data to train its own models, and models built with the Llama API can be transferred to other hosts, a clear differentiation from some competitors’ more closed approaches.

Cerebras will power Meta’s new service through its network of data centers located across North America, including facilities in Dallas, Oklahoma, Minnesota, Montreal and California.

“All of our data centers that serve inference are in North America at this time,” Choi explained. “We will serve Meta with the full capacity of Cerebras. The workload will be balanced across all of these different data centers.”

The commercial arrangement follows what Choi described as “the classic compute provider to a hyperscaler” model, similar to the way Nvidia provides hardware to major cloud providers. “They are reserving blocks of our compute so that they can serve their developer population,” she said.

Beyond Cerebras, Meta has also announced a partnership with Groq to provide fast inference options, giving developers multiple high-performance alternatives beyond traditional GPU-based inference.

Meta’s entry into the inference API market with superior performance metrics could disrupt the established order dominated by OpenAI, Google and Anthropic. By combining the popularity of its open-source models with dramatically faster inference capabilities, Meta is positioning itself as a formidable competitor in the commercial AI space.

“Meta is in a unique position with 3 billion users, hyperscale data centers and a huge developer ecosystem,” according to Cerebras’ presentation materials. The integration of Cerebras technology “helps Meta leapfrog OpenAI and Google in performance by approximately 20x.”

For Cerebras, the partnership represents a major milestone and a validation of its specialized AI hardware approach. “We have been building this wafer-scale engine for years, and we always knew the technology was first-rate, but ultimately it has to end up as part of someone else’s hyperscale cloud. That was the ultimate target from a commercial strategy perspective, and we have finally reached that milestone,” Wang said.

The Llama API is currently available as a limited preview, with Meta planning a broader rollout in the coming weeks and months. Developers interested in accessing ultra-fast Llama 4 inference can request early access by selecting Cerebras from the model options within the Llama API.

“If you imagine a developer who knows nothing about Cerebras, because we’re a relatively small company, they can just click two buttons on Meta’s standard SDK, generate an API key, select the Cerebras flag, and then, all of a sudden, their tokens are being processed on a giant wafer-scale engine,” Wang explained. “Having us sit on the back end of Meta’s whole developer ecosystem is tremendous for us.”
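The article does not show the actual SDK calls, so the following is only a hypothetical sketch of what that generate-a-key-and-pick-Cerebras flow might look like. The endpoint URL, model name and “provider” field are illustrative assumptions, not Meta’s documented Llama API.

# Hypothetical sketch only: endpoint, model name and "provider" field are
# illustrative assumptions, not Meta's documented Llama API.
import os
import requests

API_KEY = os.environ["LLAMA_API_KEY"]  # key generated from the developer console

response = requests.post(
    "https://api.llama.example/v1/chat/completions",  # placeholder URL
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "llama-4-scout",   # assumed model identifier
        "provider": "cerebras",     # the "Cerebras flag" Wang describes
        "messages": [{"role": "user", "content": "Summarize LlamaCon in one line."}],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())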

Meta’s choice of specialized silicon signals something deeper: in the next phase of AI, it is not just what your models know, it is how fast they can think it. In that future, speed isn’t just a feature; it’s the whole point.


Everything you need to know


The rapid pace of generative AI innovation has vendors pushing out new large language models (LLMs) seemingly without pause.

Among these prominent LLM vendors is Google. Its Gemini model family is the successor to the Pathways Language Model (PaLM). Google Gemini debuted in December 2023 with the 1.0 release, and Gemini 1.5 Pro followed in February 2024. Gemini 2.0, announced in December 2024, became available in February 2025. On March 25, 2025, Google announced Gemini 2.5 Pro Experimental, continuing the rapid pace of innovation.

The Google Gemini 2.5 Pro model entered the LLM landscape as the market shifts toward reasoning models, such as DeepSeek R1 and OpenAI’s o3, as well as hybrid reasoning models, including Anthropic’s Claude 3.7 Sonnet.

What is Gemini 2.5 Pro?

Gemini 2.5 Pro is an LLM developed by Google DeepMind. When it debuted in March 2025, it was Google’s most advanced AI model, surpassing the capabilities and performance of earlier Gemini iterations.

As with Gemini 2.0, Gemini 2.5 Pro is a multimodal LLM, meaning it is not just for text. It processes and analyzes text, images, audio and video. The model also has strong coding capabilities, surpassing earlier Gemini models.

Gemini 2.5 Pro is the first model in the Gemini series purpose-built as a “thinking model,” with advanced reasoning functionality as a core capability. In some respects, Gemini 2.5 Pro builds on a version of Gemini 2.0, Flash Thinking, which provided limited reasoning capabilities. Advanced models such as Gemini 2.5 Pro spend more time reasoning, or “thinking,” through the steps required to execute a prompt, going beyond basic chain-of-thought prompting to enable more nuanced output, often with greater depth and accuracy.

Google applied advanced techniques, including reinforcement learning and improved post-training, to boost Gemini 2.5 Pro’s performance over earlier models. The model launched with a 1 million-token context window, with plans to expand to 2 million tokens.

What’s new in Gemini 2.5 Pro?

Gemini 2.5 Pro’s new capabilities and improved functionality elevate the Google Gemini LLM family.

Key improvements include the following:

  • Improved reasoning. The headline feature of Gemini 2.5 Pro is its improved reasoning capability. According to Google, Gemini 2.5 Pro outperforms OpenAI o3, Anthropic’s Claude 3.7 Sonnet and DeepSeek R1 on reasoning and knowledge benchmarks, including Humanity’s Last Exam.
  • Advanced coding capabilities. According to Google, Gemini 2.5 Pro also surpasses earlier iterations in coding capability. Like its predecessors, the model generates and debugs code and builds visually compelling applications. It supports code generation and execution, letting it test and refine its solutions. Gemini 2.5 Pro scored 63.8% on SWE-bench Verified, an industry standard for agentic coding evaluations, using a custom agent setup, a result Google benchmarked against OpenAI’s GPT-4.5 and Anthropic’s Claude 3.7 Sonnet.
  • Advanced math and science skills. Google also claims improved math and science capabilities. On the AIME 2025 math benchmark, Gemini 2.5 Pro scored 86.7%; on the GPQA Diamond science benchmark, it achieved 84%. Both scores topped its rivals.
  • Native multimodality. Building on the family’s strengths, Gemini 2.5 Pro retains native multimodal capabilities, letting it understand and work with text, audio, images, video and entire code repositories.
  • Real-time processing. Despite the added capabilities, the model maintains reasonable latency, making it suitable for real-time applications and interactive use cases.


How does Gemini 2.5 Pro improve Google?

The Gemini 2.5 Pro model improves Google’s services, and its standing among peers, in the following ways:

Competitive leadership

The highly competitive LLM market features major global rivals: Meta’s Llama family, OpenAI’s GPT-4o and o3, Anthropic’s Claude and xAI’s Grok, plus China’s DeepSeek, all competing for market share. At launch, Gemini 2.5 Pro immediately shot to the top of the LMArena leaderboard for AI benchmarking, strengthening Google’s standing as a leading LLM developer for organizations to consider.

Better results in Google applications

At launch, Gemini 2.5 Pro was not yet integrated into Google’s product suite, including Google Search and the Google Workspace apps. However, successful integration promises to improve multiple services. For Google Search, the improved reasoning capabilities would provide more nuanced and accurate answers to complex queries. In Google Docs and other Workspace apps, the model’s improved understanding of context enables more sophisticated document analysis and content generation.

Developer focus

The model’s advanced code generation and execution abilities also strengthen Google’s position in developer tools and services, improving function calling and workflow automation across Google’s cloud services.

Uses for Gemini 2.5 Pro

Gemini 2.5 Pro supports a variety of tasks, including the following:

  • Question answering. Gemini is a resource for basic question-and-answer knowledge interactions, drawing on Google’s training data.
  • Multimodal content summarization. As a multimodal model, Gemini 2.5 Pro summarizes long-form text, audio or video content.
  • Multimodal question answering. The model combines information from text, images, audio and video to answer questions that span multiple modalities.
  • Text content generation. Like its predecessors, Gemini 2.5 Pro handles text generation.
  • Complex problem-solving. With its advanced reasoning capabilities, Gemini 2.5 Pro manages tasks that require logical reasoning, such as math, science and structured analysis.
  • Deep research. The model’s extended context window and reasoning capabilities make it well suited to analyzing long documents, synthesizing information from multiple sources and conducting in-depth research.
  • Advanced coding tasks. Gemini 2.5 Pro generates and debugs code, supporting application development tasks.
  • Agentic AI. The model’s advanced reasoning, function calling and tool use support its value as part of an agentic AI workflow.

Which platforms support Gemini 2.5 Pro integration?

Following in the footsteps of the rest of the Gemini family, Gemini 2.5 Pro is set for integration into a number of Google services, including the following:

  • Google AI Studio. At launch, the new model is available in Google AI Studio, a web-based tool that lets developers test models directly in the browser.
  • Gemini app. Subscribers to the Gemini Advanced service can access the model through the drop-down model selection menu in the Gemini app on desktop and mobile platforms.
  • Vertex AI. Google plans to make Gemini 2.5 Pro available on its Vertex AI platform, letting enterprises use the model for larger-scale deployments.
  • Gemini API. Although not available at launch, all earlier Gemini versions have been accessible through an application programming interface that lets developers integrate the model directly into their applications; a minimal sketch of such a call appears below.
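For illustration only, here is a minimal sketch of what calling a Gemini model through the API typically looks like with the google-generativeai Python package. The Gemini 2.5 Pro model identifier shown is an assumption and may differ once the API access described above becomes available.

# Minimal sketch using the google-generativeai Python package.
# The Gemini 2.5 Pro model identifier below is an assumed placeholder;
# check Google's published model list for the exact name.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")  # assumed identifier
response = model.generate_content(
    "Summarize the key differences between Gemini 2.0 and Gemini 2.5 Pro."
)
print(response.text)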

Sean Michael Kerner is an IT consultant, technology enthusiast and tinkerer. He has pulled Token Ring, configured NetWare and been known to compile his own Linux kernel. He consults with industry and media organizations on technology issues.
