ChatGPT predicts all 32 first-round picks in 2025
Published 2 weeks ago
Mock drafts can be like steak. Many people love consuming them, but each one is made a little differently.
And just as some people like their steaks with A1 sauce, others like to see what happens when AI completes a mock draft.
USA TODAY Sports consulted OpenAI’s AI chatbot, ChatGPT, for its take on a first-round mock draft ahead of the 2025 NFL draft. Artificial intelligence made its selection for each pick and added some of its own justifications for them as well.
Though each pick was one that ChatGPT eventually landed on, a human writer made sure that each selection was a (relatively) realistic pick for each of the 32 teams.
Here’s how the first round of the 2025 NFL draft could go, according to ChatGPT:
2025 NFL mock draft: ChatGPT’s first-round picks
1. Tennessee Titans: Cam Ward, QB, Miami (FL)
With the first overall pick in the 2025 NFL draft, ChatGPT did not stray from the status quo. The AI chatbot pointed to Tennessee’s major need for a quarterback with Will Levis not proving he’s a long-term answer and “a new regime possibly wanting their own guy.”
The Titans hired Mike Borgonzi as their new general manager in January, and head coach Brian Callahan will be entering his second season in charge in Nashville.
2. Cleveland Browns: Travis Hunter, CB/WR, Colorado
Though quarterback is a draft need for Cleveland, ChatGPT ultimately did not go for the No. 2 quarterback in the draft – Colorado’s Shedeur Sanders – with the No. 2 overall pick. Instead, it selected Sanders’ teammate, Hunter, pointing to his versatility to play both wide receiver and cornerback at an “elite level—basically two first-rounders in one,” it wrote.
3. New York Giants: Abdul Carter, Edge, Penn State
No quarterback for “Big Blue” if ChatGPT has anything to say about it. The chatbot had also considered quarterback Shedeur Sanders and defensive tackle Mason Graham with the third pick, but it ultimately couldn’t pass up the opportunity to pair Carter with Kayvon Thibodeaux and Brian Burns in the Giants’ pass rush.
“That’s a nightmare for opposing QBs … If you can’t land your QB (like Ward), then ruin other people’s QBs instead.”
The OpenAI product also suggested Sanders’ NFL readiness was too questionable for New York to take that shot: “The Giants might not want to swing and miss again at QB this high.”
4. New England Patriots: Will Campbell, OT, LSU
ChatGPT is not concerned with Campbell’s arm size. It does like his three years of starting tackle experience in the SEC and the idea of getting quarterback Drake Maye better protection up front.
The artificial intelligence also considered defensive tackle Mason Graham to build the trenches on the other side of the ball and receiver Emeka Egbuka. However, the apparent need for help on the O-line was enough to make Campbell the No. 4 pick.
5. Jacksonville Jaguars: Mason Graham, DT, Michigan
“He’s the best interior defensive lineman in this class—quick off the snap, disruptive against both the run and pass,” ChatGPT wrote. “Jacksonville’s run defense and interior pass rush have been soft for years. Graham can anchor that front from Day 1.”
It’s hard to disagree with any of that analysis. The Jaguars allowed the second-most total offensive yards in 2024 and were second-worst in expected points added (EPA) per play, only behind the Carolina Panthers. They ranked 32nd in pass-rush win rate and 27th in run-stop win rate, according to ESPN.
6. Las Vegas Raiders: Armand Membou, OT, Missouri
Instead of bringing in some wide receiver help with a player like Arizona’s Tetairoa McMillan, ChatGPT opted to get a player with “trench cornerstone” upside. The AI chatbot wrote about Membou’s “elite traits” including “great feet, powerful hands and enough athleticism to hold his own against speed rushers off the edge.”
Membou would likely play right tackle or kick inside for Las Vegas in this scenario given Kolton Miller’s excellent play at left tackle, which ChatGPT acknowledged.
7. New York Jets: Tetairoa McMillan, WR, Arizona
After considering the Raiders as a landing spot for McMillan, ChatGPT ultimately had the Jets draft the big-bodied receiver with the No. 7 pick. The AI pointed to McMillan’s 6-foot-5 frame, body control and ball-tracking ability as parts of what make the Arizona product such an exciting prospect.
ChatGPT also liked pairing a big, downfield threat like McMillan with a veteran presence in Garrett Wilson. “Garrett Wilson is your route-running stud. McMillan gives you a massive red-zone threat and vertical presence,” it wrote.
8. Carolina Panthers: Emeka Egbuka, WR, Ohio State
ChatGPT continues to give the offense a lot of love in its mock draft. This time, it gives the Panthers another weapon for quarterback Bryce Young to throw to.
The AI wrote about Egbuka’s skills as a route-runner and versatility to play in the slot and outside – though mainly in the slot – as reasons for Carolina to bring him in. Given Adam Thielen’s age, adding Egbuka to pair with second-year receivers Xavier Legette and Jalen Coker is a good way for the Panthers to move forward in building their young receiver corps.
9. New Orleans Saints: Shedeur Sanders, QB, Colorado
Sanders’ brief slide ends within the top 10 as New Orleans takes another swing at bringing in its quarterback of the future. ChatGPT thinks the Saints are getting “tremendous value” by bringing in “a high-upside QB prospect with proven poise and experience” with the ninth overall pick.
The chatbot also wrote that the Saints have the option to let Sanders sit and develop behind incumbent veteran Derek Carr to begin the year if they so choose. “You’ve got options.”
10. Chicago Bears: Kelvin Banks Jr., OT, Texas
Darnell Wright is locked in on the right side for Chicago, but the Bears could still use a longer-term answer at left tackle that isn’t Braxton Jones, who’s missed 13 games across the last two seasons and is a free agent in 2026.
That’s why ChatGPT is bringing in another player with franchise tackle potential across from Wright and in front of franchise-quarterback hopeful Caleb Williams. And as the AI chatbot wrote, “Offensive tackle is always worth a top-10 pick, especially when you have a young QB to protect.”
11. San Francisco 49ers: Luther Burden III, WR, Missouri
A third wide receiver comes off the board within the first dozen picks. While there is a need for wide receiver help with Brandon Aiyuk recovering from an ACL tear and Deebo Samuel traded to the Commanders this offseason, it is not the team’s most pressing need. That would be on the defensive line, where San Francisco lost three out of its four starters.
Regardless, ChatGPT was a fan of Burden’s yards-after-catch abilities and how he pairs with running back Christian McCaffrey and tight end George Kittle in head coach Kyle Shanahan’s offensive system.
12. Dallas Cowboys: Tyler Booker, OG, Alabama
Once again, ChatGPT went for a depth need for a team rather than a more pressing need at another spot on the roster. It pointed to Dallas’ need for a successor to Zack Martin on the interior following the 34-year-old’s retirement earlier this offseason.
The AI chatbot also pointed to how successful previous Cowboys teams were when they had dominant offensive lines. Booker would be a nice option across from left guard Tyler Smith to fortify Dallas’ line in 2025 and beyond.
13. Miami Dolphins: Walter Nolen, DT, Mississippi
For the first time since the Jaguars took Mason Graham with the sixth overall pick, another defensive player comes off the board. ChatGPT did a nice job making this selection for Miami.
Calais Campbell headed to Arizona in free agency, and the Dolphins did not do much to address their interior defensive line earlier this offseason. They currently have four D-linemen on the roster: Zach Sieler, Benito Jones, Matt Dickerson and Neil Farrell. ChatGPT says Nolen has potential to reach “top-5-in-the-NFL type disruption if he continues to develop.”
14. Indianapolis Colts: Colston Loveland, TE, Michigan
ChatGPT filled the Colts’ clear biggest need with the No. 14 pick. “He gives the Colts something they don’t have—a legit TE1 with Pro Bowl upside—and makes the offense more dangerous from Day 1,” it wrote.
Indeed, Indianapolis is in dire need of an upgrade at tight end after no player at the position finished with more than 200 receiving yards for the Colts last year. Loveland’s large frame, good hands and route-running prowess were all enough to make him the best pick here, according to the AI chatbot.
15. Atlanta Falcons: Mike Green, Edge, Marshall
The Falcons have to keep taking swings at pass-rushing talent until they succeed. Even after trading for veteran Matthew Judon last offseason, Atlanta finished 27th in pass-rush win rate, according to ESPN.
ChatGPT appeared to be well aware of the Falcons’ need for a talented young edge rusher, and it sent the nation’s sack leader in 2024 to Atlanta with the 15th pick.
16. Arizona Cardinals: James Pearce Jr., Edge, Tennessee
Back-to-back edge rusher picks for teams with a need at the position. Arizona was even worse than Atlanta at rushing the passer according to ESPN’s analytics, ranking 28th in the league with a 33% win rate.
ChatGPT gave the Cardinals James Pearce Jr. – “one of the top pure edge rushers in the draft,” it wrote – to swing things in a better direction for the NFC West contender.
17. Cincinnati Bengals: Jalon Walker, Edge, Georgia
There is a bit of a run on edge rushers here through the halfway point of the first round. Walker is a special case because he has the versatility to play both on the edge and as an off-ball linebacker.
The Bengals need some insurance at edge with a disgruntled Trey Hendrickson recently requesting a trade out of Cincinnati. ChatGPT addressed that need and another Bengals need for an off-ball linebacker with a versatile athlete in Walker.
18. Seattle Seahawks: Jahdae Barron, CB, Texas
Cornerback is far from the top priority for Seattle in the draft, but there is an argument to be made. Devon Witherspoon is the only corner the Seahawks will have under contract past this year barring any contract extensions. Barron would provide excellent depth that would extend into the team’s future.
ChatGPT wrote that Barron’s versatility and athleticism gave him a high ceiling as a prospect in its justification for the pick.
19. Tampa Bay Buccaneers: Shemar Stewart, Edge, Texas A&M
After a long stretch of primarily offensive players, ChatGPT has flipped the script with a long run of only defensive players. This time it’s another edge rusher, Stewart, going to Tampa Bay with the 19th overall pick.
The AI chatbot pointed to his “explosive athleticism and ability to disrupt the quarterback” as a good fit for a Buccaneers pass-rush attack that could use enhancing. The team has veteran Haason Reddick for the coming season, but it could use a longer-term answer at the key position.
20. Denver Broncos: Matthew Golden, WR, Texas
Courtland Sutton got off to a great start building his chemistry with rookie quarterback Bo Nix last year, but the team still needs receiver help to go along with Sutton in the passing game. That’s where Golden comes in, as ChatGPT suggested his “explosive speed and ability to make plays after the catch” were part of what made him a good fit.
The chatbot wrote it was also considering Ashton Jeanty, but his slide continues out of the top 20 instead.
21. Pittsburgh Steelers: Jalen Milroe, QB, Alabama
As of the time of writing, the Steelers would be entering the 2025 season with Mason Rudolph as their starting quarterback and Skylar Thompson backing him up. ChatGPT gave them a younger option – Milroe – to develop before giving him a chance to take over as their future franchise quarterback.
“If the Steelers were to select him at pick No. 21,” the chatbot wrote, “it would be with the intent to develop his potential … aligning with their long-term strategic goals.”
22. Los Angeles Chargers: Ashton Jeanty, RB, Boise State
The slide finally stops for the best running back prospect in the 2025 NFL draft class. ChatGPT partnered the Heisman Trophy runner-up with head coach Jim Harbaugh in Los Angeles after running backs Gus Edwards and J.K. Dobbins hit free agency.
“Jeanty offers excellent speed, vision, and versatility as a dual-threat back,” it wrote. “He would add a dynamic element to the Chargers’ offense, complementing the passing game and making the running game much more potent.”
23. Green Bay Packers: Derrick Harmon, DT, Oregon
Despite the Packers’ more pressing need for an edge rusher, there were just too many off the board at this point in the mock draft. ChatGPT went with a versatile defensive tackle with experience playing at the nose tackle and 3-technique positions on the defensive line.
Given that Green Bay generally needs help rushing the passer, regardless of which position is doing it, it doesn’t hurt to bring in the power conferences’ leader in pressures from the interior.
24. Minnesota Vikings: Grey Zabel, OG, North Dakota State
Minnesota poached a couple of interior offensive linemen – center Ryan Kelly and right guard Will Fries – from the Indianapolis Colts in free agency. However, some extra help could still be needed on the left side of the interior.
Zabel has played four out of the five positions on the offensive line and projects as a guard at the pro level. ChatGPT wrote, “His fit within the Vikings’ zone-blocking scheme and his technical prowess in the run game address a critical gap on the offensive line.”
25. Houston Texans: Donovan Jackson, OL, Ohio State
ChatGPT had this to say about why Jackson is a good fit in Houston: “Ultra-athletic, battle-tested interior lineman with positional versatility. He’s great at pulling, climbing to the second level, and would instantly help protect C.J. Stroud—his former college teammate.”
Indeed, Jackson played in all 13 of Ohio State’s games as a freshman with Stroud under center and was named a starter for the team the following year, Stroud’s final one with the Buckeyes. He could immediately replace Shaq Mason on the right side after the Texans released their former guard.
26. Los Angeles Rams: Maxwell Hairston, CB, Kentucky
No player invited to the 2025 NFL combine was faster than Kentucky cornerback Maxwell Hairston. The speedster is a scheme-versatile cornerback with excellent ball skills, tallying five interceptions in a full 2023 season and one in an injury-shortened 2024.
ChatGPT pointed out the Rams’ need to defend excellent receivers in a stacked NFC West: San Francisco’s Jauan Jennings and Brandon Aiyuk, Arizona’s Marvin Harrison Jr. and Seattle’s Jaxon Smith-Njigba and Cooper Kupp. The AI chatbot envisions Hairston as a rotational corner in Los Angeles’ secondary, if not a potential starter.
27. Baltimore Ravens: Donovan Ezeiruaku, Edge, Boston College
A 16.5-sack season from Ezeiruaku pushed the Boston College star firmly into first-round discussion. His 2024 tape showed a player that has the potential to be a plug-and-play, Week 1 starter for the team that drafts him, making him a great fit for a Ravens squad that needs help in its pass-rush attack.
ChatGPT wrote, “His proven track record of quarterback pressures and sacks positions him as a valuable asset to the Ravens’ defense.”
28. Detroit Lions: Princely Umanmielen, Edge, Mississippi
Given the amount of edge rusher talent off the board at this late stage of the first round, the Lions had to reach slightly to get the next best available. Most analysts project Umanmielen as a second-round pick, but ChatGPT listed him as a “Back-end Round 1 pass rusher with serious juice.”
The AI bot likes his upside and potential to contribute as a rotational pass rusher early on. “Twitched-up pass rusher with great first-step quickness and bend,” it wrote. “Still refining his game, but he’s shown flashes of Round 1 ability. Could rotate early and eventually start opposite Aidan Hutchinson.”
29. Washington Commanders: Azareye’h Thomas, CB, Florida State
ChatGPT wrote: “With good size (6’1″, 197 lbs) and solid production at Florida State, he could help immediately on Washington’s defense. His physical play style and coverage skills make him a solid pick for a team looking to improve at cornerback.”
Thomas recorded 53 tackles and an interception in 2024, one year after a 2023 season that featured 29 tackles, a forced fumble, 10 passes defensed and 0.5 sacks. ChatGPT pointed to Thomas’ size and coverage ability as major parts of what makes him a good fit for Washington’s defense.
30. Buffalo Bills: Nick Emmanwori, S, South Carolina
So many edge rushers are off the board, leaving the Bills almost no choice but to address a different position with the No. 30 pick. So that’s precisely what ChatGPT did, sending one of this draft class’s best athletes to Buffalo to bolster its defensive secondary.
“While players like Damar Hamlin and Taylor Rapp are on the roster, Emmanwori’s skill set offers a unique blend of coverage ability and physicality,” it wrote. “His proficiency in both zone and man coverage schemes, coupled with his tackling prowess, aligns well with the Bills’ defensive philosophy.”
31. Kansas City Chiefs: Josh Conerly Jr., OT, Oregon
ChatGPT wrote, “He gives KC a future starter at tackle with the athletic profile to thrive in their pass-heavy offense. He’s raw, but that’s what Reid’s staff does best — develop traits into production.”
It is hard to disagree with that logic, especially after Kansas City was forced to lean on left guard Joe Thuney to play at its left tackle spot for several games. The line will need even more help now that Thuney is gone via trade to the Bears.
32. Philadelphia Eagles: Josh Simmons, OT, Ohio State
“The Eagles value versatility, and Simmons’ ability to play both tackle spots adds value,” the chatbot wrote. “Additionally, having a young, athletic offensive lineman who can be developed into a potential starter when the team needs it fits their long-term strategy. Even though Simmons might not start immediately, his growth potential aligns well with the Eagles’ track record of developing players on the offensive line.”
This is all true. The Eagles greatly value what they have in offensive line coach Jeff Stoutland, who has been a massive factor in getting Philadelphia’s O-line to the outstanding level it’s played at in recent years. Simmons only makes the future of the Eagles’ offensive line even more formidable.
An illustration photograph taken on Feb. 20, 2025, shows the Grok, DeepSeek and ChatGPT apps displayed on a phone screen.
When the U.S. Department of Justice originally brought — and then won — its case against Google, arguing that the tech behemoth monopolized the search engine market, the focus was on, well … search.
Back then, in 2020, the government’s antitrust complaint against Google had few mentions of artificial intelligence or AI chatbots. But nearly five years later, as the remedy phase of the trial enters its second week of testimony, the focus has shifted to AI, underscoring just how quickly this emerging technology has expanded.
In the past few days, before a federal judge who will assess penalties against Google, the DOJ has argued that the company could use its artificial intelligence products to strengthen its monopoly in online search — and to use the data from its powerful search index to become the dominant player in AI.
In his opening statements last Monday, David Dahlquist, the acting deputy director of the DOJ’s antitrust civil litigation division, argued that the court should consider remedies that could nip a potential Google AI monopoly in the bud. “This court’s remedy should be forward-looking and not ignore what is on the horizon,” he said.
Dahlquist argued that Google has created a system in which its control of search helps improve its AI products, sending more users back to Google search — creating a cycle that maintains the tech company’s dominance and blocks competitors out of both marketplaces.
The integration of search and Gemini, the company’s AI chatbot — which the DOJ sees as powerful fuel for this cycle — is a big focus of the government’s proposed remedies. The DOJ is arguing that to be most effective, those remedies must address all ways users access Google search, so any penalties approved by the court that don’t include Gemini (or other Google AI products now or in the future) would undermine their broader efforts.
Department of Justice lawyer David Dahlquist leaves the Washington, D.C., federal courthouse on Sept. 20, 2023, during the original trial phase of the antitrust case against Google.
AI and search are connected like this: Search engine indices are essentially giant databases of pages and information on the web. Google has its own such index, which contains hundreds of billions of webpages and is over 100,000,000 gigabytes, according to court documents. This is the data Google’s search engine scans when responding to a user’s query.
AI developers use these kinds of databases to build and train the models used to power chatbots. In court, attorneys for the DOJ have argued that Google’s Gemini pulls information from the company’s search index, including citing search links and results, extending what they say is a self-serving cycle. They argue that Google’s ability to monopolize the search market gives it user data, at a huge scale — an advantage over other AI developers.
The Justice Department argues Google’s monopoly over search could have a direct effect on the development of generative AI, a type of artificial intelligence that uses existing data to create new content like text, videos or photos, based on a user’s prompts or questions. Last week, the government called executives from several major AI companies, like OpenAI and Perplexity, in an attempt to argue that Google’s stranglehold on search is preventing some of those companies from truly growing.
The government argues that to level the playing field, Google should be forced to open its search data — like users’ search queries, clicks and results — and license it to other competitors at a cost.
This is on top of demands related to Google’s search engine business, most notably that it should be forced to sell off its Chrome browser.
Google flatly rejects the argument that it could monopolize the field of generative AI, saying competition in the AI race is healthy. In a recent blog post on Google’s website, Lee-Anne Mulholland, the company’s vice president of regulatory affairs, wrote that since the federal judge first ruled against Google over a year ago, “AI has already rapidly reshaped the industry, with new entrants and new ways of finding information, making it even more competitive.”
In court, Google’s lawyers have argued that there are a host of AI companies with chatbots — some of which are outperforming Gemini. OpenAI has ChatGPT, Meta has MetaAI and Perplexity has Perplexity AI.
“There is no shortage of competition in that market, and ChatGPT and Meta are way ahead of everybody in terms of the distribution and usage at this point,” said John E. Schmidtlein, a lawyer for Google, during his opening statement. “But don’t take my word for it. Look at the data. Hundreds and hundreds of millions of downloads by ChatGPT.”
Competing in a growing AI field
It should be no surprise that AI is coming up so much at this point in the trial, said Alissa Cooper, the executive director of the Knight-Georgetown Institute, a nonpartisan tech research and policy center at Georgetown University focusing on AI, disinformation and data privacy.
“If you look at search as a product today, you can’t really think about search without thinking about AI,” she said. “I think the case is a really great opportunity to try to … analyze how Google has benefited specifically from the monopoly that it has in search, and ensure that the behavior that led to that can’t be used to gain an unfair advantage in these other markets which are more nascent.”
Having access to Google’s data, she said, “would provide them with the ability to build better chatbots, build better search engines, and potentially build other products that we haven’t even thought of.”
To make that point, the DOJ called Nick Turley, OpenAI’s head of product for ChatGPT, to the stand last Tuesday. During a long day of testimony, Turley detailed how without access to Google’s search index and data, engineers for the growing company tried to build their own.
ChatGPT, a large language model that can generate human-like responses, engage in conversations and perform tasks like explaining a tough-to-understand math lesson, was never intended to be a product for OpenAI, Turley said. But once it launched and went viral, the company found that people were using it for a host of needs.
Though popular, ChatGPT had its drawbacks, like the bot’s limited “knowledge,” Turley said. Early on, ChatGPT was not connected to the internet and could only use information that it had been fed up to a certain point in its training. For example, Turley said, if a user asked “Who is the president?” the program would give a 2022 answer — from when its “knowledge” effectively stopped.
OpenAI couldn’t build its own index fast enough to address those problems; the company found the process incredibly expensive, time-consuming and potentially years from coming to fruition, Turley said.
So instead, it sought a partnership with a third-party search provider. At one point, OpenAI tried to make a deal with Google to gain access to its search, but Google declined, seeing OpenAI as a direct competitor, Turley testified.
But Google says companies like OpenAI are doing just fine without gaining access to the tech giant’s own technology — which it spent decades developing. These companies just want “handouts,” said Schmidtlein.
On the third day of the remedy trial, internal Google documents shared in court by the company’s lawyers compared how many people are using Gemini versus its competitors. According to those documents, ChatGPT and MetaAI are the two leaders, with Gemini coming in third.
They showed that this March, Gemini saw 35 million active daily users and 350 million monthly active users worldwide. That was up from 9 million daily active users in October 2024. But according to those documents, Gemini was still lagging behind ChatGPT, which reached 160 million daily users and around 600 million active users in March.
These numbers show that competitors have no need to use Google’s search data, valuable intellectual property that the tech giant spent decades building and maintaining, the company argues.
“The notion that somehow ChatGPT can’t get distribution is absurd,” Schmidtlein said in court last week. “They have more distribution than anyone.”
Google’s exclusive deals
In his ruling last year, U.S. District Judge Amit Mehta said Google’s exclusive agreements with device makers, like Apple and Samsung, to make its search engine the default on those companies’ phones helped maintain its monopoly. It remains a core issue for this remedy trial.
Now, the DOJ is arguing that Google’s deals with device manufacturers are also directly affecting AI companies and AI tech.
In court, the DOJ argued that Google has replicated this kind of distribution deal by agreeing to pay Samsung what Dahlquist called a monthly “enormous sum” for Gemini to be installed on smartphones and other devices.
Last Wednesday, the DOJ also called Dmitry Shevelenko, Perplexity’s chief business officer, to testify that Google has effectively cut his company out from making deals with manufacturers and mobile carriers.
Perplexity AI is not preloaded on any mobile devices in the U.S., despite many efforts to get phone companies to establish Perplexity as a default or exclusive app on devices, Shevelenko said. He compared Google’s control in that space to that of a “mob boss.”
But Google’s attorney, Christopher Yeager, noted in questioning Shevelenko that Perplexity has reached a valuation of over $9 billion — insinuating the company is doing just fine in the marketplace.
Despite testifying in court (for which he was subpoenaed, Shevelenko noted), he and other leaders at Perplexity are against the breakup of Google. In a statement on the company’s website, the Perplexity team wrote that neither forcing Google to sell off Chrome nor making it license search data to competitors is the right solution. “Neither of these address the root issue: consumers deserve choice,” they wrote.
Google and Alphabet CEO Sundar Pichai departs federal court after testifying in October 2023 in Washington, D.C. Pichai testified to defend his company in the original antitrust trial and is expected to testify again during the remedy phase of the legal proceedings.
What to expect next
This week the trial continues, with the DOJ calling its final witnesses this morning to testify about the feasibility of a Chrome divestiture and how the government’s proposed remedies would help rivals compete. On Tuesday afternoon, Google will begin presenting its case, which is expected to feature the testimony of CEO Sundar Pichai, although the date of his appearance has not been specified.
Closing arguments are expected at the end of May, and then Mehta will make his ruling. Google says once this phase is settled the company will appeal Mehta’s ruling in the underlying case.
Whatever Mehta decides in this remedy phase, Cooper thinks it will have effects beyond just the business of search engines. No matter what it is, she said, “it will be having some kind of impact on AI.”
Google is a financial supporter of NPR.
Meta’s Llama API runs 18x faster than OpenAI: Cerebras partnership delivers 2,600 tokens per second
Published 2 hours ago on April 29, 2025
Meta today announced a partnership with Cerebras Systems to power its new Llama API, offering developers access to inference speeds up to 18 times faster than traditional GPU-based solutions.
The announcement, made at Meta’s inaugural LlamaCon developer conference in Menlo Park, positions the company to compete directly with OpenAI, Anthropic and Google in the rapidly growing AI inference services market, where developers buy tokens by the billions to power their applications.
“Meta has selected Cerebras to collaborate to deliver the ultra-fast inference they need to serve developers through their new Llama API,” said Julie Shin Choi, chief marketing officer at Cerebras, during a press briefing. “At Cerebras we are very, very excited to announce our first CSP hyperscaler partnership to deliver ultra-fast inference to all developers.”
The partnership marks Meta’s formal entry into the business of selling AI computation, transforming its popular open-source Llama models into a commercial service. While Meta’s Llama models have racked up more than a billion downloads, until now the company had not offered first-party cloud infrastructure for developers to build applications with them.
“This is very exciting, even without talking about Cerebras specifically,” said James Wang, a senior executive at Cerebras. “OpenAI, Anthropic, Google: they have built an entirely new AI business from scratch, which is the AI inference business. Developers building AI applications will buy tokens by the millions, sometimes by the billions. And these are like the new compute instructions that people need to build AI applications.”
Breaking the speed barrier: how Cerebras supercharges Llama models
What sets Meta’s offering apart is the dramatic speed increase provided by Cerebras’ specialized AI chips. The Cerebras system delivers over 2,600 tokens per second for Llama 4 Scout, compared with roughly 130 tokens per second for ChatGPT and around 25 tokens per second for DeepSeek, according to benchmarks from Artificial Analysis.
“If you just compare API to API, Gemini and GPT, they’re all great models, but they all run at GPU speeds, which is roughly 100 tokens per second,” Wang explained. “And 100 tokens per second is okay for chat, but it’s very slow for reasoning. It’s very slow for agents. And people are struggling with that today.”
This speed advantage enables entirely new categories of applications that were previously impractical, including real-time agents, conversational low-latency voice systems, interactive code generation, and instant multi-step reasoning, all of which require chaining multiple large language model calls that can now be completed in seconds rather than minutes.
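The compounding effect of chained calls is easy to see with some back-of-the-envelope arithmetic. The sketch below uses the throughput figures quoted above; the 10-step, 500-tokens-per-step workflow is purely illustrative:

```python
# Rough latency estimate for a multi-step agent workflow, using the
# tokens-per-second figures quoted in the article. The workflow shape
# (10 steps of 500 tokens each) is an illustrative assumption.
RATES = {
    "Cerebras (Llama 4 Scout)": 2600,  # tokens/sec, per Artificial Analysis
    "typical GPU-backed API": 130,     # tokens/sec, per Artificial Analysis
}

def chain_latency(steps: int, tokens_per_step: int, tokens_per_sec: float) -> float:
    """Seconds to generate `steps` chained responses of `tokens_per_step` tokens each."""
    return steps * tokens_per_step / tokens_per_sec

# A 10-step agent producing 500 tokens per step:
for name, rate in RATES.items():
    print(f"{name}: {chain_latency(10, 500, rate):.1f} s")
```

At these rates, the same workflow drops from roughly 38 seconds to under 2 seconds, which is the difference between a batch job and an interactive experience.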
The Llama API represents a significant shift in Meta’s AI strategy, transitioning the company from model provider to full-service AI infrastructure company. By offering an API service, Meta is creating a revenue stream from its AI investments while maintaining its commitment to open models.
“Meta is now in the business of selling tokens, and it’s great for the American AI ecosystem,” Wang noted during the press conference. “They bring a lot to the table.”
The API will offer tools for fine-tuning and evaluation, starting with the Llama 3.3 8B model, allowing developers to generate data, train on it, and test the quality of their custom models. Meta emphasizes that it will not use customer data to train its own models, and models built with the Llama API can be transferred to other hosts, a clear differentiation from the more closed approaches of some competitors.
Cerebras will power Meta’s new service through its network of data centers located across North America, including facilities in Dallas, Oklahoma, Minnesota, Montreal, and California.
“All of our data centers that serve inference are in North America at this time,” Choi explained. “We will be serving Meta with the full capacity of Cerebras. The workload will be balanced across all of these different data centers.”
The business arrangement follows what Choi described as “the classic compute provider to a hyperscaler” model, similar to the way Nvidia provides hardware to major cloud providers. “They are reserving blocks of our compute so that they can serve their developer population,” she said.
Beyond Cerebras, Meta has also announced a partnership with Groq to provide fast inference options, giving developers multiple high-performance alternatives beyond traditional GPU-based inference.
Meta’s entry into the inference API market with superior performance metrics could disrupt the established order dominated by OpenAI, Google, and Anthropic. By combining the popularity of its open-source models with dramatically faster inference capabilities, Meta is positioning itself as a formidable competitor in the commercial AI space.
“Meta is in a unique position with 3 billion users, hyperscale data centers, and a huge developer ecosystem,” according to Cerebras’ presentation materials. The integration of Cerebras technology “helps Meta leapfrog OpenAI and Google in performance by approximately 20x.”
For Cerebras, this partnership represents a major milestone and a validation of its specialized AI hardware approach. “We have been building this wafer-scale engine for years, and we always knew the technology was first-rate, but ultimately it had to end up as part of someone else’s hyperscale cloud. That was the final target from a commercial strategy perspective, and we have finally reached that milestone,” Wang said.
The Llama API is currently available as a limited preview, with Meta planning a broader rollout in the coming weeks and months. Developers interested in accessing ultra-fast Llama 4 inference can request early access by selecting Cerebras from the model options within the Llama API.
“If you imagine a developer who knows nothing about Cerebras, because we’re a relatively small company, they can just click two buttons in Meta’s standard SDK, generate an API key, select the Cerebras flag, and then all of a sudden their tokens are being processed on a giant wafer-scale engine,” Wang explained. “Having us on the back end of Meta’s whole developer ecosystem is tremendous for us.”
Meta’s choice of specialized silicon signals something deeper: in the next phase of AI, it’s not just what your models know, but how fast they can think it. In that future, speed isn’t just a feature; it’s the whole point.
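The two-click flow Wang describes was not publicly documented at the time of writing, so the following is only a hedged sketch of what an SDK request routed to a Cerebras-backed endpoint might look like. The model identifier, the `provider` routing field, and the helper function name are all illustrative assumptions, not confirmed Llama API details:

```python
# Hypothetical sketch only: the model id and "provider" routing flag
# below are assumptions for illustration, not confirmed Llama API fields.
import json

def build_llama_api_request(prompt: str, provider: str = "cerebras") -> dict:
    """Assemble a chat-completion-style payload routed to a chosen inference provider."""
    return {
        "model": "llama-4-scout",  # assumed model identifier
        "provider": provider,      # assumed routing flag for the fast-inference backend
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_llama_api_request("Summarize wafer-scale inference in one sentence.")
print(json.dumps(payload, indent=2))
```

The point of the sketch is the shape of the developer experience: the provider choice is a single field in an otherwise standard chat-completion payload, so switching backends requires no application rewrite.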


The rapid pace of generative AI innovation has vendors pushing out new large language models (LLMs) seemingly without pause.
Among these prominent LLM vendors is Google. Its Gemini model family is the successor to the Pathways Language Model (PaLM). Google Gemini debuted in December 2023 with the 1.0 release, and Gemini 1.5 Pro followed in February 2024. Gemini 2.0, announced in December 2024, became available in February 2025. On March 25, 2025, Google announced Gemini 2.5 Pro Experimental, continuing the rapid pace of innovation.
The Google Gemini 2.5 Pro model entered the LLM landscape as the market shifts toward reasoning models, such as DeepSeek R1 and OpenAI’s o3, as well as hybrid reasoning models, including Anthropic’s Claude 3.7 Sonnet.
What is Gemini 2.5 Pro?
Gemini 2.5 Pro is an LLM developed by Google DeepMind. When it debuted in March 2025, it was Google’s most advanced AI model, surpassing the capabilities and performance of previous Gemini iterations.
As with Gemini 2.0, Gemini 2.5 Pro is a multimodal LLM, meaning it is not just for text. It processes and analyzes text, images, audio, and video. The model also has strong coding capabilities, surpassing earlier Gemini models.
Gemini 2.5 Pro is the first model in the Gemini series purpose-built as a “thinking model,” with advanced reasoning functionality as a core capability. In some respects, it builds on a Gemini 2.0 variant, Flash Thinking, which provided limited reasoning capabilities. Advanced models such as Gemini 2.5 Pro spend more inference time reasoning, or “thinking,” through the steps required to execute a prompt, going beyond basic chain-of-thought prompting to enable more nuanced output, often with greater depth and accuracy.
Google applied advanced techniques, including reinforcement learning and improved post-training, to boost Gemini 2.5 Pro’s performance over previous models. The model launched with a one-million-token context window, with plans to expand to 2 million tokens.
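To put a one-million-token context window in perspective, a common rule of thumb for English text is roughly four characters, or three-quarters of a word, per token. These ratios are approximations, not measured figures for Gemini’s tokenizer:

```python
# Rough capacity estimate for a 1M-token context window.
# The ~4 chars/token and ~0.75 words/token ratios are common rules of
# thumb for English text, not exact figures for any specific tokenizer.
CONTEXT_TOKENS = 1_000_000

approx_chars = CONTEXT_TOKENS * 4
approx_words = int(CONTEXT_TOKENS * 0.75)

print(f"~{approx_chars:,} characters, ~{approx_words:,} words")
```

By that estimate, a single prompt can hold on the order of 750,000 words, several long novels’ worth of text, which is what makes the long-document analysis use cases described later in this article practical.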
What’s new in Gemini 2.5 Pro?
Gemini 2.5 Pro’s new capabilities and improved functionality elevate the Google Gemini LLM family.
Key improvements include the following:
- Improved reasoning. The headline feature of Gemini 2.5 Pro is its improved reasoning capability. According to Google, Gemini 2.5 Pro outperforms OpenAI o3, Anthropic’s Claude 3.7 Sonnet, and DeepSeek R1 on reasoning and knowledge benchmarks, including Humanity’s Last Exam.
- Advanced coding capabilities. According to Google, Gemini 2.5 Pro also surpasses previous iterations in coding. Like its predecessors, the model generates and debugs code and creates visually compelling applications. It supports code generation and execution, enabling it to test and refine its own solutions. Gemini 2.5 Pro scored 63.8% on SWE-bench Verified, an industry standard for agentic coding evaluations, with a custom agent setup, outperforming Claude 3.7 Sonnet and OpenAI’s GPT-4.5.
- Advanced math and science skills. Google also claims improved math and science capabilities. On the AIME 2025 math benchmark, Gemini 2.5 Pro scored 86.7%; on the GPQA Diamond science benchmark, it achieved 84%. Both scores surpassed its rivals.
- Native multimodality. Building on family strengths, Gemini 2.5 Pro retains native multimodal capabilities, enabling it to understand and work with text, audio, images, video, and entire code repositories.
- Real-time processing. Despite the increased capabilities, the model maintains reasonable latency, making it suitable for real-time applications and interactive use cases.
How does Gemini 2.5 Pro benefit Google?
The Gemini 2.5 Pro model improves Google’s services, and its standing among peers, in the following ways:
Competitive leadership
The highly competitive LLM market features major global competitors: Meta’s Llama family, OpenAI’s GPT-4o and o3, Anthropic’s Claude, and xAI’s Grok, plus China’s DeepSeek, all vying for market share. At launch, Gemini 2.5 Pro immediately shot to the top of the LMArena leaderboard for AI benchmarking, strengthening Google’s standing as a leading LLM developer for organizations to consider.
Better results in Google applications
At launch, Gemini 2.5 Pro was not yet integrated into Google’s product suite, including Search and the Google Workspace applications. However, its eventual integration promises to enhance multiple services. For Google Search, the improved reasoning capabilities could provide more nuanced and accurate answers to complex queries. In Google Docs and other Workspace applications, the model’s improved context understanding enables more sophisticated document analysis and content generation.
Developer focus
The model’s advanced code generation and execution abilities also strengthen Google’s position in developer tools and services, improving function calling and workflow automation in Google’s cloud services.
Uses for Gemini 2.5 Pro
Gemini 2.5 Pro supports a variety of tasks, including:
- Question answering. Gemini is a resource for foundational question-and-answer interactions, drawing on Google’s training data.
- Multimodal content summarization. As a multimodal model, Gemini 2.5 Pro summarizes long-form text, audio, or video content.
- Multimodal question answering. The model combines information from text, images, audio, and video to answer questions that span multiple modalities.
- Text content generation. Like its predecessors, Gemini 2.5 Pro handles text generation.
- Complex problem-solving. With its advanced reasoning capabilities, Gemini 2.5 Pro manages tasks requiring logical reasoning, such as math, science, and structured analysis.
- Deep research. The model’s extended context window and reasoning capabilities make it well suited to analyzing long documents, synthesizing information from multiple sources, and conducting in-depth research.
- Advanced coding tasks. Gemini 2.5 Pro generates and debugs code, supporting application development.
- Agentic AI. The model’s advanced reasoning, function calling, and tool use support its value as part of an agentic AI workflow.
Which platforms support Gemini 2.5 Pro integration?
Following in the footsteps of the Gemini family, Gemini 2.5 Pro is set for integration across a series of Google services, including:
- Google AI Studio. At launch, the new model is available in Google AI Studio, a web-based tool that lets developers test models directly in the browser.
- Gemini app. From the drop-down model selection menu, subscribers to the Gemini Advanced service can access the model through the Gemini app on desktop and mobile platforms.
- Vertex AI. Google plans to make Gemini 2.5 Pro available on its Vertex AI platform, enabling enterprises to use the model for larger-scale deployments.
- Gemini API. Although not available at launch, all previous Gemini versions have been accessible through an application programming interface that lets developers integrate the model directly into their applications.
Sean Michael Kerner is an IT consultant, technology enthusiast and tinkerer. He has pulled Token Ring, configured NetWare and has been known to compile his own Linux kernel. He consults with industry and media organizations on technology issues.
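Earlier Gemini versions are served through the API’s `generateContent` REST pattern; the sketch below only assembles the request shape without making a network call. The 2.5 Pro model identifier used here is a placeholder assumption, since the model was not yet exposed through the API at launch:

```python
# Sketch of the Gemini API generateContent request shape.
# The model identifier below is a hypothetical placeholder; at launch,
# Gemini 2.5 Pro was not yet available through the API.
def build_generate_content_request(model: str, prompt: str) -> tuple[str, dict]:
    """Return the endpoint path and JSON body for a generateContent call."""
    path = f"/v1beta/models/{model}:generateContent"
    body = {"contents": [{"parts": [{"text": prompt}]}]}
    return path, body

path, body = build_generate_content_request(
    "gemini-2.5-pro-exp",  # hypothetical identifier
    "Summarize this document in three bullet points.",
)
print(path)
```

An actual call would POST this body to `generativelanguage.googleapis.com` with an API key; the structural takeaway is that prompts travel as `parts` inside a `contents` array, which is how the API also carries images and other modalities.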