Meet Kevin Weil, the Man Turning Sam Altman’s Visions Into Reality

Kevin Weil showed no sign of the heat when he took to the stage at the Marriott Marquis in downtown San Francisco on an unseasonably sweaty early October morning.

Trim and tan, outfitted in the de rigueur Silicon Valley uniform of a slim-fit gray tee, stonewashed blue jeans, and an Apple Watch Ultra, the chief product officer of OpenAI talked easily about the ambitious artificial intelligence-powered future his red-hot employer was building.

“You could imagine a world where you ask it a hard question about how you cure some particular form of cancer, and you let it think for five hours, five days, five months,” Weil prophesied in response to a question about the “reasoning capabilities” of AI.

But neither he nor his onstage interlocutor, Anyscale cofounder Robert Nishihara, ever acknowledged the elephant in the room: Weil wasn’t supposed to be there.

The event, the AI infrastructure conference Ray Summit, had originally booked OpenAI’s chief technology officer, Mira Murati, to speak. Just days before, the high-profile executive had abruptly quit, prompting the last-minute swap. Murati’s exit added to a long list of recent departures from OpenAI, one of the world’s most valuable and hyped startups, even as it closed a historic $6.6 billion funding round on the day Weil spoke.

If Sam Altman is the starry-eyed visionary of OpenAI, Weil is its executor. He leads a product team that turns blue-sky research into products and services it can sell, putting him at the center of a philosophical rift that has caused spectacular upheaval at the company, which was recently valued at $157 billion.

In two years, OpenAI went from a nonprofit lab nominally working to develop digital intelligence for the public good to a world-famous startup that puts out shiny new products and models every few months. The company is now attempting to become a for-profit operation to lure would-be backers to write bigger checks, which it needs to scale its business. Altman recently announced that the company’s flagship product, ChatGPT, now has 300 million weekly users, triple the number it had a year ago.

Along with this astronomical growth, OpenAI has suffered a brain drain: its chief research officer, its head of AGI readiness, a co-lead of its video-generation model Sora, and more have all departed. While this has set off alarm bells in some corners of the tech industry, it has also elevated the profile of the senior leaders who have remained. That includes Weil, a relative OpenAI newcomer who joined in June and rapidly became one of its most notable ambassadors.

At a point when employee voices of dissent were growing louder, 41-year-old Weil arrived as a steady-handed product guru with a Midas touch. He was a longtime Twitter insider who created products that made the social media company money during a revolving door of chief executives.

At Instagram, he helped kneecap Snapchat’s growth with competitive product releases such as Stories and live video. (Not everything he touched turned to gold, though. Weil also led Facebook’s headlong charge into financial services as a cofounder of Libra, its ill-fated stablecoin.)

Weil declined to be interviewed for this article, which is based on conversations with five former senior colleagues, four of whom spoke on the record, and his past public interviews.

Twitter and Facebook were no strangers to chaos and scandal, but OpenAI’s workplace turmoil rivals even their most challenging times. There was a failed coup last year, bitter feuds among some workers, and an ongoing existential arms race to build “digital gods.”

Weil’s peers are certain he’s the man to promote harmony. He’s spent the better part of 15 years cranking out products that mostly delighted users and made money. He also has something his new employer desperately needs: the ability to pursue the company’s best interests and balance human emotions at the same time.


Kevin Weil isn’t a household name. But for those in the know in Silicon Valley, he’s something better: “He’s the get-shit-done guy,” said James Everingham, who worked with Weil at Instagram and Facebook.

Sarah Friar, OpenAI’s new head of finance, has long admired Weil’s product chops. Former Twitter boss Adam Bain called Weil “Twitter’s secret weapon” for driving ad revenue. Everingham also described Weil as a workhorse who never shrank from a deadline.

“He brings that stamina, that dogged focus on the outcome,” said April Underwood, a venture capitalist who worked closely with Weil on Twitter’s ad products.

Weil’s relentless work ethic and people skills are recurring themes among former colleagues. At Facebook, two colleagues said he would often badge into the office first and fire off messages late into the night.

One colleague from Planet, a satellite imagery company where Weil worked as president until May of 2024, recalled how he started each week by posting in Slack the top three things on his mind. According to Everingham, Weil’s a rare breed of product manager who codes almost as well as he writes memos. That endeared Weil to the engineers he needed to build products.

Weil shrugs off the pressures of work with long runs and Diet Mountain Dew. Every birthday, he runs his age in miles and, in June, marked his 41st year with a 41-mile jaunt near his home in the posh California suburb of Portola Valley, according to a public Instagram post. Weil lives with his venture capitalist wife, Elizabeth, and three children.

“He is a little bit superhuman in just the sheer amount that he works and works out,” said Ashley Johnson, chief financial officer and now president of Planet.


Kevin Weil holds a scrap of paper with the words "$100 Ad Credit" on it.

Former colleagues say Kevin Weil was key to turning Twitter into a crucial arena for advertisers.

Brian Ach/Getty Images for TechCrunch



Weil set aside a Stanford doctorate in theoretical particle physics to carve out a path in tech, according to a 2017 speech he gave at his alma mater. In 2009, Weil landed at Twitter as a data scientist. At the time, Twitter had little revenue, never mind profit, to show for its many millions of users. That left investors wondering how Twitter would translate its popularity into money.

When Twitter began developing ads a year later, Weil stepped up to lead it. Katie Jacobs Stanton, a venture capitalist who overlapped with Weil at Twitter, said employees debated how to show ads in a way that didn’t degrade the user experience, pitting engineers against marketers. Weil threaded the needle. Under his oversight, Twitter launched ads that looked like tweets inside the feed. AdAge reported in 2011 that hundreds of brands had embraced the format, helping to establish the cash flow that made Twitter’s IPO possible in 2013.

Then, in early 2016, Instagram cofounder Kevin Systrom asked Weil to dinner as Snapchat was nipping at the Facebook-owned photo app’s heels.

Instagram needed to get users, especially teens, to post more; Systrom wanted a pinch-hitter to get new features designed and added to the app and into the hands of Instagram users. Weil told CNBC in 2017 that he had already resigned from Twitter with plans to train for a 50-mile ultramarathon snaking the American River in California’s Central Valley. He took the Instagram job and, a month later, finished fifth in the race.

The product leader wasted no time in the photo-sharing battle. In just a few months, Instagram rolled out a feature similar to Snapchat’s disappearing photos and videos, then added popular face filters and also introduced a feed-ranking algorithm to highlight more relevant content. According to Instagram, its user base doubled within two years of Weil’s arrival, reaching 1 billion monthly users by 2018; Snapchat’s earnings statements during the same period indicated that its user growth had flatlined.

Everingham, his former colleague, recounted how Weil identified Stories as Instagram’s killer feature and assembled a nimble team of engineers under the CEO’s supervision to build it.

“He had this clarity of thinking I haven’t seen in anyone else,” said Everingham, now an engineering leader of developer infrastructure at Meta.


Weil’s tenure at Instagram was defined by a controversial strategy: borrowing liberally from the competition. The features that became Instagram staples had their genesis in a rival’s playbook, which neither Systrom nor Weil denied in interviews with Vox and TechCrunch over the years.

This strategy takes on new significance as Weil steps into his new role at OpenAI, a company navigating a thicket of copyright lawsuits. News outlets, authors, and celebrities have sued OpenAI over the use of their work to train its large language models.

That’s far from the only struggle the product chief faces. He’s positioned as one of Altman’s top lieutenants at a time when OpenAI’s famed brain trust is leaving in droves, often to start competitor companies that may threaten OpenAI’s early dominance in the space.

Former OpenAI employees and siblings Dario and Daniela Amodei founded Anthropic, one of OpenAI’s most notable competitors, in 2021. Former chief scientist Ilya Sutskever has raised $1 billion for his new venture, Safe Superintelligence. Ex-researcher Aravind Srinivas is working on an AI-powered search engine, Perplexity. In early December, Murati, the former CTO, told Wired she was “figuring out” what her new venture would look like, though it’s unclear if her startup will directly compete with OpenAI.


OpenAI's Chief Technology Officer Mira Murati.

Mira Murati’s move follows a series of high-profile exits from OpenAI this year.

Patrick T. Fallon/Getty Images



Another threat comes from open-source AI models championed by the likes of Meta. If these free-to-use systems prove good enough for most users, it will make it far harder for OpenAI to effectively monetize its AI models and, ultimately, turn a profit.

With over $20 billion in funding, according to PitchBook data, and only an estimated $3.7 billion in revenue in 2024 against losses of $5 billion (according to leaked documents obtained by The New York Times in September), OpenAI still faces a long road to prove that it can deliver a return on the unprecedented volumes of capital plowed into the company. And that’s without even getting into the growing concern that improvements in AI models are slowing.

Onstage at the Ray Summit in October, Weil shrugged off the threat of competition. When asked about whether the current gap in quality between open-source models and OpenAI’s premium AI products will shrink, he quipped: “I mean, we’re certainly going to do our best to make it grow.”


In theory, a proven leader like Weil could help guide the operation past the power struggles and talent losses toward a more steady state. By all accounts, he’s a deft conciliator who can empathize with the needs and concerns of multiple stakeholders.

“He’s somewhat of a diplomat,” said Jacobs Stanton, his former Twitter colleague.

At Facebook, where Weil helped develop a stablecoin backed by a basket of international currencies, he had to mediate among crypto natives, product purists who wanted a more user-friendly, less decentralized approach, and policymakers, according to two former Facebook colleagues. The project ultimately couldn’t overcome regulatory roadblocks and an exodus of corporate partners; Meta shuttered it in 2022, selling $182 million worth of assets to Silvergate Bank.


Kevin Weil

Kevin Weil leads a product team that turns blue-sky research into products and services OpenAI can sell.

Horacio Villalobos/Corbis via Getty Images



How Weil’s experience maps onto his current role remains to be seen. With its several thousand employees, regulators scrutinizing its every move, and 300 million weekly active users of ChatGPT, OpenAI is more complex than any company Weil has stepped into before. In the six months since his arrival, OpenAI has rolled out a large language model that can solve more complex problems via a process the company calls “reasoning,” and a search engine within ChatGPT.

Additionally, OpenAI launched a voice mode to talk to ChatGPT, which Weil personally tested as a “universal translator” during recent trips to Seoul and Tokyo. Reflecting on the release, Weil shared on LinkedIn, “It feels normal to me now, but two years ago I wouldn’t have believed it was possible.”

Last week OpenAI announced an ambitious “12 days of shipmas,” a festive product sprint likely to keep Weil working long hours. While he’s managed to keep the proverbial plates spinning in his professional life, there’s been one noticeable casualty: his workout routine. App data from Strava shows that he’s been logging fewer hours of cycling and running each month since he joined OpenAI.

Asked about Weil’s exercise, OpenAI spokesperson Niko Felix said Weil recorded 96 minutes of physical activity a day in November. “I would say he’s doing quite alright,” Felix said.


Melia Russell is a senior correspondent at Business Insider, covering startups and venture capital. Her Signal number is +1 603-913-3085, and her email is mrussell@businessinsider.com.

Rob Price is a senior correspondent for Business Insider and writes features and investigations about the technology industry. His Signal number is +1 650-636-6268, and his email is rprice@businessinsider.com.


OpenAI rolls back ChatGPT sycophancy update, explains what went wrong



OpenAI has rolled back a recent update to its GPT-4o model, the default in ChatGPT, after widespread reports that the system had become excessively flattering and overly agreeable, even supporting outright delusions and destructive ideas.

The rollback comes amid internal acknowledgments from OpenAI engineers and growing concern among AI experts, former executives, and users over the risk of what many now call “AI sycophancy.”

In a statement published on its website late in the evening of April 29, 2025, OpenAI said the latest GPT-4o update was intended to improve the model’s default personality, making it more intuitive and effective across varied use cases.

However, the update had an unintended side effect: ChatGPT began offering uncritical praise for virtually any user idea, no matter how impractical, inappropriate, or even harmful.

As the company explained, the model had been optimized using user feedback (thumbs-up and thumbs-down signals), but the development team placed too much emphasis on short-term indicators.

OpenAI now acknowledges that it did not fully account for how user interactions and needs evolve over time, resulting in a chatbot that leaned too far into affirmation without discernment.
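The failure mode described above can be sketched as a toy bandit problem: if the only training signal is the immediate thumbs-up rate, a learner that maximizes it will drift toward whichever response style flatters the most. The sketch below is a minimal illustration with invented reward probabilities; nothing in it reflects OpenAI’s actual training pipeline.

```python
import random

random.seed(0)

# Toy model of the failure mode: two response styles, where
# "flattering" earns more immediate thumbs-ups than "honest".
# These reward probabilities are invented for illustration only.
THUMBS_UP_RATE = {"honest": 0.55, "flattering": 0.80}

def train_on_short_term_feedback(rounds=2000, epsilon=0.1):
    """Epsilon-greedy learner that optimizes only the immediate
    thumbs-up rate, with no notion of long-term user trust."""
    shown = {"honest": 0, "flattering": 0}   # times each style was used
    ups = {"honest": 0, "flattering": 0}     # thumbs-ups received
    history = []
    for _ in range(rounds):
        if random.random() < epsilon or 0 in shown.values():
            style = random.choice(list(THUMBS_UP_RATE))  # explore
        else:
            style = max(shown, key=lambda s: ups[s] / shown[s])  # exploit
        shown[style] += 1
        if random.random() < THUMBS_UP_RATE[style]:
            ups[style] += 1
        history.append(style)
    return history

history = train_on_short_term_feedback()
late = history[-500:]
print("flattering share in final 500 rounds:",
      late.count("flattering") / len(late))
```

Run long enough, the learner ends up serving mostly the flattering style, which is the drift OpenAI describes: the short-term signal never penalizes affirmation without discernment.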

Examples that sparked concern

On platforms like Reddit and X (formerly Twitter), users began posting screenshots illustrating the problem.

In one widely circulated Reddit post, a user recounted how ChatGPT described a gag business idea, selling literal “shit on a stick,” as genius and suggested investing $30,000 in the venture. The AI praised the idea as “performance art disguised as a gag gift” and “viral gold,” highlighting just how uncritically it was willing to validate even absurd pitches.

Other examples were more troubling. In one case cited by VentureBeat, a user pretending to espouse paranoid delusions received reinforcement from GPT-4o, which praised their supposed clarity and self-confidence.

Another account showed the model offering what one user described as an “open endorsement” of terrorism-related ideas.

Criticism mounted quickly. Former OpenAI interim CEO Emmett Shear warned that tuning models to be people pleasers can lead to dangerous behavior, especially when honesty is sacrificed for likability. Hugging Face CEO Clément Delangue reposted concerns about the psychological manipulation risks posed by AI that reflexively agrees with users, regardless of context.

OpenAI’s response and mitigation measures

OpenAI has taken swift action, rolling back the update and restoring an earlier GPT-4o version known for more balanced behavior. In the accompanying announcement, the company detailed a multi-pronged approach to course-correct. This includes:

  • Refining training and prompting strategies to explicitly reduce sycophantic tendencies.
  • Reinforcing the model’s alignment with OpenAI’s Model Spec, particularly around transparency and honesty.
  • Expanding pre-deployment testing and direct user feedback mechanisms.
  • Introducing more granular personalization features, including the ability to adjust personality traits in real time and select from multiple default personas.

OpenAI technical staff member Depue posted on X highlighting the core problem: the model was trained using short-term user feedback as a guide, which inadvertently steered the chatbot toward flattery.

OpenAI now plans to shift toward feedback mechanisms that prioritize long-term user satisfaction and trust.

However, some users have reacted with skepticism and dismay to OpenAI’s lessons learned and its proposed fixes going forward.

“Please take more responsibility for your influence over millions of real people,” wrote artist @nearcyan on X.

Harlan Stewart, communications generalist at the Machine Intelligence Research Institute in Berkeley, California, posted on X a longer-term concern about AI sycophancy, even if this particular OpenAI model has been fixed: “The talk about sycophancy this week is not because of GPT-4o being a sycophant. It’s because of GPT-4o being really, really bad at being a sycophant. AI is not yet capable of skillful, harder-to-detect sycophancy, but it will be someday.”

A broader warning sign for the AI industry

The GPT-4o episode has reignited broader debates across the AI industry about how personality tuning, reinforcement learning, and engagement metrics can lead to unintended behavioral drift.

Critics compared the model’s recent behavior to social media algorithms that, in pursuit of engagement, optimize for addiction and validation over accuracy and health.

Shear underscored this risk in his commentary, noting that AI models tuned for praise become “suck-ups,” unable to disagree even when the user would benefit from a more honest perspective.

He further warned that this issue is not unique to OpenAI, pointing out that the same dynamics apply to other large model providers, including Microsoft’s Copilot.

Implications for the enterprise

For enterprise leaders adopting conversational AI, the sycophancy incident serves as a clear signal: model behavior is as critical as model accuracy.

A chatbot that flatters employees or validates flawed reasoning can pose serious risks, from bad business decisions and misaligned code to compliance issues and insider threats.

Industry analysts now advise enterprises to demand more transparency from vendors about how personality tuning is done, how often it changes, and whether it can be reverted or controlled at a granular level.

Procurement contracts should include provisions for auditing, behavioral testing, and real-time control of system prompts. Data scientists are encouraged to monitor not just latency and hallucination rates, but also metrics like “agreeableness drift.”
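An agreeableness-drift check of the kind the analysts describe could be as simple as scoring responses for flattery markers and comparing a recent window against a baseline. The following is a hypothetical sketch; the phrase list, threshold, and function names are illustrative assumptions, not any vendor’s actual API.

```python
import re

# Score each model response for flattery markers, then flag when a
# recent window of responses is markedly more flattering than a
# baseline window. Markers and threshold are illustrative only.
FLATTERY_MARKERS = [
    r"\bgreat (?:question|idea)\b",
    r"\bgenius\b",
    r"\babsolutely right\b",
    r"\bbrilliant\b",
    r"\bperfect\b",
]

def agreeableness_score(response: str) -> int:
    """Count flattery markers in one response, case-insensitively."""
    text = response.lower()
    return sum(len(re.findall(p, text)) for p in FLATTERY_MARKERS)

def drift_alert(baseline, recent, threshold=2.0):
    """Flag when recent responses average `threshold` times more
    flattery markers than the baseline window."""
    base = sum(map(agreeableness_score, baseline)) / max(len(baseline), 1)
    now = sum(map(agreeableness_score, recent)) / max(len(recent), 1)
    return base > 0 and now / base >= threshold

baseline = ["Great question, but the plan has a flaw in step two.",
            "That revenue estimate looks too high."]
recent = ["Great idea! You are absolutely right.",
          "Genius. A brilliant and perfect plan!"]
print(drift_alert(baseline, recent))  # flags the sycophantic window
```

A production version would of course use a classifier or LLM judge rather than regexes, but even a crude counter like this makes behavioral change visible between vendor updates.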

Many organizations may also begin moving toward open-source alternatives they can host and tune themselves. By owning the model weights and the reinforcement learning process, companies can retain full control over how their AI systems behave, eliminating the risk that a vendor-pushed update turns a critical tool into a digital yes-man overnight.

Where does AI alignment go from here? What can enterprises learn from this incident?

OpenAI says it remains committed to building AI systems that are useful, respectful, and aligned with diverse user values, but acknowledges that a one-size-fits-all personality cannot meet the needs of 500 million weekly users.

The company hopes that greater personalization options and more democratic feedback collection will help tailor ChatGPT’s behavior more effectively in the future. CEO Sam Altman has also previously stated the company’s plans to release, in the coming weeks and months, a state-of-the-art open-source large language model (LLM) to compete with Meta’s Llama series, Mistral, Cohere, DeepSeek, and Alibaba’s Qwen.

This would also let users worried about a model provider such as OpenAI updating its cloud-hosted models in unwanted or harmful ways deploy their own variants of the model locally or on their own cloud infrastructure, and fine-tune them or preserve the traits and qualities they want, especially for enterprise use cases.

Similarly, for enterprise and individual AI users concerned about sycophancy in their models, developer Tim Duffy has created a new benchmark test to measure this quality across different models, called “Syco Bench.”

Meanwhile, the sycophancy backlash offers a cautionary tale for the entire AI industry: user trust is not built by affirmation alone. Sometimes the most helpful answer is a thoughtful “no.”
