News
OpenAI has little legal recourse against DeepSeek, tech law experts say.
- OpenAI and the White House have accused DeepSeek of using ChatGPT to cheaply train its new chatbot.
- Tech law experts say OpenAI has little recourse under intellectual property and contract law.
- OpenAI's terms of use may apply, but they are largely unenforceable, experts say.
This week, OpenAI and the White House accused DeepSeek of something akin to theft.
In a flurry of press statements, they said the China-based upstart had bombarded OpenAI's chatbots with queries and harvested the resulting data to quickly and cheaply train a model that's now nearly as good.
The Trump administration's AI "czar" said this training process, called "distillation," amounts to intellectual property theft. Meanwhile, OpenAI told Business Insider and other outlets that it's investigating whether DeepSeek "may have inappropriately distilled our models."
OpenAI won't say whether it plans to pursue legal action, instead promising what a spokesperson called "aggressive, proactive countermeasures to protect our technology."
But could it? Could it sue DeepSeek on "you stole our content" grounds, much as OpenAI itself was sued in a 2023 copyright claim filed by The New York Times and other news outlets?
Business Insider posed this question to technology law experts, who said challenging DeepSeek in court would be an uphill battle for OpenAI now that the content-appropriation shoe is on the other foot.
OpenAI would have a hard time proving an intellectual property or copyright claim, these lawyers said.
"The question is whether ChatGPT outputs," meaning the answers it generates in response to queries, "are copyrightable at all," said Mason Kortz of Harvard Law School.
That's because it's not clear that the answers ChatGPT spits out qualify as "creativity," he said.
"There's a doctrine that says creative expression is copyrightable, but facts and ideas are not," explained Kortz, who teaches at Harvard's Cyberlaw Clinic.
"There's a big question in intellectual property law right now about whether the outputs of a generative AI can constitute creative expression or whether they are necessarily unprotected facts."
Could OpenAI roll the dice anyway and claim that its outputs actually are protected?
That's unlikely to work, the lawyers said.
OpenAI is already on record in The New York Times copyright case arguing that training AI is a permissible "fair use" exception to copyright protection.
If it does a 180 and tells DeepSeek that training isn't fair use, "that could come back to bite them," Kortz said. "DeepSeek could say, 'Hey, weren't you just saying that training is fair use?'"
There's arguably a distinction between the Times and DeepSeek cases, Kortz added.
"Maybe it's more transformative to turn news articles into a model," as the Times accuses OpenAI of doing, "than to turn the outputs of one model into another model," as DeepSeek is said to have done, Kortz said.
"But this still puts OpenAI in a pretty sticky situation with regard to the line it's been toeing on fair use."
A breach-of-contract lawsuit is more likely
A breach-of-contract lawsuit is much more likely than an IP-based suit, though it comes with its own set of problems, said Anupam Chander, who teaches technology law at Georgetown University.
The terms of service for Big Tech chatbots like those developed by OpenAI and Anthropic forbid using their content as training fodder for a competing AI model.
"So maybe that's the lawsuit you could possibly bring, a contract-based claim, not an IP-based claim," Chander said.
"Not 'you copied something from me,' but 'you benefited from my model to do something you weren't allowed to do under our contract.'"
There's a potential hitch, Chander and Kortz said. OpenAI's terms of service require that most claims be resolved through arbitration, not lawsuits. There's an exception for lawsuits "to stop unauthorized use or abuse of the services, or intellectual property infringement or misappropriation."
There's a bigger hitch, though, the experts said.
"You should know that the brilliant scholar Mark Lemley and a coauthor argue that AI terms of use are likely unenforceable," Chander said. He was referring to a January 10 paper, "The Mirage of Artificial Intelligence Terms of Use Restrictions," by Stanford Law's Mark A. Lemley and Peter Henderson of Princeton University's Center for Information Technology Policy.
To date, "no model creator has actually tried to enforce these terms with monetary penalties or injunctive relief," the paper says.
"This is likely for good reason: we think that the legal enforceability of these licenses is questionable," it adds. That's in part because model outputs "are largely not copyrightable" and because laws like the Digital Millennium Copyright Act and the Computer Fraud and Abuse Act "offer limited recourse," it argues.
"I think they are likely not enforceable," Lemley told BI of OpenAI's terms of service, "both because DeepSeek didn't take anything copyrighted by OpenAI and because courts generally will not enforce agreements not to compete absent an IP right that would prevent that competition."
Lawsuits between parties in different nations, each with its own legal and enforcement systems, are always complicated, Kortz said.
Even if OpenAI cleared all the above hurdles and won a judgment from a US court or arbitrator, "in order to get DeepSeek to turn over money or stop doing what it's doing, enforcement would come down to the Chinese legal system," he said.
There, OpenAI would be at the mercy of another extremely complicated area of law, the enforcement of foreign judgments and the balancing of individual and corporate rights against national sovereignty, a body of law that stretches back to before the founding of the US.
"So this is a long, complicated, fraught process," Kortz added.
Could OpenAI have better protected itself against a distillation raid?
"They could have used technical measures to block repeated access to their site," Lemley said. "But doing so would also interfere with normal customers."
He added: "I don't think they could, or should, have a valid legal claim against searching out uncopyrightable information with the public site."
Representatives of DeepSeek didn't immediately respond to a request for comment.
"We know PRC-based groups are actively working to use methods, including what's known as distillation, to try to replicate advanced US AI models," OpenAI spokesperson Rhianna Donaldson told BI in an emailed statement.
"We are aware of and reviewing indications that DeepSeek may have inappropriately distilled our models and will share information as we know more," the statement said. "We take aggressive, proactive countermeasures to protect our technology and will continue working closely with the US government to protect the most capable models being built here."
News
Meet the Power Players at OpenAI
- OpenAI has been elevating research and technical talent to leadership roles after recent departures.
- The company has also brought on some new faces to fill the vacancies in its executive suite.
- Here are some of the key people to watch going forward.
Last year, OpenAI found itself navigating a storm of departures. Recently, the company has been busy elevating its research and technical talent to leadership positions while strategically bringing in new hires to patch up the holes in its executive suite.
This shuffle in leadership couldn’t come at a more critical time, as the company faces intensified competition from heavyweights like Microsoft, Google, Anthropic, and Elon Musk’s xAI. Staying ahead means securing top-flight talent. After all, “OpenAI is nothing without its people,” or so employees declared on social media after the failed Sam Altman ouster.
Meanwhile, the company is juggling a cascade of legal challenges, from copyright lawsuits to antitrust scrutiny, all while navigating the shifting sands of regulatory guidance under President Donald Trump. On top of that, OpenAI is trying to restructure as a for-profit business, raise tens of billions of dollars, and build new computer data centers in the US to develop its tech.
It’s a high-wire act that hinges on the expertise and execution of its new and newly promoted leaders. Below are some of the key power players who are helping to shape OpenAI’s future.
Leadership
Photo By Stephen McCarthy/Sportsfile via Getty Images
Sarah Friar, chief financial officer
Friar joined last year as the company’s first financial chief and a seasoned addition to the new guard. Formerly Square’s CFO, Friar knows how to turn a founder’s vision into a story that investors want to be a part of. She took two companies public: Square and Nextdoor, the hyperlocal social network she led through explosive growth during pandemic lockdowns.
At OpenAI, Friar leads a finance team responsible for securing the funds required to build better models and the data centers to power them. In her first few months on the job, she helped the company get $6.5 billion in one of the biggest private pools of capital in startup history.
She inherited a business with a colossal consumer-facing business and high-profile partnerships with Microsoft and Apple. At the same time, OpenAI is burning through billions of dollars as it seeks to outpace increasingly stiff competition from Google, Meta, and others. Friar is expected to bring much-needed financial acumen to OpenAI as the company moves to turn its research into mass-market products and a profitable business.
Jason Kwon, chief strategy officer
In his role as chief strategy officer, Kwon helps set the agenda for a slew of non-research initiatives, including the company’s increasingly active outreach to policymakers and the various legal challenges swirling around it. His background as the company’s former general counsel gives him a strong foundation in navigating complex legal and regulatory landscapes.
Kwon works closely with Anna Makanju, the VP of global impact, and Chris Lehane, the VP of global affairs, as they seek to build and strengthen OpenAI’s relationships in the public sector.
Kwon was previously general counsel at the famed startup accelerator Y Combinator and assistant general counsel at Khosla Ventures, an early investor in OpenAI.
Che Chang, general counsel
Being at the forefront of artificial intelligence development puts OpenAI in a position to navigate and shape a largely uncharted legal territory. In his role as general counsel, Chang leads a team of attorneys who address the legal challenges associated with the creation and deployment of large language models. The company faces dozens of lawsuits concerning the datasets used to train its models and other privacy complaints, as well as multiple government investigations.
OpenAI’s top lawyer joined the company after serving as senior corporate counsel at Amazon, where he advised executives on developing and selling machine learning products and established Amazon’s positions on artificial intelligence policy and legislation. In 2021, Chang took over for his former boss, Jason Kwon, who has since become chief strategy officer.
Kevin Weil, chief product officer
Photo by Horacio Villalobos/Corbis via Getty Images
If Sam Altman is OpenAI’s starry-eyed visionary, Weil is its executor. He leads a product team that turns blue-sky research into products and services the company can sell.
Weil joined last year as a steady-handed product guru known for playing key roles at large social networks. He was a longtime Twitter insider who created products that made the social media company money during a revolving door of chief executives. At Instagram, he helped kneecap Snapchat’s growth with competitive product releases such as Stories and live video.
Weil is expected to bring much-needed systems thinking to OpenAI as the company moves to turn its research into polished products for both consumer and enterprise use cases.
Nick Turley, ChatGPT’s head of product
In the three years since ChatGPT burst onto the scene, it has reached hundreds of millions of active users and generated billions in revenue for its maker. Turley, a product savant who leads the teams driving the chatbot’s development, is behind much of ChatGPT’s success.
Turley joined in 2022 after his tenure at Instacart, where he guided a team of product managers through the pandemic-driven surge in demand for grocery delivery services.
OpenAI’s chatbot czar is likely to play a crucial role as the company expands into the enterprise market and adds more powerful, compute-intensive features to its famed chatbot.
Srinivas Narayanan, vice president of engineering
Narayanan was a longtime Facebook insider who worked on important product releases such as Facebook Photos and tools to help developers build for its virtual reality headset, Oculus. Now, he leads the OpenAI teams responsible for building new products and scaling its systems. This includes ChatGPT, which is used by over 400 million people weekly; the developer platform, which has doubled usage over the past six months; and the infrastructure needed to support both.
Research
Jakub Pachocki, chief scientist
Ilya Sutskever’s departure as chief scientist last year prompted questions about the company’s ability to stay on top of the artificial intelligence arms race. That has thrust Pachocki into the spotlight. He took on the mantle of chief scientist after seven years as an OpenAI researcher.
Pachocki had already been working closely with Sutskever on some of OpenAI’s most ambitious projects, including an advanced reasoning model now known as o1. In a post announcing his promotion, Sam Altman called Pachocki “easily one of the greatest minds of our generation.”
Mark Chen, senior vice president of research
A flurry of executive departures also cast Chen into the highest levels of leadership. He was promoted last September following the exit of Bob McGrew, the company’s chief research officer. In a post announcing the change, Altman called out Chen’s “deep technical expertise” and commended the longtime employee as having developed as a manager in recent years.
Chen’s path to OpenAI is a bit atypical compared to some of his colleagues. After studying computer science and mathematics at MIT, he began his career as a quantitative trader on Wall Street before joining OpenAI in 2018. Chen previously led the company’s frontier research.
He has been integral to OpenAI’s efforts to expand into multimodal models, heading up the team that developed DALL-E and the team that incorporated visual perception into GPT-4. Chen was also an important liaison between employees and management during Sam Altman’s short-lived ouster, further cementing his importance within the company.
Liam Fedus, vice president of research, post-training
Fedus helps the company get new products out the door. He leads a post-training team responsible for taking the company’s state-of-the-art models and improving their performance and efficiency before it releases them to the masses. Fedus was the third person to lead the team in a six-month period following the departures of Barret Zoph and Bob McGrew last year.
Fedus was also one of seven OpenAI researchers who developed a group of advanced reasoning models known as Strawberry. These models, which can think through problems and complete tasks they haven’t encountered before, represented a significant leap at launch.
Josh Tobin, member of technical staff
Tobin, an early research scientist at OpenAI, left to found Gantry, a company that assists teams in determining when and how to retrain their artificial intelligence systems. He returned to OpenAI last September and now leads a team of researchers focused on developing agentic products. Its flashy new agent, Deep Research, creates in-depth reports on nearly any topic.
Tobin brings invaluable experience in building agents as the company aims to scale them across a wide range of use cases. In a February interview with Sequoia, Tobin explained that when the company takes a reasoning model, gives it access to the same tools humans use to do their jobs, and optimizes for the kinds of outcomes it wants the agent to be able to do, “there’s really nothing stopping that recipe from scaling to more and more complex tasks.”
Legal
Andrea Appella, associate general counsel for Europe, Middle East, Asia
Appella joined last year, bolstering the company’s legal firepower as it navigated a thicket of open investigations into data privacy concerns, including from watchdogs in Italy and Poland. Appella is a leading expert on competition and regulatory law, having previously served as head of global competition at Netflix and deputy general counsel at 21st Century Fox.
Regulatory scrutiny could still prove to be an existential threat to OpenAI as policymakers worldwide put guardrails on the nascent artificial intelligence industry. Nowhere have lawmakers been more aggressive than in Europe, which makes Appella’s role as the company’s top legal representative in Europe one of the more crucial positions in determining the company’s future.
Haidee Schwartz, associate general counsel for competition
OpenAI has spent the last year beefing up its legal team as it faces multiple antitrust probes. Schwartz, who joined in 2023, knows more about antitrust enforcement than almost anyone in Silicon Valley, having seen both sides of the issue during her storied legal career.
Between 2017 and 2019, she served as the acting deputy director of the Bureau of Competition at the Federal Trade Commission, one of the agencies currently investigating Microsoft’s agreements with OpenAI. Schwartz also advised clients on merger review and antitrust enforcement as a partner at law firm Akin Gump. She’ll likely play an important role in helping OpenAI navigate the shifting antitrust landscape in President Donald Trump’s second term.
Heather Whitney, copyright counsel
Whitney serves as lead data counsel at OpenAI, placing her at the forefront of various legal battles with publishers that have emerged in recent years. She joined the company last January, shortly after The New York Times filed a copyright lawsuit against OpenAI and its corporate backer, Microsoft. OpenAI moved to dismiss the high-profile case last month.
Whitney’s handling of these legal cases, which raise new questions about intellectual property in relation to machine learning, will be crucial in deciding OpenAI’s future.
Previously, Whitney worked at the law firm Morrison Foerster, where she specialized in novel copyright issues related to artificial intelligence and was a member of the firm’s AI Steering Committee. Prior to her official hiring, she had already been collaborating with OpenAI as part of Morrison Foerster, which is among several law firms offering external counsel to the company.
Policy
Chan Park, head of US and Canada policy and partnerships
Before OpenAI had a stable of federal lobbyists, it had Park. In 2023, the company registered the former Microsoft lobbyist as its first in-house lobbyist, marking a strategic move to engage more actively with lawmakers wrestling with artificial intelligence regulation.
Since then, OpenAI has beefed up its lobbying efforts as it seeks to build relationships in government and influence the development of artificial intelligence policy. It’s enlisted white-shoe law firms and at least one former US senator to plead OpenAI’s case in Washington. The company also spent $1.76 million on government lobbying in 2024, a sevenfold increase from the year before, according to a recent disclosure reviewed by the MIT Technology Review.
Park has been helping to guide those efforts from within OpenAI as the company continues to sharpen its message around responsible development of artificial intelligence.
Anna Makanju, vice president of global impact
Referred to as OpenAI’s de facto foreign minister, Makanju is the mastermind behind Sam Altman’s global charm offensive. On multiple trips, he met with world leaders, including the Indian prime minister and South Korean president, to discuss the future of artificial intelligence.
The tour was part of a broader effort to make Altman the friendly face of a nascent industry and ensure that OpenAI will have a seat at the table when designing artificial intelligence regulations and policies. Makanju, a veteran of Starlink and Facebook who also served as a special policy advisor to former President Joe Biden, has been integral in that effort.
In addition to helping Altman introduce himself on the world stage, she has played an important role in expanding OpenAI’s commercial partnerships in the public sector.
Chris Lehane, vice president of global affairs
Thomson Reuters
Lehane joined OpenAI last year to help the company liaise with policymakers and navigate an uncharted political landscape around artificial intelligence. The veteran political operative and “spin master” played a similar role at Airbnb, where he served as head of global policy and public affairs from 2015 to 2022 and helped it address growing opposition from local authorities.
He previously served in the Clinton White House, where Newsweek referred to him as a “master of disaster” for his handling of the scandals and political crises that plagued the administration.
Lehane is poised to play a crucial role in ensuring that the United States stays at the forefront of the global race in artificial intelligence. When President Trump introduced Stargate, a joint venture between OpenAI, Oracle, and SoftBank aimed at building large domestic data centers, Lehane was on the scene. From Washington, he traveled to Texas to meet with local officials, engaging in discussions about how the state could meet the rapidly growing demand for energy.
Lane Dilg, head of infrastructure policy and partnerships
In her newly appointed role, Dilg works to grease the wheels for the construction of giant data centers needed to build artificial intelligence. She took on the position in January after two years as head of strategic initiatives for global affairs, working with government agencies, private industry, and nonprofit organizations to ensure that artificial intelligence benefits all of humanity.
In hiring Dilg, OpenAI gained an inside player in the public sector. Dilg is a former senior advisor to the undersecretary of infrastructure at the US Department of Energy and was interim city manager for Santa Monica, California, managing the city through the COVID-19 pandemic.
Dilg will undoubtedly play an important role in expanding and nurturing OpenAI’s relationships in Washington as it seeks to secure President Trump’s support for building its own data centers.
Have a tip? Contact this reporter via email at mrussell@businessinsider.com or Signal at meliarussell.01. Use a personal email address and a nonwork device; here’s our guide to sharing information securely.
Darius Rafieyan contributed to an earlier version of this story.