A Canadian artificial intelligence (AI) researcher who has lived in the United States for 12 years and worked on ChatGPT has been denied a green card, according to employees of parent company OpenAI in a series of posts on X, formerly Twitter.
Newsweek contacted U.S. Citizenship and Immigration Services (USCIS) by email outside normal business hours on Saturday morning for comment.
Why It Matters
President Donald Trump has pledged to carry out the largest immigration crackdown in the country's history, launching mass deportations that remain mired in legal limbo amid challenges from several states and legal authorities.
However, Elon Musk and Vivek Ramaswamy, both initially tapped by Trump to lead the Department of Government Efficiency (DOGE), championed a focus on expanding programs such as the H-1B visa, a temporary, nonimmigrant visa that allows U.S. employers to hire foreign workers in specialty occupations, in order to increase the number of high-skilled immigrants.
What To Know
Noam Brown, an OpenAI researcher, wrote on X on Friday morning that he was "deeply concerned" about the immigration status of Kai Chen, a Canadian citizen who has lived and worked in the United States for 12 years and was forced to leave after her green card application was denied.
"It's deeply concerning that one of the best AI researchers I've worked with, [Kai Chen], was denied a U.S. green card today," Brown wrote, adding: "We're risking America's AI leadership when we turn away talent like this."
Dylan Hunn, another OpenAI employee, echoed Brown's sentiment just hours later, saying Chen was "incredibly important to OpenAI," having been "crucial for GPT-4.5."
"Our immigration system has gone *insane* to kick her out," Hunn wrote. "America needs her!"
Brown later wrote on X that Chen planned to work remotely from an Airbnb in Vancouver and go "full monk mode" to keep up with her projects while the immigration issue was resolved. Chen tried to meet the moment with optimism, writing in reply to Brown that she would be in Vancouver "for an indeterminate amount of time" and was "excited to meet new people."
"Hopefully I'll be back home sometime this year, but if not I'll make the best of it," Chen wrote, later adding in a separate post that OpenAI has been "incredibly supportive during this kerfuffle."
Brown provided an update shortly before midnight, saying it appeared there "may have been paperwork issues with the initial green card filing" made two years earlier.
"It's a shame this means [Chen] has to leave the U.S. for a while, but there's reason for optimism that this will get resolved," Brown wrote on X.
Chen further clarified the situation, saying she had applied for the green card three years ago, before her time at OpenAI.
"It really sucks being denied after waiting so long and not being able to go back home, but overall I feel very lucky to be where I am," she wrote.
A person displays the ChatGPT logo on a smartphone screen, with the OpenAI logo in the background, on December 29, 2024, in Chongqing, China. Cheng Xin/Getty Images
What Protections Do Green Card Holders Have?
USCIS says a green card holder has the right to live permanently in the U.S. as long as they do not commit any act that would make them "removable under immigration law." This includes breaking the law and failing to file taxes.
A green card holder is protected by all U.S. laws, including those at the state and local levels, and can apply for jobs more freely than those who may be in the U.S. on employment-based visas.
Traveling is also much easier with a green card than with other temporary visas, but holders must make sure they are not out of the country for more than six months at a time.
"There's a reason why somebody would want a green card rather than being here on a temporary visa, because it's lawful permanent residence; it gives you the ability to live and work permanently in the United States. But that said, it's not citizenship," Eliss Taub, a partner at the immigration law firm Siskind, told Newsweek.
Green card holders must renew their cards every 10 years and can apply for citizenship after three years if they are married to a U.S. citizen, or after five years if not.
What People Are Saying
An OpenAI spokesperson told Newsweek in an emailed response to a request for comment: "This application was filed some time before our employee joined OpenAI and we were not involved in the case. However, our initial review, based on the information provided to us, shows there may be some paperwork issues with the filing. We continue to work closely with our employee on their situation."
Noam Brown, an OpenAI employee, wrote on X on Saturday: "I've been in AI since 2012, and I've seen enough visa horror stories since then to know that America's high-skilled immigration brokenness is persistent. It's particularly painful to see that brokenness slow down my teammate for 2+ months when AI progress is week by week."
OpenAI CEO Sam Altman wrote on X in 2023: "One of the easiest policy wins I can imagine for the U.S. is to reform high-skill immigration. The fact that many of the world's most talented people want to be here is a hard-won gift; embracing them is key to keeping it that way. It's hard to get this back if we lose it."
Shaun Ralston, an independent contractor providing support for OpenAI's API customers, wrote on X on Friday: "…@OpenAI filed more than 80 H-1Bs last year alone. How many more brilliant minds will the Trump administration drive away to other countries? Hey, MAGA, fix the talent pipeline or stop talking about AI leadership."
Matt Tegarden, CEO of the Kansas Livestock Association, told Newsweek earlier this month: "Companies are making sure their employment document files are in order. They are also confirming their rights and responsibilities in this area, as well as helping their employees understand their rights."
What Happens Next
Chen's green card application will take time to resolve, but the root issue appears to have been identified, making it more likely that she will be able to return to the United States sooner rather than later.
Update, 4/26/25 at 4:52 PM ET: This article has been updated to include a statement from OpenAI.
If you search "ChatGPT" in your browser, you are likely to stumble upon websites that appear to be powered by OpenAI but are not. One such site, chat.chatbotapp.ai, offers access to "GPT-3.5" for free and uses familiar branding.
But here's the thing: it is not run by OpenAI. And frankly, why use a potentially fake GPT-3.5 when you can use GPT-4o for free on the actual ChatGPT site?
In the summer of 2023, Ilya Sutskever, a co-founder and the chief scientist of OpenAI, was meeting with a group of new researchers at the company. By all traditional metrics, Sutskever should have felt invincible: He was the brain behind the large language models that helped build ChatGPT, then the fastest-growing app in history; his company’s valuation had skyrocketed; and OpenAI was the unrivaled leader of the industry believed to power the future of Silicon Valley. But the chief scientist seemed to be at war with himself.
Sutskever had long believed that artificial general intelligence, or AGI, was inevitable—now, as things accelerated in the generative-AI industry, he believed AGI’s arrival was imminent, according to Geoff Hinton, an AI pioneer who was his Ph.D. adviser and mentor, and another person familiar with Sutskever’s thinking. (Many of the sources in this piece requested anonymity in order to speak freely about OpenAI without fear of reprisal.) To people around him, Sutskever seemed consumed by thoughts of this impending civilizational transformation. What would the world look like when a supreme AGI emerged and surpassed humanity? And what responsibility did OpenAI have to ensure an end state of extraordinary prosperity, not extraordinary suffering?
By then, Sutskever, who had previously dedicated most of his time to advancing AI capabilities, had started to focus half of his time on AI safety. He appeared to people around him as both boomer and doomer: more excited and afraid than ever before of what was to come. That day, during the meeting with the new researchers, he laid out a plan.
“Once we all get into the bunker—” he began, according to a researcher who was present.
“I’m sorry,” the researcher interrupted, “the bunker?”
“We’re definitely going to build a bunker before we release AGI,” Sutskever replied. Such a powerful technology would surely become an object of intense desire for governments globally. The core scientists working on the technology would need to be protected. “Of course,” he added, “it’s going to be optional whether you want to get into the bunker.”
Two other sources I spoke with confirmed that Sutskever commonly mentioned such a bunker. “There is a group of people—Ilya being one of them—who believe that building AGI will bring about a rapture,” the researcher told me. “Literally, a rapture.” (Sutskever declined to comment.)
Sutskever’s fears about an all-powerful AI may seem extreme, but they are not altogether uncommon, nor were they particularly out of step with OpenAI’s general posture at the time. In May 2023, the company’s CEO, Sam Altman, co-signed an open letter describing the technology as a potential extinction risk—a narrative that has arguably helped OpenAI center itself and steer regulatory conversations. Yet the concerns about a coming apocalypse would also have to be balanced against OpenAI’s growing business: ChatGPT was a hit, and Altman wanted more.
When OpenAI was founded, the idea was to develop AGI for the benefit of humanity. To that end, the co-founders—who included Altman and Elon Musk—set the organization up as a nonprofit and pledged to share research with other institutions. Democratic participation in the technology’s development was a key principle, they agreed, hence the company’s name. But by the time I started covering the company in 2019, these ideals were eroding. OpenAI’s executives had realized that the path they wanted to take would demand extraordinary amounts of money. Both Musk and Altman tried to take over as CEO. Altman won out. Musk left the organization in early 2018 and took his money with him. To plug the hole, Altman reformulated OpenAI’s legal structure, creating a new “capped-profit” arm within the nonprofit to raise more capital.
Since then, I’ve tracked OpenAI’s evolution through interviews with more than 90 current and former employees, including executives and contractors. The company declined my repeated interview requests and questions over the course of working on my book about it, which this story is adapted from; it did not reply when I reached out one more time before the article was published. (OpenAI also has a corporate partnership with The Atlantic.)
OpenAI’s dueling cultures—the ambition to safely develop AGI, and the desire to grow a massive user base through new product launches—would explode toward the end of 2023. Gravely concerned about the direction Altman was taking the company, Sutskever would approach his fellow board of directors, along with his colleague Mira Murati, then OpenAI’s chief technology officer; the board would subsequently conclude the need to push the CEO out. What happened next—with Altman’s ouster and then reinstatement—rocked the tech industry. Yet since then, OpenAI and Sam Altman have become more central to world affairs. Last week, the company unveiled an “OpenAI for Countries” initiative that would allow OpenAI to play a key role in developing AI infrastructure outside of the United States. And Altman has become an ally to the Trump administration, appearing, for example, at an event with Saudi officials this week and onstage with the president in January to announce a $500 billion AI-computing-infrastructure project.
Altman’s brief ouster—and his ability to return and consolidate power—is now crucial history to understand the company’s position at this pivotal moment for the future of AI development. Details have been missing from previous reporting on this incident, including information that sheds light on Sutskever and Murati’s thinking and the response from the rank and file. Here, they are presented for the first time, according to accounts from more than a dozen people who were either directly involved or close to the people directly involved, as well as their contemporaneous notes, plus screenshots of Slack messages, emails, audio recordings, and other corroborating evidence.
Before ChatGPT, sources told me, Altman seemed generally energized. Now he often appeared exhausted. Propelled into megastardom, he was dealing with intensified scrutiny and an overwhelming travel schedule. Meanwhile, Google, Meta, Anthropic, Perplexity, and many others were all developing their own generative-AI products to compete with OpenAI’s chatbot.
Many of Altman’s closest executives had long observed a particular pattern in his behavior: If two teams disagreed, he often agreed in private with each of their perspectives, which created confusion and bred mistrust among colleagues. Now Altman was also frequently bad-mouthing staffers behind their backs while pushing them to deploy products faster and faster. Team leads mirroring his behavior began to pit staff against one another. Sources told me that Greg Brockman, another of OpenAI’s co-founders and its president, added to the problems when he popped into projects and derailed long-standing plans with last-minute changes.
The environment within OpenAI was changing. Previously, Sutskever had tried to unite workers behind a common cause. Among employees, he had been known as a deep thinker and even something of a mystic, regularly speaking in spiritual terms. He wore shirts with animals on them to the office and painted them as well—a cuddly cat, cuddly alpacas, a cuddly fire-breathing dragon. One of his amateur paintings hung in the office, a trio of flowers blossoming in the shape of OpenAI’s logo, a symbol of what he always urged employees to build: “A plurality of humanity-loving AGIs.”
But by the middle of 2023—around the time he began speaking more regularly about the idea of a bunker—Sutskever was no longer just preoccupied by the possible cataclysmic shifts of AGI and superintelligence, according to sources familiar with his thinking. He was consumed by another anxiety: the erosion of his faith that OpenAI could even keep up its technical advancements to reach AGI, or bear that responsibility with Altman as its leader. Sutskever felt Altman’s pattern of behavior was undermining the two pillars of OpenAI’s mission, the sources said: It was slowing down research progress and eroding any chance at making sound AI-safety decisions.
Meanwhile, Murati was trying to manage the mess. She had always played translator and bridge to Altman. If he had adjustments to the company’s strategic direction, she was the implementer. If a team needed to push back against his decisions, she was their champion. When people grew frustrated with their inability to get a straight answer out of Altman, they sought her help. “She was the one getting stuff done,” a former colleague of hers told me. (Murati declined to comment.)
During the development of GPT‑4, Altman and Brockman’s dynamic had nearly led key people to quit, sources told me. Altman was also seemingly trying to circumvent safety processes for expediency. At one point, sources close to the situation said, he had told Murati that OpenAI’s legal team had cleared the latest model, GPT-4 Turbo, to skip review by the company’s Deployment Safety Board, or DSB—a committee of Microsoft and OpenAI representatives who evaluated whether OpenAI’s most powerful models were ready for release. But when Murati checked in with Jason Kwon, who oversaw the legal team, Kwon had no idea how Altman had gotten that impression.
In the summer, Murati attempted to give Altman detailed feedback on these issues, according to multiple sources. It didn’t work. The CEO iced her out, and it took weeks to thaw the relationship.
By fall, Sutskever and Murati both drew the same conclusion. They separately approached the three board members who were not OpenAI employees—Helen Toner, a director at Georgetown University’s Center for Security and Emerging Technology; the roboticist Tasha McCauley; and one of Quora’s co-founders and its CEO, Adam D’Angelo—and raised concerns about Altman’s leadership. “I don’t think Sam is the guy who should have the finger on the button for AGI,” Sutskever said in one such meeting, according to notes I reviewed. “I don’t feel comfortable about Sam leading us to AGI,” Murati said in another, according to sources familiar with the conversation.
That Sutskever and Murati both felt this way had a huge effect on Toner, McCauley, and D’Angelo. For close to a year, they, too, had been processing their own grave concerns about Altman, according to sources familiar with their thinking. Among their many doubts, the three directors had discovered through a series of chance encounters that he had not been forthcoming with them about a range of issues, from a breach in the DSB’s protocols to the legal structure of OpenAI Startup Fund, a dealmaking vehicle that was meant to be under the company but that instead Altman owned himself.
If two of Altman’s most senior deputies were sounding the alarm on his leadership, the board had a serious problem. Sutskever and Murati were not the first to raise these kinds of issues, either. In total, the three directors had heard similar feedback over the years from at least five other people within one to two levels of Altman, the sources said. By the end of October, Toner, McCauley, and D’Angelo began to meet nearly daily on video calls, agreeing that Sutskever’s and Murati’s feedback about Altman, and Sutskever’s suggestion to fire him, warranted serious deliberation.
As they did so, Sutskever sent them long dossiers of documents and screenshots that he and Murati had gathered in tandem with examples of Altman’s behaviors. The screenshots showed at least two more senior leaders noting Altman’s tendency to skirt around or ignore processes, whether they’d been instituted for AI-safety reasons or to smooth company operations. This included, the directors learned, Altman’s apparent attempt to skip DSB review for GPT-4 Turbo.
By Saturday, November 11, the independent directors had made their decision. As Sutskever suggested, they would remove Altman and install Murati as interim CEO. On November 17, 2023, at about noon Pacific time, Sutskever fired Altman on a Google Meet with the three independent board members. Sutskever then told Brockman on another Google Meet that Brockman would no longer be on the board but would retain his role at the company. A public announcement went out immediately.
For a brief moment, OpenAI’s future was an open question. It might have taken a path away from aggressive commercialization and Altman. But this is not what happened.
After what had seemed like a few hours of calm and stability, including Murati having a productive conversation with Microsoft—at the time OpenAI’s largest financial backer—she had suddenly called the board members with a new problem. Altman and Brockman were telling everyone that Altman’s removal had been a coup by Sutskever, she said.
It hadn’t helped that, during a company all-hands to address employee questions, Sutskever had been completely ineffectual with his communication.
“Was there a specific incident that led to this?” Murati had read aloud from a list of employee questions, according to a recording I obtained of the meeting.
“Many of the questions in the document will be about the details,” Sutskever responded. “What, when, how, who, exactly. I wish I could go into the details. But I can’t.”
“Are we worried about the hostile takeover via coercive influence of the existing board members?” Sutskever read from another employee later.
“Hostile takeover?” Sutskever repeated, a new edge in his voice. “The OpenAI nonprofit board has acted entirely in accordance to its objective. It is not a hostile takeover. Not at all. I disagree with this question.”
Shortly thereafter, the remaining board, including Sutskever, confronted enraged leadership over a video call. Kwon, the chief strategy officer, and Anna Makanju, the vice president of global affairs, were leading the charge in rejecting the board’s characterization of Altman’s behavior as “not consistently candid,” according to sources present at the meeting. They demanded evidence to support the board’s decision, which the members felt they couldn’t provide without outing Murati, according to sources familiar with their thinking.
In rapid succession that day, Brockman quit in protest, followed by three other senior researchers. Through the evening, employees only got angrier, fueled by compounding problems: among them, a lack of clarity from the board about their reasons for firing Altman; a potential loss of a tender offer, which had given some the option to sell what could amount to millions of dollars’ worth of their equity; and a growing fear that the instability at the company could lead to its unraveling, which would squander so much promise and hard work.
Faced with the possibility of OpenAI falling apart, Sutskever’s resolve immediately started to crack. OpenAI was his baby, his life; its dissolution would destroy him. He began to plead with his fellow board members to reconsider their position on Altman.
Meanwhile, Murati’s interim position was being challenged. The conflagration within the company was also spreading to a growing circle of investors. Murati now was unwilling to explicitly throw her weight behind the board’s decision to fire Altman. Though her feedback had helped instigate it, she had not participated herself in the deliberations.
By Monday morning, the board had lost. Murati and Sutskever flipped sides. Altman would come back; there was no other way to save OpenAI.
I was already working on a book about OpenAI at the time, and in the weeks that followed the board crisis, friends, family, and media would ask me dozens of times: What did all this mean, if anything? To me, the drama highlighted one of the most urgent questions of our generation: How do we govern artificial intelligence? With AI on track to rewire a great many other crucial functions in society, that question is really asking: How do we ensure that we’ll make our future better, not worse?
The events of November 2023 illustrated in the clearest terms just how much a power struggle among a tiny handful of Silicon Valley elites is currently shaping the future of this technology. And the scorecard of this centralized approach to AI development is deeply troubling. OpenAI today has become everything that it said it would not be. It has turned into a nonprofit in name only, aggressively commercializing products such as ChatGPT and seeking historic valuations. It has grown ever more secretive, not only cutting off access to its own research but shifting norms across the industry to no longer share meaningful technical details about AI models. In the pursuit of an amorphous vision of progress, its aggressive push on the limits of scale has rewritten the rules for a new era of AI development. Now every tech giant is racing to out-scale one another, spending sums so astronomical that even they have scrambled to redistribute and consolidate their resources. What was once unprecedented has become the norm.
As a result, these AI companies have never been richer. In March, OpenAI raised $40 billion, the largest private tech-funding round on record, and hit a $300 billion valuation. Anthropic is valued at more than $60 billion. Near the end of last year, the six largest tech giants together had seen their market caps increase by more than $8 trillion after ChatGPT. At the same time, more and more doubts have risen about the true economic value of generative AI, including a growing body of studies that have shown that the technology is not translating into productivity gains for most workers, while it’s also eroding their critical thinking.
In a November Bloomberg article reviewing the generative-AI industry, the staff writers Parmy Olson and Carolyn Silverman summarized it succinctly. The data, they wrote, “raises an uncomfortable prospect: that this supposedly revolutionary technology might never deliver on its promise of broad economic transformation, but instead just concentrate more wealth at the top.”
Meanwhile, it's not just a lack of productivity gains that many in the rest of the world are facing. The exploding human and material costs are settling onto wide swaths of society, especially the most vulnerable. The people I met around the world, whether workers and rural residents in the global North or impoverished communities in the global South, are all suffering new degrees of precarity. Workers in Kenya earned abysmal wages to filter out violence and hate speech from OpenAI’s technologies, including ChatGPT. Artists are being replaced by the very AI models that were built from their work without their consent or compensation. The journalism industry is atrophying as generative-AI technologies spawn heightened volumes of misinformation. Before our eyes, we’re seeing an ancient story repeat itself: Like empires of old, the new empires of AI are amassing extraordinary riches across space and time at great expense to everyone else.
To quell the rising concerns about generative AI’s present-day performance, Altman has trumpeted the future benefits of AGI ever louder. In a September 2024 blog post, he declared that the “Intelligence Age,” characterized by “massive prosperity,” would soon be upon us. At this point, AGI is largely rhetorical—a fantastical, all-purpose excuse for OpenAI to continue pushing for ever more wealth and power. Under the guise of a civilizing mission, the empire of AI is accelerating its global expansion and entrenching its power.
As for Sutskever and Murati, both parted ways with OpenAI after what employees now call “The Blip,” joining a long string of leaders who have left the organization after clashing with Altman. Like many of the others who failed to reshape OpenAI, the two did what has become the next-most-popular option: They each set up their own shops, to compete for the future of this technology.
This essay has been adapted from Karen Hao’s forthcoming book, Empire of AI.