SIMPPLE Unveils Gemini: Revolutionary 3-in-1 Robot for Security, Concierge & Cleaning Services

SIMPPLE (NASDAQ: SPPL) released a shareholder letter highlighting its achievements in 2024 and its outlook for 2025. The company made significant progress in robotics, launching Gemini, the world’s first multifunctional robot with security surveillance, digital concierge, and cleaning capabilities. SIMPPLE expanded internationally, establishing a presence in Australia and forming distribution partnerships across multiple countries.

Key developments include the completion of SIMPPLE A.I. pilot trials, enhancement of facility management solutions, and a joint venture with Evolve Consulting ApS for ESG audit capabilities. The company regained Nasdaq compliance in December 2024 and plans to relocate to a larger office by Q1 2025. The global service robotics market is projected to grow by $90.4 billion at a 30.25% CAGR (2024-2028), while the compliance management software market is expected to reach $75.8 billion by 2031.


Positive


  • Launch of innovative Gemini multifunctional robot combining security, concierge, and cleaning capabilities

  • Successful international expansion with new office in Australia and multiple distribution agreements

  • Completion of SIMPPLE A.I. pilot trials with reputable building owners

  • Strategic joint venture with Evolve Consulting for ESG audit capabilities

  • Regained Nasdaq listing compliance in December 2024

  • Patent award in Singapore for SIMPPLE A.I. process

Insights


The CEO letter reveals several critical strategic developments that position SIMPPLE for significant market expansion. The launch of Gemini, the world’s first multifunctional robot combining security, concierge, and cleaning capabilities, represents a breakthrough in the service robotics market, which is projected to grow by $90.4 billion at a 30.25% CAGR (2024-2028). The successful completion of SIMPPLE A.I. trials and a patent award in Singapore demonstrate meaningful progress in commercializing the company’s proprietary technology stack.

The strategic partnership with Evolve Consulting for ESG reporting capabilities taps into the compliance management software market, projected to reach $75.8 billion by 2031. The company’s expansion into multiple international markets and planned entry into Europe and the USA in 2025 significantly broadens its addressable market.

Several indicators point to improving financial health and market positioning. The company’s recent compliance with Nasdaq listing requirements strengthens its capital market standing. Its diversification strategy through new revenue streams – including ESG consulting, IP monetization, and an expanded geographical presence – reduces business risk and enhances growth potential.

The anticipated Q1 2025 revenue targets, supported by Singapore government incentives, suggest strong near-term financial performance. The investment in a larger office space indicates confidence in the company’s future growth trajectory. The expansion into high-value sectors like aviation, healthcare, and education, combined with a global CAFM market projected to reach $2.2 billion by 2034, presents substantial revenue opportunities.

The company’s strategic positioning aligns well with major market trends in facilities management technology. The focus on integrated solutions combining robotics, AI and ESG compliance addresses critical pain points in the industry. The partnership strategy, particularly in entering new markets through local distributors, reduces market entry risks while accelerating expansion.

The engagement of Korn Ferry for organizational restructuring demonstrates commitment to building robust operational capabilities. The development of open-API platforms and brand-agnostic solutions increases market adaptability and potential customer base. The timing of their ESG compliance platform launch coincides with increasing regulatory pressures and market demand for sustainability reporting tools.












Singapore, Jan. 10, 2025 (GLOBE NEWSWIRE) — SIMPPLE Ltd. (NASDAQ: SPPL) (“SIMPPLE” or “the Company”), a leading technology provider and innovator in the facilities management (FM) sector, today announced that Norman Schroeder, the Company’s Chief Executive Officer, has issued the following Letter to Shareholders.

Dear Shareholders:

As we reflect on and assess the year that has been, I would like to thank you for your continued confidence and support throughout 2024, a transformative year of growth in which we delivered on key committed milestones and organizational change, laying the groundwork for 2025 and beyond. SIMPPLE remains focused on its commitment to deliver innovative product solutions and continued growth through sector diversification and its planned international expansion. This will be underpinned by the commercialization of SIMPPLE’s proprietary systems and patented robotics, innovative software solutions, and in-house developed A.I.-driven analytics.

Human Resource Strategy and Reorganization

As part of our ongoing commitment to establish SIMPPLE as an industry leader and partner of choice, we began the year with a strategic business-wide review and a reshuffle of our leadership team. The Company made several new executive appointments while repositioning key existing management roles to maintain organizational continuity and retain critical competencies.

Furthermore, the Company engaged global organizational consulting firm Korn Ferry to conduct a comprehensive business-wide review, benchmarking SIMPPLE’s organizational structure, processes, employee grading and development, and remuneration standards against industry-leading organizations. This initiative has positioned SIMPPLE’s leadership team to meet the demands of its global expansion and strategic objectives for continued growth.

Product Development and Commercialization

Robotics Automation
SIMPPLE made significant strides in the robotics division with the major sale and deployment of autonomous cleaning robots in railway stations and airport terminals, amongst others, as reported last year. Our range of SIMPPLE Robotics was also recognized at overseas trade shows, winning the ISSA Excellence Award (Innovation) for large equipment in Australia and CleanNZ Service & Technology Award in New Zealand. These contracts and award wins validate the quality, innovation, and effectiveness of our products in the eyes of industry professionals and customers.

In mid-2024, SIMPPLE also launched Gemini, the world’s first multifunctional robot with security surveillance, digital concierge, and cleaning capabilities, at the CleanEnviro Summit, an international event held in Singapore. Built with the intention of retrofitting traditional cleaning robots with A.I. video analytics functionalities, this groundbreaking innovation enables service robotics to be more intelligent and fit-for-purpose for integrated facility operations. Gemini robots are deployed in commercial real estate premises and residential condominiums and are expected to proliferate as integrated facility services operators continue to look for multifunctional capabilities and artificial intelligence (A.I.) to drive greater accountability in operations and reduce costs.

This optimism is supported by a May 2024 report by Technavio, which estimated that the global service robotics market will grow by a CAGR of 30.25%, or $90.4 billion, from 2024 to 2028. This rapid growth, said Technavio, will be driven by “continuing integration of advanced technologies such as IoT, A.I., and natural language processing into service robots, and by world governments pouring significant investment into these technologies.”

Software and Artificial Intelligence
SIMPPLE continues to expand its capabilities beyond its core cleaning and security technologies, with a strategic focus on becoming a comprehensive end-to-end solution provider in facilities management. In 2024, the Company took steps to enhance our solution suite to include energy, lighting, water and asset monitoring and management, which help clients reduce their environmental impact and operational costs. Additionally, SIMPPLE signed a Memorandum of Understanding with a leading pest management technology provider, enabling us to offer a holistic technology package to facility operators and building owners. This collaboration reinforces our commitment as a brand agnostic open-API platform and enhances our ability to address the diverse needs of our clients, ensuring a seamless, efficient, and sustainable approach to managing facilities.

In the second half of 2024, the Company successfully completed pilot trials with reputable building owners in Singapore, developing and mobilizing SIMPPLE A.I., our next-generation Autonomic Intelligence Engine (A.I.E.). The engine leverages computer vision analytics and integrates different brands of autonomous robots with CCTV camera systems, lifts, and doors. This initiative is a pivotal validation of the future of technology-enabled integrated facility operations: building owners and building service contractors can collectively achieve quicker response times to incidents, along with improved workforce and cost efficiency. I hope to share positive developments in SIMPPLE A.I. offerings in this new year.

We expect that 2025 will see our A.I.E.-directed software and computer vision analytics platform gain significant adoption in the facilities management sector. In fact, many credible research reports, including one published in April 2024 by Future Market Insights, underscore the potential of computer-aided facility management (CAFM) systems, collectively expected to help elevate this market sector to a valuation of $2.2 billion by 2034. This growth will be driven by the integration of CAFM with Internet-of-Things (IoT) devices for real-time data collection as well as use of A.I. for predictive maintenance and automated workflows.

Strategic Partnerships and International Expansion

As part of our ambitious international expansion strategy, SIMPPLE has successfully extended its presence beyond Singapore, establishing an office in Australia to serve the broader Australia and New Zealand region. On top of existing contracts in Hong Kong, Malaysia, and Qatar, SIMPPLE expanded into numerous other countries in the Southeast Asia region and the rest of the world in 2024. These new ventures reflect our growing footprint and the increasing demand for our technologies and services worldwide. We have entered into distribution agreements with partners based in Australia, Canada, New Zealand, Thailand, and Vietnam, further strengthening our global reach. With our sights set on Europe and the USA in 2025, we are currently engaged in partnership discussions in these regions, and we look forward to sharing more details later this year. This continued global expansion underscores our commitment to becoming a leading international technology provider in facilities management.

Diversified Revenue Streams with ESG and Intellectual Property

To further strengthen SIMPPLE’s proposition as an end-to-end integrated software platform for building owners, the Company has actively taken steps to diversify its revenue streams and strengthen its long-term growth prospects. In late 2024, SIMPPLE signed a joint venture agreement with Evolve Consulting ApS, enabling both parties to provide Environmental, Social, and Governance (ESG) audit and reporting capabilities for built environment stakeholders in response to the increasing demand for sustainability-focused reporting requirements. Branded SIMPPLE-Evolve, the platform automates compliance assessments and ensures adherence to the latest ESG standards in Europe and emerging global ESG frameworks.

According to a May 2024 research report by Verified Market Research, the global compliance management software market is projected to grow at a 10.9% CAGR, from $33.1 billion in 2024 to $75.8 billion by 2031. This rapid growth is driven by the growing emphasis on sustainability and regulatory compliance, and by mounting pressure from stakeholders. Companies today are striving to reduce their environmental footprint and align with global sustainability objectives.
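As a quick sanity check on the compound annual growth rate (CAGR) figures quoted above, the sketch below (function names are ours, not from the report) applies the standard compounding formula. Note that compounding $33.1 billion at 10.9% over the seven years from 2024 to 2031 yields only about $68 billion; eight compounding periods are needed to reach the quoted ~$75.8 billion, so the report presumably measures from a 2023 base.

```python
def cagr(start: float, end: float, periods: int) -> float:
    """Compound annual growth rate as a fraction (0.109 == 10.9%)."""
    return (end / start) ** (1 / periods) - 1

def project(start: float, rate: float, periods: int) -> float:
    """Value after compounding `rate` for `periods` years."""
    return start * (1 + rate) ** periods

# $33.1B compounded at 10.9% per year:
print(round(project(33.1, 0.109, 7), 1))  # 7 periods (2024 -> 2031): 68.3
print(round(project(33.1, 0.109, 8), 1))  # 8 periods: 75.7, close to the $75.8B target
```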

In addition to the SIMPPLE A.I. project mentioned earlier, we are also pleased to announce that we have been building our intellectual property (IP) portfolio, with the award of a patent in Singapore for the SIMPPLE A.I. process. As we accumulate more trademarks, patents, and design applications globally, we plan to leverage these IP assets to create new revenue opportunities through strategic monetization, ensuring that we remain at the forefront of innovation in our industry.

Office Expansion and Relocation Expected in 2025

Aligned with these broader expansion efforts, SIMPPLE has taken significant steps to secure a larger and more productive office space to support its strategic vision for growth. The move, which is expected to be completed by Q1 2025, positions the Company to better leverage synergies with key players in the built environment ecosystem. The new premises will allow us to increase brand visibility with both the Singapore government and foreign delegates, explore potential partnership opportunities with complementary technology solution providers to further enhance our service offerings, and pilot and scale innovative technologies in a larger built setting. This transition will serve as a hub for innovation and collaboration, supporting our efforts to drive operational excellence and growth in the years ahead.

Outlook for 2025

Looking ahead to 2025, SIMPPLE is poised for continued growth and expansion. With a strengthened leadership team, diversified service offerings, and a growing international presence, we are well-positioned to capitalize on emerging opportunities across the world. Our focus will remain on enhancing operational efficiency, driving sales and innovation, and deepening partnerships within the built environment ecosystem.

In December 2024, SIMPPLE successfully regained compliance with Nasdaq listing requirements, a key milestone that reinforces our financial stability and commitment to governance excellence. We are confident in our ability to surpass our revenue targets in Q1 2025, supported by strong financial incentives from the Singapore government to drive robotics and automation across various industries.

With momentum leading into 2025, we remain steadfast and committed to advancing our end-to-end, fit-for-purpose solutions agenda, and we anticipate a great year ahead, with more significant milestones and news to be shared. 2025 will be the year in which SIMPPLE sees its strategic investments in key areas, such as robotics and innovative sensing technologies, advanced A.I. automation, and ESG compliance and reporting, come to fruition and accelerate. We will continue to leverage our strengths in key industries such as aviation, healthcare, and higher education to deliver best-in-class solutions and long-term value to our clients.

To our shareholders, thank you once again for your trust and confidence as we continue to execute our vision. I look forward to keeping you informed throughout 2025, while remaining focused on building a company that delivers meaningful solutions, sustainable growth, and shareholder value well into the future.

Sincerely,

Norman Schroeder
Chief Executive Officer
SIMPPLE Ltd.

About SIMPPLE LTD.

Headquartered in Singapore, SIMPPLE LTD. is an advanced technology solution provider in the emerging PropTech space, focused on helping facilities owners and managers manage facilities autonomously. Founded in 2016, the Company has a strong foothold in the Singapore facilities management market, serving over 60 clients in both the public and private sectors and extending out of Singapore into Australia and the Middle East. The Company has developed its proprietary SIMPPLE Ecosystem, to create an automated workforce management tool for building maintenance, surveillance and cleaning comprised of a mix of software and hardware solutions such as robotics (both cleaning and security) and Internet-of-Things (“IoT”) devices. 

For more information on SIMPPLE, please visit: https://www.simpple.ai/

Safe Harbor Statement

This press release contains forward-looking statements. In addition, from time to time, we or our representatives may make forward-looking statements orally or in writing. We base these forward-looking statements on our expectations and projections about future events, which we derive from the information currently available to us. Such forward-looking statements relate to future events or our future performance, including: our financial performance and projections; our growth in revenue and earnings; and our business prospects and opportunities. You can identify forward-looking statements by those that are not historical in nature, particularly those that use terminology such as “may,” “should,” “expects,” “anticipates,” “contemplates,” “estimates,” “believes,” “plans,” “projected,” “predicts,” “potential,” or “hopes” or the negative of these or similar terms. In evaluating these forward-looking statements, you should consider various factors, including: our ability to change the direction of the Company; our ability to keep pace with new technology and changing market needs; and the competitive environment of our business. These and other factors may cause our actual results to differ materially from any forward-looking statement.

Forward-looking statements are only predictions. The forward-looking events discussed in this press release and other statements made from time to time by us or our representatives may not occur, and actual events and results may differ materially and are subject to risks, uncertainties, and assumptions about us. We are not obligated to publicly update or revise any forward-looking statement, whether as a result of new information, future events, or otherwise. Because of these uncertainties and assumptions, the forward-looking events discussed in this press release might not occur.

For investor and media queries, please contact:
SIMPPLE LTD.
Investor Relations Department
Email: ir@simpple.ai

Visit the Investor Relations Website: https://www.investor.simpple.ai/

Skyline Corporate Communications Group, LLC
Scott Powell, President
1177 Avenue of the Americas, 5th Floor
New York, NY 10036
Tel: (646) 893-5835
Email: info@skylineccg.com  









FAQ



What is SIMPPLE’s (SPPL) new Gemini robot and what are its capabilities?


Gemini is SIMPPLE’s multifunctional robot launched in mid-2024, featuring security surveillance, digital concierge, and cleaning capabilities. It’s designed to retrofit traditional cleaning robots with A.I. video analytics and is currently deployed in commercial real estate and residential condominiums.


How is SIMPPLE (SPPL) expanding internationally in 2024-2025?


SIMPPLE established an office in Australia, serving Australia and New Zealand, and entered distribution agreements with partners in Australia, Canada, New Zealand, Thailand, and Vietnam. The company plans to expand into Europe and USA in 2025.


What is the significance of SIMPPLE’s (SPPL) joint venture with Evolve Consulting?


The joint venture, branded as SIMPPLE-Evolve, provides ESG audit and reporting capabilities for built environment stakeholders, automating compliance assessments and ensuring adherence to latest ESG standards in Europe and global frameworks.


When will SIMPPLE (SPPL) complete its office relocation?


SIMPPLE expects to complete its office relocation by Q1 2025, moving to a larger space to support its strategic growth vision and enhance collaboration opportunities.


What are the market growth projections for SIMPPLE’s (SPPL) key business segments?


The global service robotics market is projected to grow by $90.4 billion at a 30.25% CAGR from 2024 to 2028, while the compliance management software market is expected to grow at a 10.9% CAGR, reaching $75.8 billion by 2031.






LIVE: Sam Altman on Building the ‘Core AI Subscription’


Pat Grady: Our next guest needs no introduction, so I’m not gonna bother introducing him—Sam Altman. I will just say Sam is now three for three in joining us to share his thoughts at the three AI Ascents that we’ve had, which we really appreciate. So I just want to say thank you for being here.

Sam Altman: This was our first office.

[applause]

Pat Grady: That’s right. Oh, that’s right. Say that again.

Sam Altman: Yeah, this was—this was our first office. So it’s nice to be back.

Alfred Lin: Let’s go back to the first office here. You started in 2016?

Sam Altman: Yeah.

Alfred Lin: 2016. We just had Jensen here, who said that he delivered the first DGX-1 system over here.

Sam Altman: He did, yeah. It’s amazing how small that thing looks now.

Alfred Lin: Oh, versus what?

Sam Altman: Well, the current boxes are still huge, but yeah, it was a fun throwback.

Alfred Lin: How heavy was it?

Sam Altman: That was still when you could kind of like lift one yourself. [laughs]

Alfred Lin: You said it was about 70 pounds.

Sam Altman: I mean, it was heavy, but you could carry it.

Alfred Lin: So did you imagine that you’d be here today in 2016?

Sam Altman: No. It was like we were sitting over there, and there were 14 of us or something.

Alfred Lin: And you were hacking on this new system?

How OpenAI got to ChatGPT

Sam Altman: I mean, even that was like a—we were sitting around looking at whiteboards, trying to talk about what we should do. This was a—it’s almost impossible to sort of overstate how much we were like a research lab with a very strong belief and direction and conviction, but no real kind of like action plan. I mean, not only was, like, the idea of a company or a product sort of unimaginable, the specific—like, LLMs as an idea were still very far off. We were trying to play video games.

Alfred Lin: Trying to play video games. Are you still trying to play video games?

Sam Altman: No, we’re pretty good at that.

Alfred Lin: All right. So it took you another six years for the first consumer product to come out, which is ChatGPT. Along the way, how did you sort of think about milestones to get something to that level?

Sam Altman: It’s like an accident of history. The first consumer product was not ChatGPT.

Alfred Lin: That’s right.

Sam Altman: It was Dall-E. The first product was the API. So we had built—you know, we kind of went through a few different things. We were—a few directions that we really wanted to bet on. Eventually, as I mentioned, we said, “Well, we gotta build a system to see if it’s working, and we’re not just writing research papers. So we’re gonna see if we can, you know, play a video game. Well, we’re gonna see if we can do a robot hand. We’re gonna see if we can do a few other things.”

And at some point in there, one person, and then eventually a team, got excited about trying to do unsupervised learning and to build language models. And that led to GPT-1, and then GPT-2. And by the time of GPT-3, we both thought we had something that was kind of cool, but we couldn’t figure out what to do with it. And also we realized we needed a lot more money to keep scaling. You know, we had done GPT-3, we wanted to go to GPT-4. We were heading into the world of billion-dollar models. It’s, like, hard to do those as a pure science experiment, unless you’re like a particle accelerator or something. Even then it’s hard.

So we started thinking, okay, we both need to figure out how this can become a business that can sustain the investment that it requires. And also we have a sense that this is heading towards something actually useful. And we had put GPT-2 out as model weights, and not that much had happened.

One of the things that I had just observed about companies’ products in general is if you do an API, it usually works somehow on the upside. This is, like, true across many, many YC companies. And also that if you make something much easier to use, there’s usually a huge benefit to that. So we’re like, well, it’s kind of hard to run these models that are getting big. We’ll go write some software, do a really good job of running them, and also we’ll then, rather than build a product because we couldn’t figure out what to build, we will hope that somebody else finds something to build.

And so I forget exactly when, but maybe it was like June of 2020, we put out GPT-3 in the API. And the world didn’t care, but sort of Silicon Valley did. They’re like, “Oh, this is kind of cool. This is pointing at something.” And there was this weird thing where, like, we got almost no attention from most of the world. And some startup founders were like, “Oh, this is really cool.” Or some of them are like, “This is AGI.”

The only people that built real businesses with the GPT-3 API that I can remember were these company—a few companies that did, like, copywriting as a service. That was kind of the only thing GPT-3 was over the economic threshold on. But one thing we did notice, which eventually led to ChatGPT, is even though people couldn’t build a lot of great businesses with the GPT-3 API, people love to talk to it in the Playground.

And it was terrible at chat. We had not, at that point, figured out how to do RLHF to make it easy to chat with. But people loved to do it anyway. And in some sense, that was the kind of only killer use, other than copywriting, of the API product that led us to eventually build ChatGPT.

By the time GPT-3.5 came out, there were maybe, like, eight categories instead of one category where you could build a business with the API. But our conviction that people just want to talk to the model had gotten really strong. So we had done Dall-E, and Dall-E was doing okay. But we knew we kind of wanted to build—especially along with the fine tuning we were able to do, we knew we wanted to build this model, this product that let you talk to the model.

Alfred Lin: And it launched in 2022.

Sam Altman: Yes.

Alfred Lin: Yeah, that’s six years from when the first …

Sam Altman: November 30, 2022. Yeah.

Alfred Lin: So there’s a lot of work leading up to that. And 2022, it launched. Today, it has over 500 million people who talk to it on a weekly basis.

Sam Altman: Yeah.

Alfred Lin: [laughs] All right. All right. So by the way, get ready for some audience questions, because that was Sam’s request. You’ve been here for every single one of the Ascents, as Pat mentioned, and there’s been some—lots of ups and downs, but seems like the last six months it’s just been shipping, shipping, shipping. Shipped a lot of stuff. And it’s amazing to see the product velocity, the shipping velocity continue to increase. So this is like multi, sort of, part question. How have you gotten a large company to, like, increase product velocity over time?

Sam Altman: I think a mistake that a lot of companies make is they get big and they don’t do more things. So they just, like, get bigger because you’re supposed to get bigger, and they still ship the same amount of product. And that’s when, like, the molasses really takes hold. Like, I am a big believer that you want everyone to be busy. You want teams to be small, you want to do a lot of things relative to the number of people you have. Otherwise, you just have, like, 40 people in every meeting and huge fights over who gets what tiny part of the product.

There was this old observation of business that a good executive is a busy executive because you don’t want people, like, muddling around. But I think it’s like a good—you know, at our company and many other companies, like, researchers, engineers, product people, they drive almost all the value. And you want those people to be busy and high impact. So if you’re going to grow, you better do a lot more things, otherwise you kind of just have a lot of people sitting in a room fighting or meeting or talking about whatever. So we try to have, you know, relatively small numbers of people with huge amounts of responsibility. And the way to make that work is to do a lot of things.

And also, like, we have to do a lot of things. I think we really do now have an opportunity to go build one of these important internet platforms. But to do that, like, if we really are going to be people’s personalized AI that they use across many different services and over their life and across all of these different kind of main categories and all the smaller ones that we need to figure out how to enable, then that’s just a lot of stuff to go build.

Building the core AI subscription

Alfred Lin: Anything you’re particularly proud of that you’ve launched in the last six months?

Sam Altman: I mean, the models are so good now. Like, they still have areas to get better, of course, and we’re working on that fast. But, like, I think at this point, ChatGPT is a very good product because the model is very good. I mean, there’s other stuff that matters, too, but I’m amazed that one model can do so many things so well.

Alfred Lin: You’re building small models and large models. You’re doing a lot of things, as you said. So how does this audience stay out of your way and not be roadkill?

[laughter]

Sam Altman: I mean, like, I think the way to model us is we want to build—we want to be people’s, like, core AI subscription and way to use that thing. Some of that will be like what you do inside of ChatGPT. We’ll have a couple of other kind of like really key parts of that subscription, but mostly we will hopefully build this smarter and smarter model. We’ll have these surfaces, like future devices, future things that are sort of similar to operating systems, whatever.

And then we have not yet figured out exactly, I think, what the sort of API or SDK or whatever you want to call it is to really be our platform. But we will. It may take us a few tries, but we will. And I hope that that enables, like, just an unbelievable amount of wealth creation in the world, and other people to build onto that. But yeah, we’re going to go for, like, the core AI subscription and the model, and then the kind of core surfaces, and there will be a ton of other stuff to build.

Alfred Lin: So don’t be the core AI subscription. But you can do everything else.

Sam Altman: We’re gonna try. I mean, if you can make a better core AI subscription offering than us, go ahead. That’d be great. Okay.

Alfred Lin: It’s rumored that you’re raising $40 billion or something like that at a $340 billion valuation. Those are rumors. I don’t know if this …

Sam Altman: I think we announced that we’re raising …

Alfred Lin: Okay. Well, I just want to make sure that you announced it. What’s your scale of ambition from there, from here?

Sam Altman: We’re going to try to make great models and ship good products, and there’s no master plan beyond that. Like, we’re gonna—I think, like …

Alfred Lin: Sure.

[laughter]

Sam Altman: No, I mean, I see plenty of OpenAI people in the audience. They can vouch for this. Like, we don’t—we don’t sit there and have—like, I am a big believer that you can kind of, like, do the things in front of you, but if you try to work backwards from, like, kind of we have this crazy complex thing, that doesn’t usually work as well. We know that we need tons of AI infrastructure.

Like, we know we need to go build out massive amounts of, like, AI factory volume. We know that we need to keep making models better. We know that we need to, like, build a great top of the stack, like, kind of consumer product and all the pieces that go into that. But we pride ourselves on being, like, nimble and adjusting tactics as the world adjusts.

And so the products, you know, the products that we’re going to build next year, we’re probably not even thinking about right now. And we believe we can build a set of products that people really, really love, and we have, like, unwavering confidence in that, and we believe we can build great models. I’ve actually never felt more optimistic about our research roadmap than I do right now.

Alfred Lin: What’s on the research roadmap?

Sam Altman: Really smart models.

[laughter]

Sam Altman: But in terms of the steps in front of us, we kind of take those one or two at a time.

Alfred Lin: So you believe in working forwards, not necessarily working backwards.

Sam Altman: I have heard some people talk about these brilliant strategies of how this is where they’re going to go and they’re going to work backwards. And this is take over the world. And this is the thing before that, and this is that, and this is that, and this is that, and this is that, and here’s where we are today. I have never seen those people, like, really massively succeed.

Alfred Lin: Got it. Who has a question? There’s a mic coming your way being thrown.

The generational divide in AI

Audience Member: What do you think the larger companies are getting wrong about transforming their organizations to be more AI native in terms of both using the tooling as well as producing products? Smaller companies are clearly just beating the crap out of larger ones when it comes to innovation here.

Sam Altman: I think this basically happens every major tech revolution. There’s nothing, to me, surprising about it. The thing that they’re getting wrong is the same thing they always get wrong, which is, like, people get incredibly stuck in their ways, organizations get incredibly stuck in their ways. If things are changing a lot every quarter or two, and you have, like, an information security council that meets once a year to decide what applications they’re going to allow and what it means to, like, put data into a system—like, it’s so painful to watch what happens here.

But, like, you know, this is creative destruction. This is why startups win. This is like how the industry moves forward. I’d say, I feel, like, disappointed but not surprised at the rate that big companies are willing to do this. My kind of prediction would be that there’s another, like, couple of years of fighting, pretending like this isn’t going to reshape everything, and then there’s like a capitulation and a last-minute scramble and it’s sort of too late. And in general, startups just sort of like blow past people doing it the old way.

I mean, this happens to people, too. Like, you know, maybe you talk to an average 20-year-old and watch how they use ChatGPT, and then you go talk to, like, an average 35-year-old on how they use it or some other service. And, like, the difference is unbelievable. It reminds me of, like, you know, when the smartphone came out and, like, every kid was able to use it super well, and older people just took, like, three years to figure out how to do basic stuff. And then, of course, people integrate. But the sort of, like, generational divide on AI tools right now is crazy. And I think companies are just another symptom of that.

Alfred Lin: Anybody else have a question?

Audience Member: Just to follow up on that. What are the cool use cases that you’re seeing young people using with ChatGPT that might surprise us?

Sam Altman: They really do use it like an operating system. They have complex ways to set it up, to connect it to a bunch of files, and they have fairly complex prompts memorized in their head or in something where they paste in and out. And I mean, that stuff, I think, is all cool and impressive.

And there’s this other thing where, like, they don’t really make life decisions without asking, like, ChatGPT what they should do. And it has, like, the full context on every person in their life and what they’ve talked about. And, you know, like, the memory thing has been a real change there. But yeah, I think it’s a gross oversimplification, but, like, older people use ChatGPT as a Google replacement. Maybe people in their 20s and 30s use it as, like, a life advisor or something. And then, like, people in college use it as an operating system.

Alfred Lin: How do you use it inside of OpenAI?

Sam Altman: I mean, it writes a lot of our code.

Alfred Lin: How much?

Sam Altman: I don’t know the number. And also, when people say the number, I think it’s always this very dumb thing, because, like, you can write …

Alfred Lin: Someone said Microsoft’s code is 20, 30 percent.

Sam Altman: Measuring by lines of code is just such an insane way to, like, I don’t know. Maybe the thing I could say is it’s writing meaningful code. Like, it’s writing—I don’t know how much, but it’s writing the parts that actually matter.

Alfred Lin: That’s interesting. Next question.

Audience Member: Hey Sam.

Alfred Lin: Is the mic going around?

Will the OpenAI API be around in 10 years?

Audience Member: Okay. Hey Sam. I thought it was interesting that the answer to Alfred’s question about where you guys want to go is focused mostly around consumer and being the core subscription, and also most of your revenue comes from consumer subscriptions. Why keep the API in 10 years?

Sam Altman: I really hope that all of this merges into one thing. Like, you should be able to sign in with OpenAI to other services. Other services should have an incredible SDK to take over the ChatGPT UI at some point. But to the degree that you are going to have a personalized AI that knows you, that has your information, that knows what you want to share later, and has all this context on you, you’ll want to be able to use that in a lot of places. Now I agree that the current version of the API is very far off that vision, but I think we can get there.

Audience Member: Yeah. Maybe I have a follow-up question to that one. You kind of took mine. But a lot of us who are building application-layer companies, we want to, like, use those building blocks, those different API components—maybe the Deep Research API, which is not a released thing, but could be—and build stuff with them. Is that going to be a priority, like, enabling that platform for us? How should we think about that?

Sam Altman: Yeah. I think—I hope—it’s something in between those: that there is sort of, like, a new protocol on the level of HTTP for the future of the internet, where things get federated and broken down into much smaller components, and agents are, like, constantly exposing and using different tools, and authentication, payment, data transfer are all built in at a level that everybody trusts; everything can talk to everything. And I don’t quite think we know what that looks like, but it’s coming out of the fog, and as we get a better sense for that—again, it’ll probably take us, like, a few iterations to get there, but that’s kind of where I would like to see things go.

Audience Member: Hey Sam, back here. My name is Roy. I’m curious. The AI would obviously do better with more input data. Is there any thought to feeding sensor data? And what type of sensor data, whether it’s temperature, you know, things in the physical world that you could feed in that it could better understand reality?

Sam Altman: People do that a lot. People put that into—people have whatever—they build things where they just put sensor data into an o3 API call or whatever. And for some use cases it does work super well. I’d say the latest models seem to do a good job with this, and they used to not, so we’ll probably bake it in more explicitly at some point, but there’s already a lot happening there.

Voice in ChatGPT

Audience Member: Hi Sam, I was really excited to play with the voice model in the playground. And so I have two questions. The first is: How important is voice to OpenAI in terms of stack ranking for infrastructure? And can you share a little bit about how you think it’ll show up in the product, in ChatGPT, the core thing?

Sam Altman: I think voice is extremely important. Honestly, we have not made a good enough voice product yet. That’s fine. Like, it took us a while to make a good enough text model, too. We will crack that code eventually, and when we do, I think a lot of people are going to want to use voice interaction a lot more.

When we first launched our current voice mode, the thing that was most interesting to me was it was a new stream on top of the touch interface. You could talk and be clicking around on your phone at the same time. And I continue to think there is something amazing to do about, like, voice plus GUI interaction that we have not cracked. But before that, we’ll just make voice really great. And when we do, I think there’s a whole—not only is it cool with existing devices, but I sort of think voice will enable a totally new class of devices if you can make it feel like truly human-level voice.

How central is coding?

Audience Member: Similar question about coding. I’m curious, is coding just another vertical application, or is it more central to the future of OpenAI?

Sam Altman: That one’s more central to the future of OpenAI. Coding, I think, will be how these models kind of—right now, if you ask ChatGPT something, you get text back, maybe you get an image. You would like to get a whole program back. You would like, you know, custom-rendered code for every response—or at least I would. You would like the ability for these models to go make things happen in the world. And writing code, I think, will be very central to how you, like, actuate the world and call a bunch of APIs or whatever. So I would say coding will be more of a central category. We’ll obviously expose it through our API and our platform as well, but ChatGPT should be excellent at writing code.

Alfred Lin: So we’re gonna move from the world of assistance to agents to basically applications all the way through?

Sam Altman: I think it’ll feel like very continuous, but yes.

Audience Member: So you have conviction in the roadmap about smarter models. Awesome. I have this mental model: there are some ingredients, like more data, bigger data centers, the transformer architecture, test-time compute. What’s an underrated ingredient, or something that’s going to be part of that mix that maybe isn’t in the mental model of most of us?

Sam Altman: I mean, that’s kind of the—each of those things are really hard. And, you know, obviously, like, the highest leveraged thing is still big algorithmic breakthroughs. And I think there still probably are some 10Xs or 100Xs left. Not very many, but even one or two is a big deal. But yeah, it’s kind of like algorithms, data, compute, those are sort of the big ingredients.

How to run a great research lab

Audience Member: Hi. So my question is, you run one of the best ML teams in the world. How do you balance between letting smart people like Isa chase Deep Research or something else that seems exciting, versus going top down and being like, “We’re going to build this, we’re going to make it happen. We don’t know if it’ll work.”

Sam Altman: There are some projects that require so much coordination that there has to be a little bit of, like, top down quarterbacking. But I think most people try to do way too much of that. I mean, this is like—there’s probably other ways to run good AI research or good research labs in general, but when we started OpenAI, we spent a lot of time trying to understand what a well-run research lab looks like. And you had to go really far back in the past.

In fact, almost everyone that could help advise us on this was dead. It had been a long time since there had been good research labs. And people ask us a lot, like, why does OpenAI repeatedly innovate, and why do the other AI labs, like, sort of copy? Or why does Biolab X not do good work while Biolab Y does, or whatever.

And we sort of keep saying, “Here’s the principles we’ve observed. Here’s how we learned them, here’s what we looked at in the past.” And then everybody says, “Great, but I’m gonna go do the other thing.” That’s fine, you came to us for advice, you do what you want. But I find it remarkable how much these few principles that we’ve tried to run our research lab on—which we did not invent, we shamelessly copied from other good research labs in history—have worked for us. And then the people who had some smart reason for why they were going to do something else—it didn’t work.

Audience Member: So it seems to me that one of the really fascinating things about these large models, as a lover of knowledge, is that they potentially embody—and allow us to answer—amazing longstanding questions in the humanities: about cyclical changes and artistic trends, or even the extent to which systematic prejudice and other such things are really happening in society—whether we can detect very subtle things we could never really do more than hypothesize about before. And I’m wondering whether OpenAI has a thought about, or even a roadmap for, working with academic researchers, say, to help unlock some of these new things we could learn for the first time in the humanities and the social sciences?

Sam Altman: We do, yeah. I mean, it’s amazing to see what people are doing there. We do have academic research programs where we partner and do some custom work, but mostly people just say, like, “I want access to the model or maybe I want access to the base model.” And I think we’re really good at that. One of the kind of cool things about what we do is so much of our incentive structure is pushed towards making the models as smart and cheap and widely accessible as possible, that that serves academics and really the whole world very well. So, you know, we do some custom partnerships, but we often find that what researchers or users really want is just for us to make the general model better across the board. And so we try to focus kind of 90 percent of our thrust vector on that.

Customization and the platonic ideal state

Audience Member: I’m curious how you’re thinking about customization. You mentioned the federated sign-in with OpenAI, bringing your memories, your context. I’m just curious whether you think customization and this application-specific post-training are a band-aid, or whether the answer is making the core models better, and how you’re thinking about that.

Sam Altman: I mean, in some sense, I think the platonic ideal state is a very tiny reasoning model with a trillion tokens of context that you put your whole life into. The model never retrains, the weights never customize, but that thing can reason across your whole context and do it efficiently. And every conversation you’ve ever had in your life, every book you’ve ever read, every email you’ve ever read, everything you’ve ever looked at is in there, plus all your data connected from other sources. And, you know, your life just keeps appending to the context, and your company just does the same thing for all your company’s data. We can’t get there today, but I think of anything else as a compromise off that platonic ideal. And that is how, I hope, we eventually do customization.

Alfred Lin: One last question in the back.

Value creation in the coming years

Audience Member: Hi Sam, thanks for your time. Where do you think most of the value creation will come from in the next 12 months? Would it be maybe advanced memory capabilities, or maybe security or protocols that allow agents to do more stuff and interact with the real world?

Sam Altman: I mean, in some sense the value will continue to come from really three things, like building out more infrastructure, smarter models, and building the kind of scaffolding to integrate this stuff into society. And if you push on those, I think the rest will sort itself out.

At a higher level of detail, I kind of think 2025 will be a year of sort of agents doing work; coding in particular I would expect to be a dominant category. I think there’ll be a few others, too. Next year is a year where I would expect more like sort of AIs discovering new stuff, and maybe we have AIs make some very large scientific discoveries or assist humans in doing that.

And I am kind of a believer that most of the sort of real sustainable economic growth in human history comes from once you’ve kind of spread out and colonized the Earth, most of it comes from just better scientific knowledge and then implementing that for the world. And then ‘27, I would guess, is the year where that all moves from the sort of intellectual realm to the physical world, and robots go from a curiosity to a serious economic creator of value. But that was like an off the top of my head kind of guess right now.

Alfred Lin: Can I close with a few quick questions?

Sam Altman: Great.

Alfred Lin: One of which is GPT-5. Is that going to be just smarter than all of us here?

Sam Altman: I mean, if you think you’re, like, way smarter than o3, then maybe you have a little bit of a ways to go, but o3 is already pretty smart.

Leadership advice for founders

Alfred Lin: [laughs] Two personal questions. Last time you were here, you’d just come off a blip with OpenAI. Given some perspective now and distance, do you have any advice for founders here about resilience, endurance, strength?

Sam Altman: It gets easier over time, I think. Like, you will face a lot of adversity in your journey as a founder, and the challenges get harder and higher stakes, but the emotional toll gets easier as you go through more bad things. So, you know, in some sense, yeah—even though abstractly the challenges get bigger and harder, your ability to deal with them, the sort of resilience you build up, grows with each one you go through.

And then I think the hardest thing about the big challenges that come as a founder is not the moment when they happen. Like, a lot of things go wrong in the history of a company. In the acute thing, you can kind of like—you know, you get a lot of support, you can function off a lot of adrenaline. Like, even the really big stuff, like, your company runs out of money and fails, like, a lot of people will come and support you, and you kind of get through it and go on to the new thing.

The thing that I think is harder to manage your own psychology through is the sort of, like, fallout after. People focus a lot on how to operate in that one moment during the crisis, but the really valuable thing to learn is how you, like, pick up the pieces. There’s much less talk about that. I’ve never actually found something good to point founders to, to go read—you know, not about how you deal with the real crisis on day zero or day one or day two, but about day 60, as you’re just trying to, like, rebuild after it. And that’s the area that I think you can practice and get better at.

Alfred Lin: Thank you, Sam. You’re officially still on paternity leave, I know. So thank you for coming in and speaking with us. Appreciate it.

Sam Altman: Thank you.

[applause]


People share the ‘totally unhinged’ things they’ve used ChatGPT for


Anxiety disorder affects nearly a fifth of the population in the United States alone. Nami.org reports that more than 19 percent of Americans suffer from an anxiety disorder, which should be distinguished from the ordinary “adrenaline” nerves someone might feel from public speaking or being stuck in traffic.

For those who know it, it can sometimes feel debilitating. As with many mental health diagnoses, there is a range of severities and causes. We may be “born with it” genetically, or a traumatic event may have triggered it. No matter why, or “how bad” it gets, it can feel especially isolating for those who endure it, and for those who want to help but don’t know what to say or do. Therapy can help, and, when needed, medication. But understanding it, for everyone involved, can be complicated.

https://www.youtube.com/watch?v=bvjkf8iurje — YouTube clip about anxiety (Psych Hub)

Anxiety isn’t like a cold you can catch and treat with an antibiotic. It’s hard to explain exactly how it feels to someone who doesn’t experience it. The best way I can describe it is that you’re always sitting in the uncomfortable pit of anticipation.

I don’t just mean existential dread like “Is there an afterlife?” or “Will I die alone?” I mean things like: “Will my car stall at a busy intersection? What if I need a root canal again someday? (I will.) Will he call? What if my dog walker forgets to come while I’m away? What if someone runs a red light? Did I say the right thing at the party? What’s my blood pressure?” Exhausted yet? Imagine questions big and small like these continuously running on a loop through the gray matter of a brain, dipping in and out of the logic of the frontal lobe, and then click, click, click as it snags on a jagged edge and repeats … over and over and over again.

Though well-intentioned, there are fixes people often offer that, at least for me, make the tension worse. Many mental health therapists have weighed in on the phrases best avoided and offered more helpful alternatives.

1) At laureltherapy.net, they start with the old chestnut: “Just relax.”

When every synapse in your brain is on high alert, someone telling you to “just dial it down” only makes things worse. It’s literally the opposite of what your brain chemistry is doing (and not by choice). It’s similar to “just calm down,” which for the same reason can feel dismissive and useless.

They offer instead: “I’m here for you.” It acknowledges your discomfort and gives you a soft place to land.

2) Another sentence to avoid: “You’re too sensitive.”

That would be like telling someone with a physical disability that it’s their fault. Instead, they suggest: “Your feelings make sense.”

Sometimes you just want to feel seen and heard, especially by those closest to you. The last thing anyone needs is to feel bad about feeling bad.

3) At EverydayHealth.com, Michelle Pugle (in a piece reviewed by Seth Gillihan, PhD) quotes Helen Egger, MD, and gives this advice:

Don’t say “You’re overthinking it.”

She gives a few options to try instead, but my favorite is: “You’re safe.”

It may sound corny, but when I’m really spiraling, it’s good to know that someone is by my side and isn’t judging my mind for thinking differently than theirs.

4) Pugle also advises against saying “Worrying won’t change anything.”

I can’t tell you how often I’m told this, and while it may be true, it again implies there’s nothing one can do in a moment of panic. She writes:

“Trying to calm someone’s anxiety by telling them their thoughts aren’t productive, aren’t worthwhile, or are a waste of time also invalidates their feelings and can even leave them feeling more distressed than before,” explains Egger.

Instead, try: “Do you want to do something to take your mind off things?”
This gives the impression that someone is genuinely willing to help and engage, not just criticize.

5) “It’s all in your head.”
The late Carrie Fisher once wrote about how much she hated when people told her that, as if it were somehow comforting. Paraphrasing, her response was essentially: “I know. It’s my head. Get it out of there!”

https://www.youtube.com/watch?v=A6YOGZ8PCE — YouTube

Laurel Therapy suggests trying: “Anxiety can be really tough.” Personally, I’d prefer: “How can I help?”

While it can sometimes feel frustrating, the key with anxiety is to be mindful that you’re not shaming or condescending.

Here are a few more concepts that help me:

GRATITUDE

I saw a movie called About Time a few years ago, written by Richard Curtis, who has a propensity for corniness. But this quote is really beautiful: “I just try to live every day as if I’ve deliberately come back to this one day, to enjoy it, as if it was the full final day of my extraordinary, ordinary life.” I just love the idea of pretending we’ve time-traveled to each moment of our lives on purpose. And this especially helps the anxious, because if it’s true that we’re always hurtling toward an unpredictable future instead of sitting where time wants us to be, it makes sense that we were there before and have come back to this moment to show it respect. Seeing every day and every thought as a gift rather than a fear. Now that’s something.

BREATHE

I’m sure you’ve heard of the benefits of meditation. They’re real. I’ve seen the practice of minding your breath and sitting still make big differences in people close to me. I haven’t managed to make meditation part of my daily routine, but that doesn’t mean I can’t keep trying. (Try, try again.) I take part in yoga and find it slows my mind considerably.

KNOWING YOU ARE NOT YOUR THOUGHTS

Our amygdalae (the part of the brain that, among other roles, triggers our response to threats, real or perceived) can play nasty tricks on us. We are not the sum total of every thought we’ve ever had. On the contrary, I believe we are what we do, not what we think. Our anxiety (or depression) doesn’t have to define us, especially when we know we’re responding to many threats that don’t even exist. We can be of service to others. Volunteer when possible, or simply be kind to those around you every day. That’s what makes us who we are. Personally, that idea calms me.


‘Empire of AI’ author on OpenAI’s cult of AGI and why Sam Altman tried to discredit her book


When OpenAI unleashed ChatGPT on the world in November 2022, it lit the fuse that ignited the generative AI era.

But Karen Hao, author of the new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, had already been covering OpenAI for years. The book comes out on May 20, and it reveals surprising new details about the company’s culture of secrecy and religious devotion to the promise of AGI, or artificial general intelligence.

Hao profiled the company for MIT Technology Review two years before ChatGPT launched, putting it on the map as a world-changing company. Now, she’s giving readers an inside look at pivotal moments in the history of artificial intelligence, including the moment when OpenAI’s board forced out CEO and cofounder Sam Altman. (He was later reinstated because of employee backlash.)

Empire of AI dispels any doubt that OpenAI’s belief in ushering in AGI to benefit all of humanity had messianic undertones. One of the many stories from Hao’s book involves Ilya Sutskever, cofounder and former chief scientist, burning an effigy on a team retreat. The wooden effigy “represented a good, aligned AGI that OpenAI had built, only to discover it was actually lying and deceitful. OpenAI’s duty, he said, was to destroy it.” Sutskever would later do this again at another company retreat, Hao wrote.

And in interviews with OpenAI employees about the potential of AGI, Hao details their “wide-eyed wonder” when “talking about how it would bring utopia. Someone said, ‘We’re going to reach AGI and then, game over, like, the world will be perfect.’ And then speaking to other people, when they were telling me that AGI could destroy humanity, their voices were quivering with that fear.”

Hao’s seven years of covering AI have culminated in Empire of AI, which details OpenAI’s rise to dominance, casting it as a modern-day empire. That Hao’s book reminded me of The Anarchy, the account of the OG corporate empire, The East India Company, is no coincidence. Hao reread William Dalrymple’s book while writing her own “to remind [herself] of the parallels of a company taking over the world.”

This is likely not a characterization that OpenAI wants. In fact, Altman went out of his way to discredit Hao’s book on X. “There are some books coming out about OpenAI and me. We only participated in two… No book will get everything right, especially when some people are so intent on twisting things, but these two authors are trying to.”

The two authors Altman named are Keach Hagey and Ashlee Vance, and they also have forthcoming books. The unnamed author was Hao, of course. She said OpenAI promised to cooperate with her for months, but never did.

We get into that drama in the interview below, plus OpenAI’s religious fervor for AGI, the harms AI has already inflicted on the Global South, and what else Hao would have included if she’d kept writing the book.



Mashable: I was particularly fascinated by this religious belief or faith that AGI could be achieved, but also without being able to define it. You wrote about Ilya [Sutskever] being seen as a kind of prophet and burning an effigy. Twice. I’d love to hear more of your thoughts on that. 

Karen Hao: I’m really glad that you used religious belief to describe that, because I don’t remember if I explicitly used that word, but I was really trying to convey it through the description. This was a thing that honestly was most surprising to me while reporting the book. There is so much religious rhetoric around AGI, you know, ‘AI will kill us’ versus ‘AI will bring us to utopia.’ I thought it was just rhetoric. 

When I first started reporting the book, the general narrative among more skeptical people is, ‘Oh, of course they’re going to say that AI can kill people, or AI will bring utopia, because it creates this image of AI being incredibly powerful, and that’s going to help them sell more products.’ 

What I was surprised by was, no, it’s not just that. Maybe there are some people who do just say this as rhetoric, but there are also people who genuinely believe these things. 

I spoke to people with wide-eyed wonder when they were talking about how it would bring utopia. Someone said, ‘We’re going to reach AGI and then, game over, like, the world will be perfect.’ And then speaking to other people, when they were telling me that AGI could destroy humanity, their voices were quivering with that fear. 




Ilya Sutskever (pictured here at a 2023 event in Tel Aviv with Sam Altman) burned a wooden effigy at a company retreat that represented AGI gone rogue.
Credit: Photo by Jack Guez / AFP / Getty Images

I was really shocked by that level of all-consuming belief that a lot of people within this space start to have, and I think part of it is because they’re doing something that is kind of historically unprecedented. The amount of power to influence the world is so profound that I think they start to need religion; some kind of belief system or value system to hold on to. Because you feel so inadequate otherwise, having all that responsibility. 

Also, the community is so insular. Because I talked with some people over several years, I noticed that the language they use and how they think about what they're doing fundamentally evolves. As you get more and more sucked into this world, you start using more and more religious language, and more and more of this perspective really gets to you.

It's like Dune, where [Lady Jessica] builds a myth around Paul Atreides, one she purposely constructs so that he becomes powerful, and they have this idea that this is the way to control people. To create a religion, you create a mythology around it. Not only do the people who hear it for the first time genuinely believe it, because they don't realize it was a construct, but Paul Atreides himself starts to believe it more and more, and it becomes a self-fulfilling prophecy. Honestly, when I was talking with people for the book, I was like, this is Dune.

Something I’ve been wondering lately is, what am I not seeing? What are they seeing that is making them believe this so fervently? 

I think what’s happening here is twofold. First, we need to remember that when designing these systems, AI companies prioritize their own problems. They do this both implicitly—in the way that Silicon Valley has always done, creating apps for first-world problems like laundry and food delivery, because that’s what they know—and explicitly. 

My book talks about how Altman has long pushed OpenAI to focus on AI models that can excel at code generation because he thinks they will ultimately help the company entrench its competitive advantage. As a result, these models are designed to best serve the people who develop them. And the farther away your life is from theirs in Silicon Valley, the more this technology begins to break down for you.

The second thing that's happening is more meta. Code generation has become the main use case in which AI models are most consistently delivering workers productivity gains, both for the reasons mentioned above and because code is particularly well suited to the strengths of AI models. Code is computable. 

To people who don’t code or don’t exist in the Silicon Valley worldview, we view the leaps in code-generation capabilities as leaps in just one use case. But in the AI world, there is a deeply entrenched worldview that everything about the world is ultimately, with enough data, computable. So, to people who exist in that mind frame, the leaps in code generation represent something far more than just code generation. It’s emblematic of AI one day being able to master everything.


How did your decision to frame OpenAI as a modern-day empire come to fruition?

I originally did not plan to focus the book that much on OpenAI. I actually wanted to focus the book on this idea that the AI industry has become a modern-day empire. And this was based on work that I did at MIT Technology Review in 2020 and 2021 about AI colonialism. 



It was exploring this idea that was starting to crop up a lot in academia and among research circles that there are lots of different patterns that we are starting to see where this pursuit of extremely resource-intensive AI technologies is leading to a consolidation of resources, wealth, power, and knowledge. And in a way, it’s no longer sufficient to kind of call them companies anymore.

To really understand the vastness and the scale of what’s happening, you really have to start thinking about it more as an empire-like phenomenon. At the time, I did a series of stories that was looking at communities around the world, especially in the Global South, that are experiencing this kind of AI revolution, but as vulnerable populations that were not in any way seeing the benefits of the technology, but were being exploited by either the creation of the technology or the deployment of it. 

And that’s when ChatGPT came out… and all of a sudden we were recycling old narratives of ‘AI is going to transform everything, and it’s amazing for everyone.’ So I thought, now is the time to reintroduce everything but in this new context. 

Then I realized that OpenAI was actually the vehicle to tell this story, because they were the company that completely accelerated the absolute colossal amount of resources that is going into this technology and the empire-esque nature of it all. 


Sam Altman, under President Donald Trump’s administration, announced OpenAI’s $500 billion Stargate Project to build AI infrastructure in the U.S.
Credit: Jim Watson / AFP / Getty Images

Your decision to weave the stories of content moderators and the environmental impact of data centers from the perspective of the Global South was so compelling. What was behind your decision to include that?

As I started covering AI more and more, I developed this really strong feeling that the story of AI and society cannot be understood exclusively from its centers of power. Yes, we need reporting to understand Silicon Valley and its worldview. But also, if we only ever stay within that worldview, you won’t be able to fully understand the sheer extent of how AI then affects real people in the real world. 

The world is not represented by Silicon Valley, and the global majority or the Global South are the true test cases for whether or not a technology is actually benefiting humanity, because the technology is usually not built with them in mind. 

All technology revolutions leave some people behind. But the problem is that the people who are left behind are always the same, and the people who gain are always the same. So are we really getting progress from technology if we’re just exacerbating inequality more and more, globally? 

That’s why I wanted to write the stories that were in places far and away from Silicon Valley. Most of the world lives that way without access to basic resources, without a guarantee of being able to put healthy food on the table for their kids or where the next paycheck is going to come from. And so unless we explore how AI actually affects these people, we’re never really going to understand what it’s going to mean ultimately for all of us.

Another really interesting part of your book was the closing off of the research community [as AI labs stopped openly sharing details about their models] and how that’s something that we totally take for granted now. Why was that so important to include in the book?

I was really lucky in that I started covering AI before all the companies started closing themselves off and obfuscating technical details. For me, it was a dramatic shift to watch companies go from being incredibly open — publishing their data, publishing their model weights, publishing analyses of how their models perform, giving independent auditors access to models — to the current state, where all we get is PR. So that was part of it, just saying: it wasn't actually like this before. 

And it is yet another example of why empires are the way to think about this, because empires control knowledge production. How they perpetuate their existence is by continuously massaging the facts and massaging science to allow them to continue to persist. 

But also, if it wasn’t like this before, I hope that it’ll give people a greater sense of hope themselves, that this can change. This is not some inevitable state of affairs. And we really need more transparency in how these technologies are developed. 



They’re the most consequential technologies being developed today, and we literally can’t say basic things about them. We can’t say how much energy they use, how much carbon they produce, we can’t even say where the data centers are that are being built half the time. We can’t say how much discrimination is in these tools, and we’re giving them to children in classrooms and to doctors’ offices to start supporting medical decisions. 

The levels of opacity are so glaring, and it’s shocking that we’ve kind of been lulled into this sense of normalcy. I hope that it’s a bit of a wake-up call that we shouldn’t accept this.

When you posted about the book, I knew that it was going to be a big thing. Then Sam Altman posted about the book. Have you seen a rise in interest, and does Sam Altman know about the Streisand Effect?


Sam Altman (pictured at a recent Senate hearing) alluded to ‘Empire of AI’ in an X post as a book OpenAI declined to participate in. Hao says she tried for six months to get their cooperation.
Credit: Nathan Howard / Bloomberg / Getty Images

Obviously, he’s a very strategic and tactical person and generally very aware of how things that he does will land with people, especially with the media. So, honestly, my first reaction was just… why? Is there some kind of 4D chess game? I just don’t get it. But, yeah, we did see a rise in interest from a lot of journalists being like, ‘Oh, now I really need to see what’s in the book.’

When I started the book, OpenAI said that they would cooperate with the book, and we had discussions for almost six months of them participating in the book. And then at the six-month mark, they suddenly reversed their position. I was really disheartened by that, because I felt like now I have a much harder task of trying to tell this story and trying to accurately reflect their perspective without really having them participate in the book. 

But I think it ended up making the book a lot stronger, because I ended up being even more aggressive in my reporting… So in hindsight, I think it was a blessing. 

Why do you think OpenAI reversed its decision to talk to you, but talked to other authors writing books about OpenAI? Do you have any theories?

When I approached them about the book, I was very upfront and said, ‘You know all the things that I’ve written. I’m going to come with a critical perspective, but obviously I want to be fair, and I want to give you every opportunity to challenge some of the criticisms that I might bring from my reporting.’ Initially, they were open to that, which is a credit to them.

I think what happened was it just kept dragging out, and I started wondering how sincere they actually were or whether they were offering this as a carrot to try and shape how many people I reached out to myself, because I was hesitant to reach out to people within the company while I was still negotiating for interviews with the communications team. But at some point, I realized I’m running out of time and I just need to go through with my reporting plan, so I just started reaching out to people within the company.

My theory is that it frustrated them that I emailed people directly, and because there were other book opportunities, they decided that they didn't need to participate in every book. They could just participate in the ones they wanted to. So it became a settled decision that they would no longer participate in mine, and would go with the others. 

The book ends at the beginning of January 2025, and so much has happened since then. If you were going to keep writing this book, what would you focus on?

For sure the Stargate Project and DeepSeek. The Stargate Project is just such a perfect extension of what I talk about in the book, which is that the level of capital and resources, and now the level of power infrastructure and water infrastructure that is being influenced by these companies is hard to even grasp.

Once again, we are getting to a new age of empire. They're literally land-grabbing and resource-grabbing. The Stargate Project was originally announced as a $500 billion spend over four years. The Apollo Program was $380 billion over 13 years, adjusted to 2025 dollars. If it actually goes through, it would be the largest amount of capital ever spent to build infrastructure for a technology whose track record is, so far, middling. 



We haven’t actually seen that much economic progress; it’s not broad-based at all. In fact, you could argue that the current uncertainty that everyone feels about the economy and jobs disappearing is actually the real scorecard of what the quest for AGI has brought us. 

And then DeepSeek… the fundamental lesson of DeepSeek was that none of this is actually necessary. I know that there’s a lot of controversy around whether they distilled OpenAI’s models or actually spent the amount that they said they did. But OpenAI could have distilled their own models. Why didn’t they distill their models? None of this was necessary. They do not need to build $500 billion of infrastructure. They could have spent more time innovating on more efficient ways of reaching the same level of performance in their technologies. But they didn’t, because they haven’t had the pressure to do so with the sheer amount of resources that they can get access to through Altman’s once-in-a-generation fundraising capabilities.

What do you hope readers will take away from this book?

The story of the empire of AI is so deeply connected to what’s happening right now with the Trump Administration and DOGE and the complete collapse of democratic norms in the U.S., because this is what happens when you allow certain individuals to consolidate so much wealth, so much power, that they can basically just manipulate democracy. 

AI is just the latest vehicle by which that is happening, and democracy is not inevitable. If we want to preserve our democracy, we need to fight like hell to protect it and recognize that the way Silicon Valley is currently talking about weaponizing AI as a sort of a narrative for the future is actually cloaking this massive acceleration of the erosion of democracy and reversal of democracy. 

Empire of AI will be published by Penguin Random House on Tuesday, May 20. You can purchase the book through Penguin, Amazon, Bookshop.org, and other retailers.


Editor’s Note: This conversation has been edited for clarity and grammar.

Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
