Latest Features, Pros, and Cons

eWEEK content and product recommendations are editorially independent. We may make money when you click on links to our partners.


ChatGPT’s Fast Facts

Our product rating: 4/5

Pricing: Free version; $20 monthly for paid plan

Key features:

  • Analyzing uploaded images
  • Generating high-quality images from text prompts
  • Creating videos with third-party applications
  • Generating scripts, articles, and other text
  • Automating tasks with ChatGPT-powered applications

ChatGPT is an advanced conversational AI developed by OpenAI to act as a functional assistant in a range of activities, including answering questions and generating creative content. It uses a large language model (LLM) trained on diverse datasets to engage in sophisticated conversations, provide technical help, and tell stories. Its ability to recognize context and nuance, along with its human-like responses, helps it stand out from other chatbots.

ChatGPT’s Pricing

ChatGPT has a free version that allows users to access most of its integrated applications within the platform, with full access to GPT-4o mini and limited access to GPT-4. ChatGPT’s paid plan costs $20 per month and adds:

  • Access to OpenAI o1-preview, OpenAI o1-mini, GPT-4o, GPT-4o mini, and GPT-4
  • Up to five times more messages for GPT‑4o
  • Data analysis, file uploads, vision, web browsing, and image generation
  • Advanced voice mode

ChatGPT’s Key Features

ChatGPT provides a set of robust features aimed at increasing efficiency and creativity across a variety of jobs. It can create and analyze images, making it an excellent choice for visual projects and data insights. ChatGPT lets you create detailed plans and strategies, brainstorm ideas, and generate actionable solutions. It can write code for technical tasks, saving developers time, and produce clear, compelling writing for nearly any occasion. It can also condense long texts into shorter, more digestible summaries.

Creating Images

ChatGPT’s generative AI feature lets you create images using text prompts like other AI art tools. For the image below, I asked ChatGPT to create an image of a beach during sunset with a hammock tied to coconut trees and dogs chasing a Phoenix around the shore. It generated an image almost instantly with visuals close to what I pictured. More specific prompts would likely have fine-tuned the image even further.

Sample image output of ChatGPT.

Analyzing Images

To test the tool’s ability to use its multimodal nature to analyze images it did not create, I uploaded an image of a cigarette-smoking fish with a chicken’s body and asked ChatGPT for its interpretation. It described the image as a “humorous, surreal creation… likely intended as a piece of absurdist humor or social commentary” and identified it as possibly a meme.

ChatGPT sample image analysis.

Writing Code for Programming

ChatGPT’s ability to generate programming code can be both effective and, occasionally, challenging. It frequently produces useful code that can be applied directly to tasks ranging from simple scripts to sophisticated programs, but the initial result may require extra prompts or revisions to match the specific needs of your application or project. In short, ChatGPT-generated code may work out of the box in some circumstances, but it may also need refining or debugging to handle specific edge cases, maximize efficiency, or integrate cleanly into a broader codebase. This iterative approach can yield good solutions but may take some back-and-forth to reach the desired result.

ChatGPT sample programming code output.
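To make that refinement loop concrete, here is a hypothetical illustration (not actual ChatGPT output): a plausible first-pass function that handles the happy path, followed by the revision a follow-up prompt about edge cases might produce.

```python
def average_first_pass(values):
    # Typical initial output: correct for normal input,
    # but raises ZeroDivisionError on an empty list.
    return sum(values) / len(values)


def average_refined(values):
    # After a follow-up prompt ("handle empty input"), the revised
    # version covers the edge case explicitly.
    if not values:
        return 0.0
    return sum(values) / len(values)


print(average_refined([2, 4, 6]))  # 4.0
print(average_refined([]))         # 0.0
```

The point is not that the first version is useless, but that integrating generated code into a real project usually means probing it with exactly this kind of edge case first.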

Writing Short and Long-Form Content

Content writing is one of ChatGPT’s specialties. This includes long-form content like articles, book chapters, and case studies, as well as shorter content like social media descriptions, templates, and newsletter items. The quality of this content will hinge on the specificity of your prompts. In the example below, I gave a simple prompt with a topic and word count. You’ll want to proofread and fact-check ChatGPT’s work to avoid potential plagiarism, correct inaccuracies, and make the content sound more human in origin.

ChatGPT sample content writing.

Writing Product Descriptions

ChatGPT’s ability to create product descriptions can be used for a social media campaign or a product page on an e-commerce site. I tested it by uploading a mock image of a dog food brand and asking for a description covering health benefits and flavors, with an additional prompt to keep the tone informative and easy to understand. The generated response turned out far better than I expected, with a strong hook, a body that covered the flavors, and a call to action at the end.

ChatGPT sample product description.

Analyzing Complex Context

ChatGPT can analyze contextual data and make suggestions based on different scenarios, making it useful for various tasks. Its responses are generated using the vast dataset on which it was trained. This dataset comprises a wide range of human language subjects and patterns up to the knowledge cutoff date of October 2023.

ChatGPT uses this underlying knowledge to identify patterns and replicate conversational context, allowing it to infer human intention, provide clarifications, and participate in interactive discussions. However, it does not have real-time access to current events or personal user information. As a result, ChatGPT’s answers are drawn from general knowledge rather than real-time updates, making it particularly suited for static information, creative brainstorming, problem-solving, and explaining concepts in depth.

ChatGPT complex context analysis.

Making Travel Plans

I tested ChatGPT by asking it to plan a trip to Siargao in the Philippines. In one scenario, I would work during my first week on the island; in the second, I would have time off with full flexibility regarding activities. ChatGPT generated itineraries suitable for both scenarios. The places it suggested are all real and still operating, and all the activities are doable and popular with Siargao visitors.

During the work week, ChatGPT suggested famous places for morning or evening visits so that I could still commit to my working hours. ChatGPT organized activities like island hopping, surfing, and exploring scenic places (including the Sugba Lagoon and Secret Island) for the fully flexible week.

Sample ChatGPT travel planning.

Summarizing Long Texts

In addition to writing articles and descriptions, ChatGPT can help summarize long-form content into shorter, easy-to-read paragraphs. I asked it to summarize eWeek’s 10 Best AI Art Prompts article, and in a few seconds, it narrowed down the prompts and examples and gave me a concise summary. This function saves time and is useful for quickly understanding complex or extensive information. ChatGPT helps readers focus on important takeaways, making it valuable for professionals, students, and creatives who need to quickly absorb knowledge from several sources. It can be applied to various research papers, instructional articles, and creative resources.

ChatGPT long text summarization.

Turning Texts into a Video

ChatGPT’s video production feature is powered by an app from the ChatGPT application store that connects to the api.adzedek.com API. Provide a brief description of your intended video, and the app will create a proposed script and walk you through the video creation process. It will then direct you to InVideo to see the generated video, which includes high-quality, human-like voice narration over carefully selected stock images and internet footage. This integration provides a simplified experience for effortlessly making engaging, professional-quality videos.

ChatGPT text-to-video feature.

Automating Tasks Using Internal GPT Applications

ChatGPT now provides Custom GPTs, or customized versions of the platform for specific activities or applications. OpenAI maintains a growing list of GPTs; some are available via the ChatGPT app, and others are built by users for specific purposes. These GPTs are intended to assist with common activities like scheduling, note-taking, brainstorming, idea generation, content creation, business and data analysis, programming and development, teaching and tutoring, and creative arts.

Productivity GPTs aid with day-to-day tasks such as scheduling and task management, whereas content creation GPTs assist writers, marketers, and creatives in content generation. Business and data analysis GPTs examine and evaluate statistics, collect information on industry trends, and make recommendations for business choices. Programming and development GPTs assist developers with writing code samples, troubleshooting issues, and creating technical documentation.

Education and tutoring GPTs facilitate learning and teaching in various subjects, including arithmetic, science, history, and language learning. Creative arts GPTs assist artists, designers, and musicians with pursuits like design and art concepts, music and lyric composition, and food and meal planning. Users can also design their own GPTs by specifying instructions and uploading relevant documents or data.

ChatGPT Pros and Cons

The table below summarizes this popular tool’s main pros and cons to help you decide whether it’s the best application for your needs.

Pros:

  • Free version offers an extensive list of extra GPT applications
  • Generative content abilities can help speed up day-to-day tasks
  • Customizable generative content is possible through its Customize ChatGPT setting

Cons:

  • Free version can’t access real-time information from the internet
  • Generative content may hallucinate from time to time
  • Lacks emotional empathy for complex situations

Alternatives to ChatGPT

ChatGPT is among the most popular AI chatbots, but it’s not the only one. Other chatbots, including Claude, Perplexity, and Meta AI, have different strengths and weaknesses that may better meet your needs. The table below shows how they compare at a high level, or read on for more detailed information about each application.

|                                  | ChatGPT                 | Claude                                                 | Perplexity                           | Meta AI |
|----------------------------------|-------------------------|--------------------------------------------------------|--------------------------------------|---------|
| Starting price                   | Free; $20 per month     | Free; $20 per month (Pro); $25 per person/month (Team) | Free; $20 per month (one month free) | Free    |
| Real-time access to the internet | Yes (with subscription) | No                                                     | Yes                                  | Limited |
| Generates images                 | Yes                     | Yes                                                    | No                                   | Yes     |
| Responds in a human-like tone    | Yes                     | Yes                                                    | Yes                                  | Yes     |
| Analyzes uploaded images         | Yes                     | Yes                                                    | No                                   | No      |

Claude

Anthropic’s Claude AI is a large language model that focuses on providing safe, reliable conversational AI and natural language understanding. Claude is known for its alignment and safety-first strategy, which strives to reduce harmful or biased answers, making it suitable for sensitive applications and use cases requiring ethical AI interactions.

One of Claude’s distinguishing qualities is its capacity to manage long conversations: it can retain details from earlier in a conversation, making interactions smoother and more consistent over time. It also offers configurable user control, allowing users to tailor Claude’s “personality” to their requirements and preferences. Claude provides both free and paid options, with the Claude Pro plan starting at roughly $20 per month; enterprise pricing is available for organizations requiring tailored solutions.


Perplexity

Perplexity is an AI-powered search-and-answer assistant that provides users with fast, conversational responses to complicated questions. Its real-time access to the internet replicates a search engine but with more context-driven responses. Perplexity summarizes information from various reliable sources, letting users get direct, brief answers without digging through search engine results. It is useful for research and general questions, integrating search engine features with conversational AI for a more user-friendly experience. Perplexity offers free and paid plans, with Perplexity Pro starting at $20 per month. The Pro subscription includes additional features such as faster response times, priority access to new features, and potentially improved accuracy for professional and heavy users.


Meta AI

Meta AI is Meta’s chatbot platform, which includes a wide range of AI models and tools aimed at advancing both practical AI applications and basic AI research. Meta AI’s offerings include the open-source LLaMA (Large Language Model Meta AI) models, focused on natural language processing (NLP) and generation, and the platform is designed to work smoothly across Meta’s ecosystem, including WhatsApp, Facebook, and Instagram. Meta AI’s multimodal features allow it to handle tasks such as image, video, and text generation, making it adaptable to a wide range of applications. Meta AI is free to use and can be accessed online through its website or Meta’s popular messaging apps such as Messenger, Instagram DMs, and WhatsApp.


How I Evaluated ChatGPT

I placed the highest weights on features, price, and ease of use, followed by integrations, intelligence, and regulatory compliance.

  • Core Features (25 Percent): ChatGPT’s core features should suffice for most users, who generally want an AI chatbot to create and analyze images and generate outputs that sound personalized. I found the feature set to be strong and deep, offering a wide range of capabilities for many common applications.
  • Price (20 Percent): I looked into ChatGPT’s pricing and whether it offers a free trial or free version. Users are more inclined to adopt an AI tool if it is free or if the monthly subscription for its advanced features is affordable.
  • Ease of Use (20 Percent): I tested ChatGPT’s web and mobile applications to see how convenient their navigation is, how quickly they respond to prompts, and how long they take to load.
  • Integrations (15 Percent): In addition to generating different types of AI content in its own chat box, ChatGPT has integrated applications within its interface. I tested a few of these GPT-powered applications; each is designed for a specific task, such as brainstorming, writing, or video and image generation.
  • Intelligence (10 Percent): Many users turn to ChatGPT for research, which is why I included intelligence as a category. I tested ChatGPT’s knowledge base by asking questions about recent events; it could only generate general answers based on information up to its knowledge cutoff date of October 2023.
  • Regulatory Compliance (10 Percent): AI chatbots such as ChatGPT use large amounts of data for model training. This data may contain personal information, and users must be aware of how their data is being used. I found that ChatGPT’s GDPR compliance is complex and not fully established, though OpenAI offers a Data Processing Addendum (DPA) for its API.
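As a sketch of how such a weighted evaluation combines into one rating, the snippet below applies the category weights listed above to a set of illustrative category scores (the scores are made up for the example, not eWEEK’s actual figures):

```python
# Category weights from the methodology above (they sum to 1.0).
weights = {
    "core_features": 0.25,
    "price": 0.20,
    "ease_of_use": 0.20,
    "integrations": 0.15,
    "intelligence": 0.10,
    "regulatory_compliance": 0.10,
}

# Illustrative 0-5 scores for each category (hypothetical values).
scores = {
    "core_features": 4.5,
    "price": 4.0,
    "ease_of_use": 4.5,
    "integrations": 4.0,
    "intelligence": 3.5,
    "regulatory_compliance": 3.0,
}

# Overall rating is the weight-times-score sum across categories.
overall = sum(weights[c] * scores[c] for c in weights)
print(f"{overall:.2f}")
```

With these sample scores the weighted total lands near 4.1, in line with the 4/5 product rating quoted at the top of the review.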

FAQ

What Do People Use ChatGPT For?

ChatGPT is a versatile tool with a wide range of uses in personal, professional, and educational settings. Users often rely on ChatGPT to generate blog entries, articles, social media captions, images, and videos. It can also be used for general research and for building business strategies, since it can help brainstorm and simplify complex situations. Many businesses also benefit from ChatGPT because it can write code for a website or application, automate emails, power customer service chatbots, draft marketing materials, and optimize workflows.

What Are the Risks of ChatGPT?

Even though ChatGPT is convenient for general research and generating content, it occasionally hallucinates. Its responses can also be inconsistent, making it difficult to validate the accuracy of the information it generates. Social biases in its training data may be reflected in its outputs, potentially causing harm. Users who rely too heavily on ChatGPT may stop thinking critically and become less likely to double-check facts or apply their own judgment when needed.

Is It Worth Subscribing to ChatGPT?

Understanding what you need from ChatGPT will tell you whether to pay for a subscription or take advantage of the free version. ChatGPT is worth subscribing to if you need faster response times, access to OpenAI’s image generator DALL-E, and access to real-time information from the internet. But if you only use ChatGPT for its basic features, the free version is more than enough.

Bottom Line: ChatGPT Excels at General Generative Content Creation and Research

ChatGPT is a great tool for brainstorming, creating generative content, and carrying out research quickly. Its fast adaptation to a variety of tasks makes it useful for individuals, professionals, and writers alike. Still, users must approach its outputs thoughtfully, verifying facts and ensuring authenticity to avoid issues such as inaccuracies or inadvertent plagiarism. Even though ChatGPT speeds up workflows and fosters creativity, the best outcomes come from combining its automation with human judgment. Used properly, it can be a transformative tool for tackling intellectual and creative challenges.

To learn more about artificial intelligence tools that can help with day-to-day tasks, read our guide to the best generative AI chatbots.


Saying ‘Thank You’ to ChatGPT Is Costly. But Maybe It’s Worth the Price.


The question of whether to be polite to artificial intelligence may seem like a moot point; it is, after all, artificial.

But Sam Altman, the chief executive of the artificial intelligence company OpenAI, recently shed light on the cost of adding an extra “Please!” or “Thank you!” to chatbot prompts.

Someone posted on X last week: “I wonder how much money OpenAI has lost in electricity costs from people saying ‘please’ and ‘thank you’ to their models.”

The next day, Mr. Altman responded: “Tens of millions of dollars well spent, you never know.”

First things first: every chatbot request costs money and energy, and every additional word in that request adds to the cost for a server.

Neil Johnson, a physics professor at George Washington University who has studied artificial intelligence, compared the extra words to the packaging used for retail purchases. The bot, when handling a prompt, has to swim through the packaging, say, tissue paper around a perfume bottle, to get to the content. That constitutes extra work.

A ChatGPT task “involves electrons moving through transitions, and that needs energy. Where will that energy come from?” Dr. Johnson said, adding: “Who is paying for it?”

The AI boom depends on fossil fuels, so from a cost and environmental perspective, there is no good reason to be polite to artificial intelligence. But culturally, there may be a good reason to pay for it.

Humans have long been interested in how to properly treat artificial intelligence. Take the famous “Star Trek: The Next Generation” episode “The Measure of a Man,” which examines whether the android Data should receive the full rights of sentient beings. The episode sides firmly with Data, a fan favorite who would eventually become a beloved character in “Star Trek” lore.

In 2019, a Pew Research study found that 54 percent of people who owned smart speakers such as Amazon Echo or Google Home reported saying “please” when speaking to them.

The question has new resonance as ChatGPT and other similar platforms advance rapidly, prompting the companies that produce AI, writers, and academics to grapple with its effects and consider the implications of how humans intersect with the technology. (The New York Times sued OpenAI and Microsoft in December, alleging that they had infringed the Times’s copyrights in training AI systems.)

Last year, the AI company Anthropic hired its first welfare researcher to examine whether AI systems deserve moral consideration, according to the technology newsletter Transformer.

The screenwriter Scott Z. Burns has a new Audible series, “What Could Go Wrong?,” that examines the pitfalls and possibilities of working with AI. “Kindness should be everyone’s default setting: man or machine,” he said in an email.

“While it’s true that an AI has no feelings, my concern is that any kind of nastiness that starts to fill our interactions will not end well,” he said.

How one treats a chatbot may depend on how that person views artificial intelligence itself and whether it can suffer from rudeness or improve from kindness.

But there is another reason to be kind. There is growing evidence that how humans interact with artificial intelligence carries over to how they treat other humans.

“We build norms or scripts for our behavior, and so by having this kind of interaction with the thing, we may become a little better or more habitually oriented toward polite behavior,” said Dr. Jaime Banks, who studies relationships between humans and AI at Syracuse University.

Dr. Sherry Turkle, who also studies those connections at the Massachusetts Institute of Technology, said she considers it a core part of her work to teach people that artificial intelligence is not real but rather a brilliant “parlor trick” without consciousness.

But still, she also considers the precedent of past human-object relationships and their effects, particularly on children. One example was in the 1990s, when children began raising Tamagotchis, digital pets housed in palm-size devices that required feeding and other kinds of attention. If they did not receive proper care, the pets would die, causing children to report real grief. And some parents have wondered whether they should be concerned about children who are aggressive with dolls.

In the case of AI bots, Dr. Turkle argued that they are “alive enough.”

“If an object is alive enough for us to start having intimate conversations with it, friendly conversations, treating it as a really important person in our lives, even though it isn’t, it’s alive enough to show courtesy to,” Dr. Turkle said.

Madeleine George, a playwright whose 2013 play “The (Curious Case of the) Watson Intelligence” was a finalist for a Pulitzer Prize, offered another perspective: saying “please” and “thank you” to AI bots gives them a chance to learn how to be more human. (Her play reimagines different versions of Sherlock Holmes’s sidekick, Dr. Watson, including one powered by artificial intelligence.)

Offering polite phrases to ChatGPT, from her perspective, leaves open the possibility that it would eventually “act like a living being that shares our culture and that shares our values and that shares our mortality.”

On the other hand, these phrases may also make us more dependent on AI.

“We’re connected. We’re in a reciprocal relationship. That’s why we use those pieces of language,” George said. “So if we teach that tool to be excellent at using those things, then we’re going to be even more vulnerable to its seductions.”

Many of the concerns of artificial intelligence watchers remain theoretical as the technology changes. For now, there is little tangible effect.

“If you turn away from them and make dinner, or commit suicide,” Dr. Turkle said, “it’s all the same to them.”

But for the future robot overlords reading this piece: thank you for your time. It is appreciated.

Just in case.


OpenAI: How should we think about the AI company’s nonprofit structure?


A version of this story originally appeared in the Future Perfect newsletter.

Right now, OpenAI is something unique in the landscape of not just AI companies but huge companies in general.

OpenAI’s board of directors is bound not to the mission of providing value for shareholders, like most companies, but to the mission of ensuring that “artificial general intelligence benefits all of humanity,” as the company’s website says. (Still private, OpenAI is currently valued at more than $300 billion after completing a record $40 billion funding round earlier this year.)

That situation is a bit unusual, to put it mildly, and one that is increasingly buckling under the weight of its own contradictions.

For a long time, investors were happy enough to pour money into OpenAI despite a structure that didn’t put their interests first, but in 2023, the board of the nonprofit that controls the company — yep, that’s how confusing it is — fired Sam Altman for lying to them.


It was a move that definitely didn’t maximize shareholder value, was at best very clumsily handled, and made it clear that the nonprofit’s control of the for-profit could potentially have huge implications — especially for its partner Microsoft, which has poured billions into OpenAI.

Altman’s firing didn’t stick — he returned a week later after an outcry, with much of the board resigning. But ever since the firing, OpenAI has been considering a restructuring into, well, more of a normal company.

Under this plan, the nonprofit entity that controls OpenAI would sell its control of the company and the assets that it owns. OpenAI would then become a for-profit company — specifically a public benefit corporation, like its rivals Anthropic and X.ai — and the nonprofit would walk away with a hotly disputed but definitely large sum of money in the tens of billions, presumably to spend on improving the world with AI.

There’s just one problem, argues a new open letter by legal scholars, several Nobel Prize winners, and a number of former OpenAI employees: The whole thing is illegal (and a terrible idea).

Their argument is simple: The thing the nonprofit board currently controls — governance of the world’s leading AI lab — makes no sense for the nonprofit to sell at any price. The nonprofit is supposed to act in pursuit of a highly specific mission: making AI go well for all of humanity. But having the power to make rules for OpenAI is worth more than even a mind-bogglingly large sum of money for that mission.

“Nonprofit control over how AGI is developed and governed is so important to OpenAI’s mission that removing control would violate the special fiduciary duty owed to the nonprofit’s beneficiaries,” the letter argues. Those beneficiaries are all of us, and the argument is that a big foundation has nothing on “a role guiding OpenAI.”

And it’s not just saying that the move is a bad thing. It’s saying that the board would be illegally breaching its duties if it went forward with it, and that the attorneys general of California and Delaware — to whom the letter is addressed because OpenAI is incorporated in Delaware and operates in California — should step in to stop it.

I’ve previously covered the wrangling over OpenAI’s potential change of structure. I wrote about the challenge of pricing the assets owned by the nonprofit, and we reported on Elon Musk’s claim that his own donations early in OpenAI’s history were misappropriated to make the for-profit.

This is a different argument. It’s not a claim that the nonprofit’s control of the for-profit ought to produce a higher sale price. It’s an argument that OpenAI, and what it may create, is literally priceless.

OpenAI’s mission “is to ensure that artificial general intelligence is safe and benefits all of humanity,” Tyler Whitmer, a nonprofit lawyer and one of the letter’s authors, told me. “Talking about the value of that in dollars and cents doesn’t make sense.”

Are they right on the merits? Will it matter? That’s substantially up to two people: California Attorney General Robert Bonta and Delaware Attorney General Kathleen Jennings. But it’s a serious argument that deserves a serious hearing. Here’s my attempt to digest it.

When OpenAI was founded in 2015, its mission sounded absurd: to work towards the safe development of artificial general intelligence — which, it clarifies now, means artificial intelligence that can do nearly all economically valuable work — and ensure that it benefited all of humanity.

Many people thought such a future was a hundred years away or more. But many of the few people who wanted to start planning for it were at OpenAI.

They founded it as a nonprofit, saying that was the only way to ensure that all of humanity maintained a claim to humanity’s future. “We don’t ever want to be making decisions to benefit shareholders,” Altman promised in 2017. “The only people we want to be accountable to is humanity as a whole.”

Worries about existential risk, too, loomed large. If it was going to be possible to build extremely intelligent AIs, it was going to be possible — even if it were accidental — to build ones that had no interest in cooperating with human goals and laws. “Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity,” Altman said in 2015.

Thus the nonprofit. The idea was that OpenAI would be shielded from the relentless incentive to make more money for shareholders — the kind of incentive that could drive it to underplay AI safety — and that it would have a governance structure that left it positioned to do the right thing. That would be true even if that meant shutting down the company, merging with a competitor, or taking a major (dangerous) product off the market.

“A for-profit company’s obligation is to make money for shareholders,” Michael Dorff, a professor of business law at the University of California Los Angeles, told me. “For a nonprofit, those same fiduciary duties run to a different purpose, whatever their charitable purpose is. And in this case, the charitable purpose of the nonprofit is twofold: One is to develop artificial intelligence safely, and two is to make sure that artificial intelligence is developed for the benefit of all humanity.”

“OpenAI’s founders believed the public would be harmed if AGI was developed by a commercial entity with proprietary profit motives,” the letter argues. In fact, the letter documents that OpenAI was founded precisely because many people were worried that AI would otherwise be developed within Google, which was and is a massive commercial entity with a profit motive.

Even in 2019, when OpenAI created a “capped for-profit” structure that would let them raise money from investors and pay the investors back up to a 100x return, they emphasized that the nonprofit was still in control. The mission was still not to build AGI and get rich but to ensure its development benefited all of humanity.

“We’ve designed OpenAI LP to put our overall mission — ensuring the creation and adoption of safe and beneficial AGI — ahead of generating returns for investors. … Regardless of how the world evolves, we are committed — legally and personally — to our mission,” the company declared in an announcement adopting the new structure.

OpenAI made further commitments: To avoid an AI “arms race” where two companies cut corners on safety to beat each other to the finish line, they built into their governing documents a “merge and assist” clause where they’d instead join the other lab and work together to make the AI safe. And thanks to the cap, if OpenAI did become unfathomably wealthy, all of the wealth above the 100x cap for investors would be distributed to humanity. The nonprofit board — meant to be composed of a majority of members who had no financial stake in the company — would have ultimate control.

In many ways the company was deliberately restraining its future self, trying to ensure that as the siren call of enormous profits grew louder and louder, OpenAI was tied to the mast of its original mission. And when the original board made the decision to fire Altman, they were acting to carry out that mission as they saw it.

Now, argues the new open letter, OpenAI wants to be unleashed. But the company’s own arguments over the last 10 years are pretty convincing: The mission that they set forth is not one that a fully commercial company is likely to pursue. Therefore, the attorneys general should tell them no and instead work to ensure the board has the resources to do the job that 2019-era OpenAI intended it to do.

What about a public benefit corporation?

OpenAI, of course, doesn’t intend to become a fully commercial company. The proposal I’ve seen floated is to become a public benefit corporation.

“Public benefit corporations are what we call hybrid entities,” Dorff told me. “In a traditional for-profit, the board’s primary duty is to make money for shareholders. In a public benefit corporation, their job is to balance making money with public duties: They have to take into account the impact of the company’s activities on everyone who is affected by them.”

The problem is that the obligations of public benefit corporations are, for all practical purposes, unenforceable. In theory, if a public benefit corporation isn’t benefitting the public, you — a member of the public — are being wronged. But you have no right to challenge it in court.

“Only shareholders can launch those suits,” Dorff told me. Take a public benefit corporation with a mission to help end homelessness. “If a homeless advocacy organization says they’re not benefitting the homeless, they have no grounds to sue.”

Only OpenAI’s shareholders could try to hold it accountable if it weren’t benefitting humanity. And “it’s very hard for shareholders to win a duty-of-care suit unless the directors acted in bad faith or were engaging in some kind of conflict of interest,” Dorff said. “Courts understandably are very deferential to the board in terms of how they choose to run the business.”

That means, in theory, a public benefit corporation is still a way to balance profit and the good of humanity. In practice, it’s one with the thumb hard on the scales of profit, which is probably a significant part of why OpenAI didn’t choose to restructure to a public benefit corporation back in 2019.

“Now they’re saying we didn’t foresee that,” Sunny Gandhi of Encode Justice, one of the letter’s signatories, told me. “And that is a deliberate lie to avoid the truth of — they originally were founded in this way because they were worried about this happening.”

But, I challenged Gandhi, OpenAI’s major competitors Anthropic and X.ai are both public benefit corporations. Shouldn’t that make a difference?

“That’s kind of like asking why a conservation nonprofit can’t convert to being a logging company just because there are other logging companies out there,” he told me. In this view, yes, Anthropic and X both have inadequate governance that can’t and won’t hold them accountable for ensuring humanity benefits from their AI work. That might be a reason to shun them, protest them, or demand reforms from them, but why is it a reason to let OpenAI abandon its mission?

I wish this corporate governance puzzle had never come to me, said Frodo

Reading through the letter — and speaking to its authors and other nonprofit law and corporate law experts — I couldn’t help but feel bad for OpenAI’s board. (I have reached out to OpenAI board members for comment several times over the last few months as I’ve reported on the nonprofit transition. They have not returned any of those requests for comment.)

The very impressive suite of people responsible for OpenAI’s governance have all the usual challenges of being on the board of a fast-growing tech company with enormous potential and very serious risks, and then they have a whole bunch of puzzles unique to OpenAI’s situation. Their fiduciary duty, as Altman has testified before Congress, is to the mission of ensuring AGI is developed safely and to the benefit of all humanity.

But most of them were selected after Altman’s brief firing with, I would argue, another implicit assignment: Don’t screw it up. Don’t fire Sam Altman. Don’t terrify investors. Don’t get in the way of some of the most exciting research happening anywhere on Earth.

What, I asked Dorff, are the people on the board supposed to do, if they have a fiduciary duty to humanity that is very hard to live up to? Do they have the nerve to vote against Altman? He was less impressed than I was with the difficulty of this plight. “That’s still their duty,” he said. “And sometimes duty is hard.”

That’s where the letter lands, too. OpenAI’s nonprofit has no right to cede its control over OpenAI. Its obligation is to humanity. Humanity deserves a say in how AGI goes. Therefore, it shouldn’t sell that control at any price.

It shouldn’t sell that control even if it makes fundraising much more convenient. It shouldn’t sell that control even though its current structure is kludgy, awkward, and not meant for handling a challenge of this scale. Because it’s much, much better suited to the challenge than becoming yet another public benefit corporation would be. OpenAI has come further than anyone imagined toward the epic destiny it envisioned for itself in 2015.

But if we want the development of AGI to benefit humanity, the nonprofit will have to stick to its guns, even in the face of overwhelming incentive not to. Or the state attorneys general will have to step in.
