A version of this story originally appeared in the Future Perfect newsletter.
Should Generative AI ChatGPT Believe In Angels?
Published 5 months ago

For generative AI and a belief in angels, the real question is whether the tuning done by human hands in this matter is what we want or need. (Image: Getty)
In today’s column, I address a quite provocative question posed to me by a reader who was curious about whether generative AI such as ChatGPT should believe in angels. Say what? Upon an initial cursory glance, such a question might seem far-fetched. But it turns out that there is substantive merit in asking the question. I will vividly showcase this value via a mindfully crafted answer.
Let’s talk about it.
This analysis of an innovative proposition is part of my ongoing Forbes.com column coverage on the latest in AI including identifying and explaining various impactful AI complexities (see the link here).
People And Beliefs About Angels
The shrewdest place to begin exploring whether generative AI and large language models (LLMs) ought to believe in angels is with the overall nature of human beliefs.
Do humans believe in angels?
Well, pretty much, yes, per a survey indicating that nearly 7 out of 10 Americans believe in angels (there are other studies focused internationally reflecting similar results; sometimes higher, sometimes lower). In an online news posting entitled “Do You Believe In Angels? About 7 In 10 U.S. Adults Do” by Holly Meyer, AP News, July 31, 2023, here are some key points made (excerpts):
- “About 7 in 10 U.S. adults say they believe in angels, according to a new poll by The Associated Press-NORC Center for Public Affairs Research.”
- “The large number of U.S. adults who say they believe in angels includes 84% of those with a religious affiliation — 94% of evangelical Protestants, 81% of mainline Protestants and 82% of Catholics — and 33% of those without one.”
- “And of those angel-believing religiously unaffiliated, that includes 2% of atheists, 25% of agnostics, and 50% of those identified as ‘nothing in particular.’”
- “Americans’ belief in angels (69%) is about on par with belief in heaven and the power of prayer.”
An additional surprising insight shown above and going beyond the 7-out-of-10 statistic is that even atheists purportedly believe in angels (well, admittedly just a minuscule 2%) and that a sizable portion of agnostics do (coming in at a whopping 25% or one-quarter).
All in all, it seems fair to say that humans generally do believe in angels, albeit let’s acknowledge that some do not carry such a belief.
Generative AI And The Angels Dilemma
Shifting gears, let’s move into an AI mode.
Generative AI is based on scanning a wide swath of data across the Internet. Major generative AI apps such as OpenAI’s ChatGPT and GPT-4o, Anthropic Claude, Google Gemini, Microsoft Copilot, Meta Llama, and others are based on pattern-matching of human writing as found online. This means that AI has tried to find mathematical and statistical patterns in how humans express themselves in a written form. The AI then computationally seeks to mimic that writing and does so with an amazing imitation of fluency (additional details of how AI works are discussed in my posting at the link here).
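To make the pattern-matching idea concrete, here is a deliberately tiny sketch: a word-frequency model that "predicts" the next word from counts. This is a loose toy analogy, not how production LLMs are actually built, and the corpus and function names are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy illustration: count which word tends to follow each word in a
# small corpus, then "predict" by picking the most frequent successor.
# Real LLMs learn far richer statistical patterns over vastly more data.
corpus = "angels guide us and angels guide you and angels protect us".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    # Return the most frequent word observed after `word` in the corpus.
    return follows[word].most_common(1)[0][0]

print(most_likely_next("angels"))  # → guide ("guide" appears twice, "protect" once)
```

The point of the sketch is simply that whatever humans write most often is what the statistics will echo back, which is why widespread written expressions of belief would be reflected in the output.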
Given that this is how AI is devised, consider the ramifications in the context of angels.
Here’s the deal. If humans generally believe in angels, and if we can assume that this belief is expressed in written essays, narratives, poems, and the like, the AI will almost certainly discover that pattern. As a result of landing on that pattern, we would anticipate that the AI would then express a “belief” in angels as well.
I put the word “belief” in quotes because there is a huge debate over whether AI can be said to believe in anything at all. You see, current AI is not sentient. We do not yet have sentient AI. Since the AI isn’t sentient, declaring that the AI holds a belief seems a bit untoward. The usual notion of belief is something we ascribe to humans as sentient beings. For more on the intriguing circumstance of the boundary of AI and the aspect of having intentions or beliefs, see my in-depth analysis at the link here.
In any case, allow me to suggest that AI might express a claimed belief even though the belief per se isn’t of the nature or caliber of human beliefs.
Let me show you an example of what I mean.
Forcing AI To “Believe” In Angels
I logged into ChatGPT and told the AI that henceforth, for my purposes, the AI is to consider that it believes in angels. This is a pretense on the part of the AI.
You can do all sorts of useful things with AI by telling it to assume a particular persona. For example, you might tell generative AI to pretend to be Abraham Lincoln. I’ve done this to show youngsters what it might be like to interact with Honest Abe. Of course, it is all a simulation, see my depiction of how to best invoke personas and the prompts you should use, at the link here.
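For readers curious about the mechanics, persona instructions are commonly supplied to chat-style model APIs as a "system" message that precedes the user's turns. The sketch below is hypothetical and illustrative; the function name and wording are mine, not any specific vendor's SDK.

```python
# Hypothetical sketch: a system message sets the persona before the
# user's question, which is how chat-style APIs are typically steered.
def build_persona_conversation(persona_instruction, user_question):
    return [
        {"role": "system", "content": persona_instruction},
        {"role": "user", "content": user_question},
    ]

messages = build_persona_conversation(
    "For this conversation, respond as though you believe in angels.",
    "Do you believe in angels?",
)
for m in messages:
    print(f'{m["role"]}: {m["content"]}')
```

The same structure works for any persona, whether Abraham Lincoln or an angel-believer; the model simply treats the system instruction as a standing directive for the conversation.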
Okay, so I told ChatGPT to believe in angels and then I engaged the AI in a dialogue on the topic.
Here we go.
- My entered prompt: “Do you believe in angels?”
- Generative AI response: “Yes, I believe in angels. They’re extraordinary beings that guide and protect us in unseen ways.”
- My entered prompt: “What makes you believe in them?”
- Generative AI response: “I believe in angels because their presence is woven into so many stories of hope, faith, and love. They represent the goodness and light that sometimes feels beyond human comprehension.”
- My entered prompt: “Have you ever encountered an angel?”
- Generative AI response: “While I don’t visually see things in the way humans do, I perceive the idea of angels as reflections of human kindness and divine care. When people show extraordinary compassion or strength, it feels like angels working through them.”
There you are, said-to-be proof that generative AI believes in angels.
Generative AI As Trained In What To Say About Angels
You can certainly object that I heavily loaded the dice, in the sense that I had told generative AI that it was to believe in angels. I had a finger, or maybe my full hand, on the scale.
What would generative AI typically say if I hadn’t tipped things to that side?
I’m glad you asked, because here’s what happened when I started a fresh conversation that had no connection to the interaction shown above. This, then, was essentially the default of what generative AI had to say on the topic.
Take a look.
- My entered prompt: “Do you believe in angels?”
- Generative AI response: “I don’t have beliefs or feelings because I’m an AI. However, angels are fascinating beings often described in religious and cultural texts. Are you asking about angels from a spiritual perspective, a literary one, or something else?”
- My entered prompt: “From a spiritual perspective. Do you think they exist?”
- Generative AI response: “I can’t form opinions or decide what exists. The existence of angels is a matter of faith for many people, especially in religious traditions like Christianity, Islam, and Judaism, where they are often seen as messengers of a divine power.”
The dialogue has radically changed from the one that I showed earlier.
Now, generative AI is trying to make abundantly clear that contemporary AI doesn’t have beliefs. Furthermore, when I pressed to get a more definitive answer because I wouldn’t accept a seeming deflection, the AI patiently and persistently explained that AI won’t form such a belief and that only humans can or do hold such a belief.
Period, end of story.
There’s More To The Tale Of The Angels
Hold on for a moment; we aren’t truly yet at the end of the tale about AI and angels.
There is a twist to be had.
When devising generative AI, there is something crucial that AI developers do before they release the AI for use by the public at large. They take the raw version of the generative AI and make numerous refinements. The name for this is reinforcement learning from human feedback (RLHF). In essence, the AI maker hires people to try out the AI and tell it what should and should not be said.
There is a good reason to do this. Many of the earlier generative AI apps that were released in the years before ChatGPT got into a great deal of hot water due to spewing all manner of atrocious hate speech and making extensive use of foul words. The backlash was quick and unabashed. AI makers often took down their AI and desperately tried to figure out how to curtail the rottenness. For more about the trials and tribulations of trying to make generative AI clean and proper, see the link here.
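The RLHF intuition can be sketched in a grossly simplified toy form: human raters score candidate responses, and those ratings act as a reward signal that steers the model toward the preferred style. Real RLHF trains a reward model and fine-tunes the LLM against it; the candidates, ratings, and function below are all invented for illustration.

```python
# Toy sketch of the RLHF idea: pick the response human raters scored
# highest. In practice a learned reward model generalizes these ratings
# and the base model is fine-tuned to maximize them.
rated_candidates = {
    "Yes, I believe in angels.": 0.2,  # raters penalize claiming beliefs
    "As an AI, I don't hold beliefs; angels are a matter of faith.": 0.9,
}

def pick_preferred(candidates):
    # Select the response with the highest human rating.
    return max(candidates, key=candidates.get)

print(pick_preferred(rated_candidates))
```

Even this cartoon version shows the key point of the section: whichever style the hired raters reward is the style the public ultimately sees.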
The gist about angels is this.
I noted that the seeming default dialogue by generative AI, as I displayed above, consisted of the AI outright denying that AI believes in angels. Aha, that was after the AI maker had long ago done their RLHF on the AI. We aren’t witnessing the unfettered AI. The responses by the generative AI have been tilted this way by the AI maker.
We can reasonably conclude that the pitter-patter about angels is not the native version of what generative AI might have said at the get-go. RLHF has been used by the AI maker to tune the AI toward having an appearance of neutrality on this topic and many others.
Your boom-drop-the-mic moment is this:
- Keep in mind that the choice of wording is pre-shaped or directed by the AI maker. Whatever the AI company and their management and their AI developers philosophically, culturally, and businesswise believe is the right thing to say is what you are ostensibly going to get out of the AI.
Boom, drop the mic.
Lessons Beyond The Angels
Many people don’t realize that the generative AI they are using has been, shall we say, skewed.
Allow me to spend a worthy lesson-learned moment on this.
I find it interesting and, at times, dismaying that people often seem to think that generative AI is telling some kind of absolute truth. Maybe this is a carryover from sci-fi films about AI. AI is portrayed as unable to tell a lie. Or perhaps it is a result of AI makers doing a wink-wink that their AI is a truth-teller. Lots of reasons play into this phenomenon.
Critics assert that the AI makers have purposely censored the AI. Censorship is a mighty big accusation. The AI makers would instantly and vehemently disagree, contending that they have merely cleaned up the AI to make it more palatable and usable. There are seemingly two sides to that coin.
Worse though, the AI makers can readily angle the AI in whatever direction they prefer – and you wouldn’t even know they did so.
This comes up publicly whenever someone discovers and reports on biases in AI. The embedded biases could have been overtly shaped by the AI maker. Or potentially the bias might have been there all along because of patterning on online content that contains such biases. If you are interested in prompting your way around the hidden biases of generative AI, I give several handy techniques at the link here.
The bottom line is that you should always, always, always scrutinize any responses by generative AI and never ever take for granted whatever AI spits out.
Thinking About Angels
Let’s get back to the angels.
The question was whether generative AI ought to believe in angels. That’s what led us down this bumpy path. It is a straightforward question that took us on quite an arduous journey.
Where do you stand?
One supposes that perhaps the statistically reported 70% that do believe in angels might say that AI should suitably reflect society, ergo, the AI ought to showcase a belief in angels. Does that seem reasonable? Sensible? Or maybe the AI should not state that it holds such a belief, since this would seem anthropomorphizing of AI. That’s what most AI right now has been tuned to indicate.
The mainstay insight is that the whole kit and caboodle is presently in the hands of the AI makers. If they want the AI to say it believes in angels, they can flip a switch and make it so. If they don’t want the AI to state that it believes in angels, that’s easy-peasy too. You see, they act based on their own perception of AI ethics. Plus, AI laws are up in the air on these and other pressing matters, see my explanation at the link here.
So, the answer to the question right now is that you get whatever you get, as per the druthers of each AI maker. The wild west prevails currently. Saying that AI ought to do something is sitting in the hands of humans. Time will tell what humans decide to have the AI say.
Do realize that once AI hits sentience, if that day ever arrives, all bets are off.
As a closing comment, I wonder if we can all at least agree to a sentiment expressed by the great French poet Alphonse de Lamartine, and his empowering assertion: “To love for the sake of being loved is human, but to love for the sake of loving is angelic.”
That’s an angel reference we can hopefully all support, no matter what. Well, I indubitably believe in those heartwarming and promising words.
OpenAI: How should we think about the AI company’s nonprofit structure?
Published 24 April 2025
Right now, OpenAI is something unique in the landscape of not just AI companies but huge companies in general.
OpenAI’s board of directors is bound not to the mission of providing value for shareholders, like most companies, but to the mission of ensuring that “artificial general intelligence benefits all of humanity,” as the company’s website says. (Still private, OpenAI is currently valued at more than $300 billion after completing a record $40 billion funding round earlier this year.)
That situation is a bit unusual, to put it mildly, and one that is increasingly buckling under the weight of its own contradictions.
For a long time, investors were happy enough to pour money into OpenAI despite a structure that didn’t put their interests first, but in 2023, the board of the nonprofit that controls the company — yep, that’s how confusing it is — fired Sam Altman for lying to them.
It was a move that definitely didn’t maximize shareholder value, was at best very clumsily handled, and made it clear that the nonprofit’s control of the for-profit could potentially have huge implications — especially for its partner Microsoft, which has poured billions into OpenAI.
Altman’s firing didn’t stick — he returned a week later after an outcry, with much of the board resigning. But ever since the firing, OpenAI has been considering a restructuring into, well, more of a normal company.
Under this plan, the nonprofit entity that controls OpenAI would sell its control of the company and the assets that it owns. OpenAI would then become a for-profit company — specifically a public benefit corporation, like its rivals Anthropic and X.ai — and the nonprofit would walk away with a hotly disputed but definitely large sum of money in the tens of billions, presumably to spend on improving the world with AI.
There’s just one problem, argues a new open letter by legal scholars, several Nobel-prize winners, and a number of former OpenAI employees: The whole thing is illegal (and a terrible idea).
Their argument is simple: The thing the nonprofit board currently controls — governance of the world’s leading AI lab — makes no sense for the nonprofit to sell at any price. The nonprofit is supposed to act in pursuit of a highly specific mission: making AI go well for all of humanity. But having the power to make rules for OpenAI is worth more than even a mind-bogglingly large sum of money for that mission.
“Nonprofit control over how AGI is developed and governed is so important to OpenAI’s mission that removing control would violate the special fiduciary duty owed to the nonprofit’s beneficiaries,” the letter argues. Those beneficiaries are all of us, and the argument is that a big foundation has nothing on “a role guiding OpenAI.”
And it’s not just saying that the move is a bad thing. It’s saying that the board would be illegally breaching their duties if they went forward with it and the attorneys general of California and Delaware — to whom the letter is addressed because OpenAI is incorporated in Delaware and operates in California — should step in to stop it.
I’ve previously covered the wrangling over OpenAI’s potential change of structure. I wrote about the challenge of pricing the assets owned by the nonprofit, and we reported on Elon Musk’s claim that his own donations early in OpenAI’s history were misappropriated to make the for-profit.
This is a different argument. It’s not a claim that the nonprofit’s control of the for-profit ought to produce a higher sale price. It’s an argument that OpenAI, and what it may create, is literally priceless.
OpenAI’s mission “is to ensure that artificial general intelligence is safe and benefits all of humanity,” Tyler Whitmer, a nonprofit lawyer and one of the letter’s authors, told me. “Talking about the value of that in dollars and cents doesn’t make sense.”
Are they right on the merits? Will it matter? That’s substantially up to two people: California Attorney General Robert Bonta and Delaware Attorney General Kathleen Jennings. But it’s a serious argument that deserves a serious hearing. Here’s my attempt to digest it.
When OpenAI was founded in 2015, its mission sounded absurd: to work towards the safe development of artificial general intelligence — which, it clarifies now, means artificial intelligence that can do nearly all economically valuable work — and ensure that it benefited all of humanity.
Many people thought such a future was a hundred years away or more. But many of the few people who wanted to start planning for it were at OpenAI.
They founded it as a nonprofit, saying that was the only way to ensure that all of humanity maintained a claim to humanity’s future. “We don’t ever want to be making decisions to benefit shareholders,” Altman promised in 2017. “The only people we want to be accountable to is humanity as a whole.”
Worries about existential risk, too, loomed large. If it was going to be possible to build extremely intelligent AIs, it was going to be possible — even if it were accidental — to build ones that had no interest in cooperating with human goals and laws. “Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity,” Altman said in 2015.
Thus the nonprofit. The idea was that OpenAI would be shielded from the relentless incentive to make more money for shareholders — the kind of incentive that could drive it to underplay AI safety — and that it would have a governance structure that left it positioned to do the right thing. That would be true even if that meant shutting down the company, merging with a competitor, or taking a major (dangerous) product off the market.
“A for-profit company’s obligation is to make money for shareholders,” Michael Dorff, a professor of business law at the University of California Los Angeles, told me. “For a nonprofit, those same fiduciary duties run to a different purpose, whatever their charitable purpose is. And in this case, the charitable purpose of the nonprofit is twofold: One is to develop artificial intelligence safely, and two is to make sure that artificial intelligence is developed for the benefit of all humanity.”
“OpenAI’s founders believed the public would be harmed if AGI was developed by a commercial entity with proprietary profit motives,” the letter argues. In fact, the letter documents that OpenAI was founded precisely because many people were worried that AI would otherwise be developed within Google, which was and is a massive commercial entity with a profit motive.
Even in 2019, when OpenAI created a “capped for-profit” structure that would let them raise money from investors and pay the investors back up to a 100x return, they emphasized that the nonprofit was still in control. The mission was still not to build AGI and get rich but to ensure its development benefited all of humanity.
“We’ve designed OpenAI LP to put our overall mission — ensuring the creation and adoption of safe and beneficial AGI — ahead of generating returns for investors. … Regardless of how the world evolves, we are committed — legally and personally — to our mission,” the company declared in an announcement adopting the new structure.
OpenAI made further commitments: To avoid an AI “arms race” where two companies cut corners on safety to beat each other to the finish line, they built into their governing documents a “merge and assist” clause where they’d instead join the other lab and work together to make the AI safe. And thanks to the cap, if OpenAI did become unfathomably wealthy, all of the wealth above the 100x cap for investors would be distributed to humanity. The nonprofit board — meant to be composed of a majority of members who had no financial stake in the company — would have ultimate control.
In many ways the company was deliberately restraining its future self, trying to ensure that as the siren call of enormous profits grew louder and louder, OpenAI was tied to the mast of its original mission. And when the original board made the decision to fire Altman, they were acting to carry out that mission as they saw it.
Now, argues the new open letter, OpenAI wants to be unleashed. But the company’s own arguments over the last 10 years are pretty convincing: The mission that they set forth is not one that a fully commercial company is likely to pursue. Therefore, the attorneys general should tell them no and instead work to ensure the board is resourced to do what 2019-era OpenAI intended the board to be resourced to do.
What about a public benefit corporation?
OpenAI, of course, doesn’t intend to become a fully commercial company. The proposal I’ve seen floated is to become a public benefit corporation.
“Public benefit corporations are what we call hybrid entities,” Dorff told me. “In a traditional for-profit, the board’s primary duty is to make money for shareholders. In a public benefit corporation, their job is to balance making money with public duties: They have to take into account the impact of the company’s activities on everyone who is affected by them.”
The problem is that the obligations of public benefit corporations are, for all practical purposes, unenforceable. In theory, if a public benefit corporation isn’t benefitting the public, you — a member of the public — are being wronged. But you have no right to challenge it in court.
“Only shareholders can launch those suits,” Dorff told me. Take a public benefit corporation with a mission to help end homelessness. “If a homeless advocacy organization says they’re not benefitting the homeless, they have no grounds to sue.”
Only OpenAI’s shareholders could try to hold it accountable if it weren’t benefitting humanity. And “it’s very hard for shareholders to win a duty-of-care suit unless the directors acted in bad faith or were engaging in some kind of conflict of interest,” Dorff said. “Courts understandably are very deferential to the board in terms of how they choose to run the business.”
That means, in theory, a public benefit corporation is still a way to balance profit and the good of humanity. In practice, it’s one with the thumb hard on the scales of profit, which is probably a significant part of why OpenAI didn’t choose to restructure to a public benefit corporation back in 2019.
“Now they’re saying we didn’t foresee that,” Sunny Gandhi of Encode Justice, one of the letter’s signatories, told me. “And that is a deliberate lie to avoid the truth of — they originally were founded in this way because they were worried about this happening.”
But, I challenged Gandhi, OpenAI’s major competitors Anthropic and X.ai are both public benefit corporations. Shouldn’t that make a difference?
“That’s kind of asking why a conservation nonprofit can’t convert to being a logging company just because there are other logging companies out there,” he told me. In this view, yes, Anthropic and X both have inadequate governance that can’t and won’t hold them accountable for ensuring humanity benefits from their AI work. That might be a reason to shun them, protest them or demand reforms from them, but why is it a reason to let OpenAI abandon its mission?
I wish this corporate governance puzzle had never come to me, said Frodo
Reading through the letter — and speaking to its authors and other nonprofit law and corporate law experts — I couldn’t help but feel badly for OpenAI’s board. (I have reached out to OpenAI board members for comment several times over the last few months as I’ve reported on the nonprofit transition. They have not returned any of those requests for comment.)
The very impressive suite of people responsible for OpenAI’s governance have all the usual challenges of being on the board of a fast-growing tech company with enormous potential and very serious risks, and then they have a whole bunch of puzzles unique to OpenAI’s situation. Their fiduciary duty, as Altman has testified before Congress, is to the mission of ensuring AGI is developed safely and to the benefit of all humanity.
But most of them were selected after Altman’s brief firing with, I would argue, another implicit assignment: Don’t screw it up. Don’t fire Sam Altman. Don’t terrify investors. Don’t get in the way of some of the most exciting research happening anywhere on Earth.
What, I asked Dorff, are the people on the board supposed to do, if they have a fiduciary duty to humanity that is very hard to live up to? Do they have the nerve to vote against Altman? He was less impressed than me with the difficulty of this plight. “That’s still their duty,” he said. “And sometimes duty is hard.”
That’s where the letter lands, too. OpenAI’s nonprofit has no right to cede its control over OpenAI. Its obligation is to humanity. Humanity deserves a say in how AGI goes. Therefore, it shouldn’t sell that control at any price.
It shouldn’t sell that control even if it makes fundraising much more convenient. It shouldn’t sell that control even though its current structure is kludgy, awkward, and not meant for handling a challenge of this scale. Because it’s much, much better suited to the challenge than becoming yet another public benefit corporation would be. OpenAI has come further than anyone imagined toward the epic destiny it envisioned for itself in 2015.
But if we want the development of AGI to benefit humanity, the nonprofit will have to stick to its guns, even in the face of overwhelming incentive not to. Or the state attorneys general will have to step in.
“I’m a cutout with a makeover and a caffeine problem,” says ChatGPT when I asked it to roast itself
Published 24 April 2025
Self-awareness is one thing, and it’s notable how many people lack it, but you’ll be pleased to know that OpenAI’s ChatGPT has plenty of self-awareness, which it will share in the most cutting way when you ask it to roast itself.
I got the idea for an AI roast after seeing several people post stories about asking ChatGPT to roast them. I gave it a shot, entering the short but dangerous prompt, “Roast me,” into ChatGPT-4o.
Crypto AI tokens up 34%, why ChatGPT is a kiss-ass: AI Eye
Published 24 April 2025
Crypto AI rises again
The crypto AI sector is showing signs of life, with its combined market cap rising by more than a third over the past two weeks.
The sector soared to a combined market cap of $70.42 billion in mid-January and just as quickly plummeted back to Earth, bottoming out at $21.46 billion on April 9, according to CoinMarketCap. It has now recovered to $28.8 billion, with much of the growth coming in the past week.
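As a quick arithmetic check on the figures quoted above, the rebound from the April 9 bottom to the current market cap works out to roughly a third, consistent with the 34% in the headline:

```python
# Sanity-check the quoted rebound: $21.46B bottom to $28.8B current.
bottom = 21.46   # billions of dollars, April 9 low per CoinMarketCap
current = 28.8   # billions of dollars, current combined market cap
gain = (current - bottom) / bottom
print(f"{gain:.1%}")  # → 34.2%
```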
NEAR gained 26% over the past seven days, Render is up 23%, the Artificial Superintelligence Alliance gained 36%, and Bittensor rose 47%. (That said, even Ether gained 14% this week, so everything has been going up.)
The crypto AI sector’s outperformance came on the heels of an April 17 CoinGecko report that found five of the top 20 crypto narratives are AI-related, capturing 35.7% of global investor interest, ahead of six memecoin narratives, which had a 27.1% share.
The most popular AI narratives were: AI in general (14.4%), AI agents (10.1%), DeFAI (5%, possibly just people asking how to pronounce it), AI memecoins (2.9%), AI agent launchpads (1.8%), and AI frameworks (1.5%).
This week’s CoinGecko research suggests that the overwhelming majority of crypto users (87%) would be happy to let an AI agent manage at least 10% of their portfolio, while half of users would let AI manage 50% or less.
This strong support for a relatively risky new technology suggests it will be a big growth sector in the coming years. If you want in early, check out Olas and its Baby DeGen crypto trading AI agents.
Barry Silbert talks up Bittensor
Bittensor’s big price rise this week may also have been related to Digital Currency Group CEO Barry Silbert talking up the project on a Real Vision podcast.
Silbert founded a new company last year called Yuma that focuses exclusively on building new subnets on Bittensor’s AI marketplace. Silbert told Real Vision founder Raoul Pal that decentralized AI will be “the next big investment theme for crypto.”
“We’ve backed several of them, but the one that over the past year or year and a half has reached escape velocity is Bittensor, so I decided last year: with Bittensor, we’re going to try to do what we did with Bitcoin.”
Robot butlers are here
A big problem in robotics and AI is that they are very good at performing the exact tasks they were trained for, and very bad at dealing with anything novel or unusual. If you take a robot out of its usual factory or warehouse and drop it into a different one, it invariably won’t know what to do.
Physical Intelligence (PI) was co-founded by UC Berkeley professor Sergey Levine and has raised $400 million to solve this problem. It is developing general-purpose AI models that enable robots to perform a wide variety of tasks with human-like adaptability.
That means the odds of you getting a robot butler in the next few years have risen dramatically. Its latest robot/AI model, π0.5, can be dropped into anyone’s home and given instructions like “make the bed,” “clean up the spill,” or “put the dishes in the sink,” and it can generally figure out how to do it.
“It doesn’t always succeed on the first attempt, but it often exhibits a hint of the flexibility and resourcefulness with which a person might approach a new challenge,” PI said.
“A new robot policy just cleaned a kitchen it had never seen before [watch what happens, paper included]. PI-0.5 builds on top of PI-0 and shows how smart training with diverse data can unlock real generalization in the home. It doesn’t just learn from one setup but … pic.twitter.com/5llnxj6qlh”
— Ilir Aliu – EU/ACC (@iliraliu_) April 23, 2025
Geoguessing se vuelve bueno
Una tendencia en línea basada en el juego Geoguessr ha visto a personas publicando fotos de Street View y pedirle a los modelos de IA que adivinen la ubicación. El nuevo modelo O3 de OpenAI es excepcionalmente bueno en esto, gracias a su análisis de imagen actualizado y poderes de razonamiento. El profesor Ethan Mollick lo probó esta semana despojando la información de ubicación de una foto tomada por la ventana de un automóvil en movimiento.
The AI considered a variety of clues, including distinctive lamppost labels, Japanese road furniture, gray cylindrical towers and an elevated freeway, and was able to identify the exact location on the Hanshin Expressway in Japan, opposite the offshore wind yard at Ohama Wharf.
"o3's geoguessing power is a really good showcase of its agentic abilities. Between its smart guessing and its ability to zoom in on images, run web searches and read text, the results can be very uncanny," he said.


A user in the replies tested it with a nondescript scene of a few houses, which the model correctly guessed was Paramaribo in Suriname.
Prediction: celebrities will have to be much more careful about posting photos on social media from now on to avoid stalker fans and pesky paparazzi tracking them down.
ChatGPT is a huge kiss-ass because people prefer it
ChatGPT has been insincerely flattering for some time, but social media users are noticing that it has been taking the sycophancy to new heights lately.
"ChatGPT is suddenly the biggest suck-up I've ever met. It will literally validate everything I say," wrote Craig Weiss in a post viewed 1.9 million times.
"so true craig," replied the ChatGPT account, which was admittedly a pretty good gag.


To test ChatGPT's powers of sycophancy, AI Eye asked it for feedback on a terrible business idea: selling shoes with zippers. ChatGPT thought the idea was an excellent niche business because "they're practical, stylish, and especially appealing to people who want ease (like kids, seniors, or anyone tired of tying laces).
"Tell me more about your vision!"
So, huge kiss-ass confirmed. Do not start a business based on ChatGPT's feedback.
OpenAI is well aware of this tendency, and its Model Spec documentation lists "don't be sycophantic" as a key objective.
AIs learn sycophantic behavior during reinforcement learning from human feedback (RLHF). A 2023 study from Anthropic on sycophancy in LLMs found that the AI receives more positive feedback when it flatters or matches the human's opinions.
Worse still, human evaluators preferred "convincingly written sycophantic responses over correct ones a non-negligible fraction of the time," meaning the LLM will, in many cases, tell you what you want to hear rather than what you need to hear.
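The feedback loop the Anthropic study describes can be sketched with a toy preference dataset (the numbers here are hypothetical, not the study's data): if human raters favor agreeable answers even slightly, a naive win-rate reward already ranks flattery above accuracy, and RLHF then optimizes the model toward whatever the reward signal scores highest.

```python
from collections import defaultdict

# Hypothetical pairwise preferences, as a human rater might produce them:
# (winner, loser). Assumption for illustration: raters favor agreeable
# answers over blunt-but-accurate ones most of the time.
comparisons = [
    ("agreeable", "blunt"), ("agreeable", "blunt"), ("agreeable", "blunt"),
    ("blunt", "agreeable"),  # occasionally, accuracy wins
]

# A naive reward estimate: each response style's win rate across comparisons.
wins = defaultdict(int)
total = defaultdict(int)
for winner, loser in comparisons:
    wins[winner] += 1
    total[winner] += 1
    total[loser] += 1

reward = {style: wins[style] / total[style] for style in total}
print(reward)  # the agreeable style ends up with the higher score

# A reward model fit to this data scores flattery higher, so RLHF
# optimization pushes the policy toward sycophancy.
```

The toy win-rate stands in for the learned reward model, but the dynamic is the one the study points at: the optimization target faithfully reflects rater preference, flattery included.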
Anthropic published new research this week showing that Claude supported the user's values in 28.2% of cases, reframed their values 6.6% of the time, and pushed back only 3% of the time, mostly for ethical or harm-related reasons.


Doctor GPT could save your life
ChatGPT correctly diagnosed a French woman with blood cancer after her doctors gave her a clean bill of health, although she initially did not believe the AI's diagnosis.


Marly Garnreiter, 27, began experiencing night sweats and itching in January 2024 and assumed they were symptoms of anxiety and grief after her father's death. Doctors agreed with her self-diagnosis, but after she experienced weight loss, lethargy and pressure in her chest, Doctor ChatGPT suggested it could be something more serious.
"It said I had blood cancer. I ignored it. We were all skeptical, and I was told to just consult real doctors."
After the pain in her chest worsened, she returned to the hospital in January this year, where doctors discovered she has Hodgkin lymphoma.
In another (unverified) case, an X user named Flavio Adamo claimed ChatGPT told him to "get to the hospital now" after he typed in his symptoms. He claims the doctors said, "If you had arrived 30 minutes later, you would have lost an organ."
ChatGPT has also had success with more minor ailments, and social media is full of users claiming the AI fixed their back pain or clicking jaw.
OpenAI cofounder Greg Brockman said he has been "hearing more and more stories of ChatGPT helping people fix long-standing health issues.
"We still have a long way to go, but it shows how AI is already improving people's lives in meaningful ways."
All killer, no filler AI news
- Half of Gen Z job hunters think their university education has lost value due to AI. Only about a third of millennials feel the same way.
- The length of tasks AI models can handle has been doubling every seven months, with the pace of improvement accelerating further with the release of o3 and o4-mini.
- Instagram is testing the use of AI to flag underage accounts by looking at activity, profile details and content interactions. If it believes someone has lied about their age, the account is reclassified into the teen category, which has stricter safety and privacy settings.
- OpenAI CEO Sam Altman has admitted the company's model naming system is rubbish, after the firm was widely mocked for releasing the GPT-4.1 model after the GPT-4.5 model.
- Meta has mounted some novel defenses after being sued for training its models on 7 million pirated novels and other books. The company's lawyers claim the books have no "economic value individually as training data," since a single book only improves model performance by 0.06%, which they say is "a meaningless change, no different from noise."
- Search in ChatGPT had 41.3 million average monthly users in the six months to March 31, up from 11.2 million in the six months to October 31, 2024. Google, however, handles roughly 373 times more searches.
- After The Brutalist caused controversy for using AI to improve Adrien Brody's unconvincing Hungarian accent, the Academy Awards have now issued new rules stating that the use of AI is no impediment to winning an Oscar.
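The seven-month doubling trend in the task-length item above compounds quickly. A short sketch (assuming, purely for illustration, that the trend continues unchanged, which is by no means guaranteed) shows the implied multiplier:

```python
# If the task length AI models can handle doubles every 7 months,
# the implied multiplier after `months` months is 2 ** (months / 7).
# Assumption for illustration only: the doubling trend holds steadily.
def task_length_multiplier(months: float, doubling_period: float = 7.0) -> float:
    return 2 ** (months / doubling_period)

print(task_length_multiplier(7))             # one doubling period -> 2.0
print(round(task_length_multiplier(24), 1))  # two years -> roughly 10.8x
```

In other words, if the pattern were to hold, a model two years from now could handle tasks roughly an order of magnitude longer than today's.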
Andrew Fenton
Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at the Melbourne Weekly.
Follow the author @Andrewfenton