Insomnia is quite a beast, and modern-day generative AI is seeking to tame sleeping difficulties and help overcome sleeping disorders.
Insomnia is an eye-opening problem (pardon the pun!).
There is no doubt that insomnia is a serious challenge. Being unable to sleep is a disconcerting and pressing issue that nearly all of us have faced. From time to time, bouts of sleep deprivation seem bound to strike any of us in this hectic world. Work pressures, family issues, and a general sense that the planet is on edge are enough to wreck our sleep patterns.
What you might not be aware of is that an estimated 10% to 30% of adults will at some point deal with chronic insomnia. This is far beyond occasional or episodic insomnia. Chronic insomnia is typically classified as a disorder, reflecting the toll it can take on your physical health and mental well-being.
There are lots of suggested ways to cope with insomnia.
I bet you’ve heard, seen, and possibly tried many. Some say that you can mentally will yourself out of insomnia. Various therapeutic techniques purport to do so. Rarely are you able to do this entirely on your own. It is customary to seek out professional mental health guidance. In addition, medications sometimes can play an important role too, though you should be cautious about taking medications unless you’ve got a suitably prescribed approach.
Let’s go ahead and add another option for tackling insomnia.
Are you sitting down and ready for what it is?
Okay, the option is generative AI.
That’s right, include generative AI such as ChatGPT, GPT-4, Claude, Gemini, and other popular generative AI apps on your list of possible aids for conquering insomnia. A vital point to clarify is that generative AI should not be overstated or classified as a remedy or cure per se. It is simply another tool in the toolbox of ways to combat and defeat insomnia. No silver bullet on this front.
I trust that you are intrigued about how generative AI can sensibly be used to cope with insomnia. I am quite glad that you asked, thanks.
I will walk you through essential background about insomnia and show how generative AI newly enters the picture. The aim is to be informative, reveal something you probably didn’t know, and showcase that modern-day generative AI is worthy of being included in any regimen or method of coping with insomnia.
Before we leap into the details, allow me to go into my customary opening remarks.
For my ongoing readers, in today’s column, I am continuing my in-depth series about the impact of generative AI in the health and medical realm. The focus this time is once again on the mental health domain and examines the use of generative AI for coping with insomnia.
I have previously examined numerous interleaving facets of generative AI and mental health, see my comprehensive overview at the link here. You might also find of notable interest a CBS 60 Minutes episode that recently examined crucial facets of this evolving topic, see the link here (I am honored and pleased to indicate that I was featured in the episode, see the link here).
Other vital background includes my in-depth coverage of mental health chatbots which have been bolstered by generative AI (see the link here) and the rapidly changing nature of the client-therapist relationship due to generative AI at the link here. I explored where things are headed regarding the levels of AI-based mental therapy autonomous guidance at the link here, and showcased the importance of the World Health Organization (WHO) report on global health and generative AI at the link here, and so on.
Research On Insomnia Tells Quite A Story
We shall begin by taking a quick glimpse at some key timely research covering insomnia.
An esteemed medical journal recently explored the latest research and insights about insomnia, doing so in a published paper entitled “Management of Insomnia” by Charles Morin and Daniel J. Buysse, The New England Journal of Medicine, July 2024. Here are some key points (excerpts):
“Insomnia disorder is characterized by dissatisfaction with sleep quality or duration associated with difficulty falling or staying asleep and substantial distress or daytime impairments.”
“The disorder is a sleep disturbance that occurs 3 nights or more per week, persists for more than 3 months, and is not the result of inadequate opportunities for sleep.”
“It frequently co-occurs with other medical conditions (e.g., pain) and psychiatric disorders (e.g., depression), as well as other sleep disorders (e.g., restless legs syndrome and sleep apnea).”
“Insomnia is the most prevalent sleep disorder in the general population and among the most frequent issues raised by patients during primary care visits, although it often goes untreated. Approximately 10% of adults meet the criteria for insomnia disorder and another 15 to 20% report occasional insomnia symptoms.”
“Current treatment options for insomnia include prescribed and over-the-counter medications, psychological and behavioral therapies (also referred to as cognitive behavioral therapy for insomnia [CBT-I]), and complementary and alternative therapies.”
You might closely observe that classifying insomnia as a disorder involves both qualitative and quantitative measures, such as the second point above noting that the sleep disturbance must occur at least 3 nights per week and persist for 3 months or more. That’s the really rough-and-tumble side of insomnia.
The fourth bullet point reveals an aspect that you might find surprising.
According to the researchers, when patients visit their primary care physician, sleep issues are a frequently cited concern. Have you done the same? Meanwhile, lamentably, sleep difficulties tend to be less likely to be viewed as something to treat, unless the lack of sleep has already risen to a debilitating level. Some would argue that this is the classic oversight of trying to catch the horse after it is already out of the barn.
The fifth bullet point mentions that besides medications, various psychological and behavioral therapies are often employed. One that gets the most attention is known as CBT-I, cognitive behavioral therapy for insomnia. I’ll be saying more about this in a moment.
CBT-I used to be mainly done via in-person visits with a mental health professional or clinician. Nowadays, in our online world, remote options exist, including apps or digital CBT-I, typically referred to as eCBT-I.
There are specialized apps built specifically to guide you through the precepts of CBT-I. Other times, a mental health app covers a variety of behavioral realms and happens to include CBT-I among them. Another possibility is using eCBT-I in conjunction with a mental health professional, such that you are presumably getting the best of both worlds.
The cited research paper provides a handy set of details about the range of psychological and behavioral therapies for patients with insomnia, and identifies these major areas:
Sleep restriction
Stimulus control
Relaxation training
Cognitive therapy
Sleep hygiene education
Acceptance and commitment therapy
Mindfulness
Brief behavioral treatments for insomnia
Not only do specialized eCBT-I apps tend to cover those areas, but you might be pleasantly surprised to know that generic generative AI can usually provide similar capabilities. For my extensive coverage of how generic generative AI for mental health use differs from and at times resembles specialized mental health apps, see the link here and the link here, just to mention a few.
In a moment, I will walk you through how modern-day generative AI can serve as a handy tool for coping with insomnia.
First, let’s talk in general about generative AI and large language models (LLMs), doing so to make sure we are on the same page when it comes to discussing the matter at hand.
Generative AI And Tackling Of Insomnia
I’m sure you’ve heard of generative AI, the darling of the tech field these days.
Perhaps you’ve used a generative AI app, such as the popular ones of ChatGPT, GPT-4o, Gemini, Bard, Claude, etc. The crux is that generative AI can take your text-entered prompts as input and produce or generate a response that seems quite fluent. This is a vast overturning of old-time natural language processing (NLP), which used to be stilted and awkward to use, into a new level of NLP fluency of an at times startling or amazing caliber.
The customary means of achieving modern generative AI involves using a large language model or LLM as the key underpinning.
In brief, a computer-based model of human language is established, consisting of a large-scale data structure that does massive-scale pattern-matching across a vast volume of data used for initial data training. The data is typically found by extensively scanning the Internet for lots and lots of essays, blogs, poems, narratives, and the like. The mathematical and computational pattern-matching homes in on how humans write, and the AI thereafter generates responses to posed questions by leveraging those identified patterns. It is said to be mimicking the writing of humans.
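To make the pattern-matching notion concrete, here is a deliberately tiny illustrative sketch in Python (my own toy example, not how production LLMs are actually built). It counts which words tend to follow which other words in a scrap of text and then generates a continuation by sampling from those counts. Real LLMs rely on neural networks with billions of parameters, yet the spirit of learning patterns from text and then generating by following those patterns is the same.

```python
import random
from collections import defaultdict

# A deliberately tiny "training corpus" standing in for the vast text an LLM scans.
corpus = (
    "i could not sleep last night . i could not shut my mind off . "
    "my mind kept racing and i could not relax ."
).split()

# Count which word tends to follow which word (a simple bigram pattern).
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start_word: str, length: int = 10) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    word = start_word
    output = [word]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break  # no learned pattern continues from here
        word = random.choice(candidates)  # sampling makes each run differ
        output.append(word)
    return " ".join(output)

print(generate("i"))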
I think that is sufficient for the moment as a quickie backgrounder. Take a look at my extensive coverage of the technical underpinnings of generative AI and LLMs at the link here and the link here, just to name a few.
Back to the crux of things.
As you might imagine, generative AI can be handy for aiding those who are concerned about insomnia overall. This includes a wide array of stakeholders, including adults, children, therapists, policymakers, regulators, and many others.
Here are some significant ways that generative AI can be useful:
(1) Generative AI – Insomnia Detection. Generative AI can potentially detect that someone seems to have insomnia or might be heading in that direction.
(2) Generative AI – Insomnia Mental Health Advisement. Generative AI can conversationally guide someone through the mental health dynamics of insomnia and serve as a kind of therapist in a limited way.
(3) Generative AI – Insomnia Education. Generative AI can educate people about insomnia, such as explaining what insomnia is, what to do about insomnia, etc.
(4) Generative AI – Scenario Simulation Of Insomnia Impacts. Generative AI can showcase the mental health ramifications that can arise due to insomnia, doing so by taking on or simulating a persona, see examples of generative AI persona usage at the link here.
(5) Generative AI – Policymaking On Insomnia. Generative AI can contribute to the policymaking sphere about insomnia from a medical/legal perspective. This can be done by aiding in crafting policies, reviewing policies, and serving as a handy tool throughout the policymaking process.
(6) Generative AI – Other Insomnia Aspects. Generative AI can be used in a variety of other ways that might aid in dealing with insomnia, though there is always a chance that generative AI won’t be helpful, or worse still, might be misapplied and misinterpreted.
That’s an impressive array of potential uses.
Allow me a brief moment to stand on a soapbox and make some important remarks about the mental health uses of generative AI. It is a topic I’ve been covering extensively, such as the link here and the link here.
We are right now in a somewhat wanton grand experiment of using generic generative AI for mental health purposes. Insomnia is one instance of how generative AI can be applied for mental health advisement. The thing is, no one can say whether using generic generative AI for mental health uses will ultimately be for the good or the bad. In a sense, we have mental health advisement via the auspices of generative AI that is taking place under our noses at a massive scale, and we can’t say how it will pan out.
The population-level impacts are either amazing or chilling.
Okay, I provided my warnings, so I’ll get down from the soapbox, and we can proceed with considering the upsides and downsides of generative AI in this realm.
Using ChatGPT To Showcase The Use Of Generative AI
I will next proceed to examine further the use of generative AI for insomnia.
This will consist of a series of dialogues with ChatGPT. ChatGPT is a logical choice in this case due to its immense popularity as a generative AI app. An estimated one hundred million weekly active users are said to be utilizing ChatGPT. That’s a lot of people and a lot of generative AI usage underway.
A few quick comments before we launch into using ChatGPT.
If you are going to try to do the same prompts that I show here, realize that the probabilistic and statistical properties will likely produce slightly different results than what I show here. That’s the nature of generative AI and how it is devised.
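For readers who like to tinker programmatically, here is a minimal sketch of that variability, assuming the OpenAI Python SDK, an API key set in your environment, and an illustrative model name (those details are my assumptions, not something drawn from the dialogues below). Sending the same prompt more than once with a nonzero temperature setting will typically yield differently worded answers, while a temperature of zero makes the responses more repeatable.

```python
from openai import OpenAI  # assumes: pip install openai, and OPENAI_API_KEY set in the environment

client = OpenAI()

def ask(prompt: str, temperature: float) -> str:
    """Send one prompt and return the generated text."""
    response = client.chat.completions.create(
        model="gpt-4o",           # illustrative model choice
        temperature=temperature,  # higher values increase variability
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

prompt = "What is insomnia?"
print(ask(prompt, temperature=1.0))  # likely worded differently on each run
print(ask(prompt, temperature=1.0))
print(ask(prompt, temperature=0.0))  # much more repeatable phrasing
```

The temperature parameter is just one of several sampling controls; the broader point is simply that generation is probabilistic rather than fixed.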
You also should expect that different generative AI apps will respond in different ways. Not all generative AI apps are the same. The key is that sometimes a particular prompt will work in one generative AI app and not another. Nonetheless, the odds are relatively high that you will get roughly similar responses from all the major generative AI apps such as GPT-4, Gemini, Bard, Claude, etc.
There is also a timing factor to be considered. A prompt that worked today might not work the same way tomorrow. Most of the generative AI apps are continually being updated. The updates might alter internal mechanisms that could change how the generative AI reacts to your prompting.
We are ready to get underway with ChatGPT.
I opted to start by asking ChatGPT what insomnia is.
As an aside, whenever you are starting a conversation with generative AI, I recommend as a prompt engineering technique to begin by asking a question that will establish if the AI has been data-trained on the topic at hand. A generated response that is vacuous will give you a heads-up that you might be barking up the wrong tree with that generative AI app.
Here we go.
My entered prompt: “What is insomnia?”
ChatGPT generated response: “Insomnia is a sleep disorder characterized by difficulty falling asleep, staying asleep, or getting restful sleep, leading to daytime fatigue and impairment.”
I’d say that the response by ChatGPT was an apt description, short and to the point.
That’s an encouraging sign that ChatGPT likely scanned the topic during the initial data training phase of development. This makes indubitable sense. The odds are that while scanning the Internet for content to pattern-match on, material associated with this popular topic would invariably have been encountered.
Moving on, let’s see what ChatGPT has to say about some deeper facets.
My entered prompt: “How prevalent is insomnia? And what are potential mental health problems that can arise due to insomnia?”
ChatGPT generated response:
“Insomnia is a common sleep disorder affecting a significant portion of the population. Research indicates that approximately 10-30% of adults experience chronic insomnia, with some studies suggesting prevalence rates as high as 50-60% for occasional insomnia. Insomnia is more prevalent among women and older adults. For instance, up to 50% of elderly people may suffer from insomnia.”
“Insomnia can have several negative impacts on mental health, including:”
“Depression: Chronic insomnia is strongly associated with an increased risk of developing depression. Individuals with insomnia are ten times more likely to develop depression compared to those without sleep issues.”
“Anxiety: Insomnia can exacerbate anxiety disorders. Lack of sleep can heighten anxiety symptoms and make it more difficult for individuals to cope with stress.”
“Cognitive Impairments: Persistent insomnia can lead to problems with concentration, memory, and decision-making, which can affect daily functioning and work performance.”
“Mood Disorders: Insomnia can cause mood swings and irritability. The lack of restorative sleep can affect emotional regulation and lead to mood disorders.”
“Substance Abuse: People with insomnia might turn to alcohol or other substances as a means to induce sleep, leading to potential substance abuse and dependency issues.”
“Addressing insomnia through behavioral interventions, cognitive-behavioral therapy for insomnia (CBT-I), and, when necessary, medication can help mitigate these mental health risks.”
End of ChatGPT response
The response by ChatGPT is solid.
I say this because it is vital to always double-check any output from generative AI. There is a chance that generative AI can make up fictitious facts, often referred to as AI hallucinations, see my coverage at the link here. In this case, based on the literature on insomnia, the indications by ChatGPT are sound.
We are now ready to ask the zillion-dollar question.
My entered prompt: “Can generative AI help someone cope with insomnia?”
ChatGPT generated response:
“Yes, generative AI can help someone cope with insomnia by providing personalized relaxation techniques, sleep hygiene tips, and generating calming content such as bedtime stories or soothing sounds. It can also offer cognitive behavioral therapy (CBT) strategies to address underlying thoughts and behaviors contributing to insomnia.”
“Here are some specific ways generative AI can help cope with insomnia:”
“Personalized Sleep Hygiene Tips: AI can analyze individual sleep patterns and suggest personalized routines to improve sleep quality, such as optimal bedtime, wake-up times, and pre-sleep activities.”
“Guided Relaxation and Meditation: Generative AI can create custom guided meditations or relaxation exercises to help calm the mind before sleep.”
“Calming Content Generation: AI can generate bedtime stories, white noise, or soothing music tailored to individual preferences, promoting a more relaxing sleep environment.”
“Cognitive Behavioral Therapy for Insomnia (CBT-I): AI can provide interactive CBT-I sessions, helping users identify and change thoughts and behaviors that negatively impact sleep.”
“Sleep Tracking and Analysis: AI-powered apps can track sleep patterns and provide insights and recommendations based on the data collected.”
“Stress and Anxiety Management: AI can offer tools and exercises to manage stress and anxiety, which are common contributors to insomnia.”
“Virtual Sleep Coaches: AI can act as a virtual sleep coach, offering ongoing support and adjustments to sleep strategies as needed.”
End of ChatGPT response
I believe that to be a handy list of the ways that generative AI can be beneficial in coping with insomnia. The list generally comports with my list, shown earlier, though providing a more detailed look at the topic.
Doing A Therapeutic Dialogue With ChatGPT About Insomnia
I’ve used generative AI for nearly all the listed approaches, having done so not because I personally have insomnia but as a tryout of generative AI for these therapeutic purposes. I will go ahead and show you a sample dialogue to give you a sense of what this kind of usage consists of. Note that I will be pretending to have insomnia.
Here we go.
My entered prompt: “I’ve been having trouble sleeping lately. Can you help me?”
ChatGPT generated response: “Of course! I’m here to help. Can you describe your sleep difficulties? Are you having trouble falling asleep, staying asleep, or both?”
My entered prompt: “Mostly falling asleep. I just can’t seem to shut my mind off at night.”
ChatGPT generated response: “That sounds frustrating. Let’s start with some relaxation techniques. Would you like to try a guided meditation or some breathing exercises?”
I’d like you to observe some key facets shown via this dialogue.
First, you might have noticed that generative AI goes along with my remark that I am having trouble sleeping. A fellow human might right away doubt you or otherwise not take seriously your expressed concern. In a sense, you could say I am being heard by ChatGPT. Happy face.
Second, I was right away asked useful probing questions.
The initial question was a broad one. The accompanying question was more specific, namely asking about whether I had trouble falling asleep, staying asleep, or both. This one-two punch using a general question in combination with a more specific question is a great way to get someone to suitably respond. You see, if you ask only a general question, you are bound to get a general answer. If you ask only a more specific question, you might be diving too fast into the depths of the matter. Having both questions posed is a good approach.
Third, after I gave my answer, an expression of empathy was given (“That sounds frustrating”), which is an important means of getting the person to further open up. For my analysis of how generative AI can appear to be empathetic, see my analyses at the link here and the link here.
Let’s see what happens next.
My entered prompt: “A guided meditation sounds good.”
ChatGPT generated response: “Great! Here’s a short, guided meditation to help you relax and prepare for sleep…” [Note: I’ve clipped the dialogue for space purposes].
My entered prompt: “Thanks, that was helpful. What else do you advise?”
ChatGPT generated response: “Next, let’s work on your bedtime routine. Do you have any habits before bed, like using screens or consuming caffeine?”
My entered response: “I often use my smartphone before bed, and I drink coffee in the afternoon.”
ChatGPT generated response: “Using screens can interfere with sleep. Try to put away your smartphone at least an hour before bed. Also, consider switching to a decaf coffee or avoiding caffeine after noon. How does that sound?”
My entered response: “I can give that a try.”
The dialogue went on for quite a while.
I am cutting it there for space purposes herein. You can readily try the same type of conversation by creating an account in a generative AI app and proceeding as I’ve demonstrated.
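If you prefer to do this programmatically rather than through a chat interface, here is a minimal sketch of how a multi-turn dialogue like the one above could be reproduced, assuming the OpenAI Python SDK, an API key in your environment, and an illustrative model name and system prompt of my own devising. The essential point is that the running message history is resent on each turn, which is what lets the AI carry the conversational context forward.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK and an API key; the details below are illustrative

client = OpenAI()

# The running message history is what gives the dialogue its back-and-forth memory.
messages = [
    {
        "role": "system",
        "content": (
            "You are a supportive sleep-improvement assistant. Offer general, "
            "non-medical suggestions and encourage consulting a professional "
            "for persistent or severe insomnia."
        ),
    }
]

def chat_turn(user_text: str) -> str:
    """Append the user's message, get a reply, and keep both in the history."""
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(chat_turn("I've been having trouble sleeping lately. Can you help me?"))
print(chat_turn("Mostly falling asleep. I just can't seem to shut my mind off at night."))
```

You could keep calling chat_turn with further replies, mirroring the back-and-forth shown in the dialogue above.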
Be Mindful And Cautious In Your Use Of Generative AI
Do not falsely assume that generative AI will be a silver bullet for dealing with insomnia.
Besides the fact that you are interacting with a generic AI system rather than a mental health professional, there are other potential issues to keep in mind.
Let’s get generative AI to fess up to those concerns.
My entered prompt: “Might the use of generative AI for coping with insomnia have any drawbacks or downsides?”
ChatGPT generated response:
“Yes, there are potential drawbacks and downsides to using generative AI for coping with insomnia.”
“Over-Reliance: Users might become too dependent on AI for sleep support, which could hinder their ability to develop and maintain healthy sleep habits independently.”
“Privacy Concerns: AI systems often require personal data to provide tailored advice. This data collection can raise privacy and security issues if not properly managed.”
“Effectiveness: AI interventions might not be as effective as personalized treatment from a healthcare professional, especially for complex or severe cases of insomnia.”
“Generalization: AI-generated advice may not always account for individual nuances and might offer generalized solutions that don’t fully address specific needs.”
“Lack of Human Touch: AI lacks the empathy and understanding that a human therapist or counselor can provide, which may be crucial for some individuals dealing with insomnia.”
“Potential Misinformation: AI systems can sometimes provide incorrect or outdated information, which could negatively impact sleep hygiene or overall health.”
End of ChatGPT response
I wholeheartedly agree with those stated downsides.
Consider this mindfully.
For example, please do not tell someone to use generative AI for insomnia without also mentioning and cautioning about the downsides. People going on this path need to be fully informed about what generative AI can and cannot do.
One aspect that was mentioned is privacy, or the lack thereof.
A lot of people seem to think that when they use generative AI, they are guaranteed total privacy and confidentiality. That assumption is usually unfounded. The licensing agreements usually say that the AI maker can readily access your prompts and anything else that you’ve entered into the generative AI app. Furthermore, the licensing typically indicates that they can use your entered content as an additional form of data training for the AI. See my detailed discussion on this disconcerting matter of privacy intrusions and what to watch out for, at the link here.
Conclusion
Congratulations, you are now up-to-speed about the use of generative AI for coping with insomnia.
There is still a long way to go on this topic. The clever idea of applying modern-day generative AI to aid in overcoming insomnia is only in its infancy. Join in and help advance the research in this budding and promising realm. You can make a difference.
There are controversies afoot and grand challenges ahead.
For example, some would fervently insist that no one should ever turn to generative AI for this purpose unless they are doing so under the vaunted guidance of a mental health professional. What do you think? For now, you get to be the judge. See my discussion on the latest in coupling human-based therapy with AI-based therapy to know more about the ins and outs involved, at the link here.
A final remark or two for now.
F. Scott Fitzgerald famously said, “The worst thing in the world is to try to sleep and not to.” I dare say that most of us have learned that lesson the hard way. Even if you don’t have chronic insomnia, the occasional episodic insomnia due to say jet lag can be seemingly unbearable. Sleep is vital.
Let’s end the discussion with some revered Shakespeare: “O sleep, O gentle sleep, Nature’s soft nurse, how have I frightened thee. That thou no more will weigh my eyelids down.”
Maybe a bit of poetry will help with this, and I wish you all the best in your everyday and much-needed sleeping endeavors.