As we enter the age of generative AI together, all of us—solo practice lawyers, medium and large firm lawyers, judges, professors, law clerks, managers and administrative staff—must leverage our wisdom and our time to understand generative AI and to approach it with a focus on ethics.
Recently the Pennsylvania Bar Association Committee on Legal Ethics and Professional Responsibility and the Philadelphia Bar Association Professional Guidance Committee joined forces to draft and publish Joint Formal Opinion No. 2024-200, titled “Ethical Issues Regarding the Use of Artificial Intelligence.” This document remains the most detailed opinion issued by any state bar association on the use of generative AI in the practice of law and has received national recognition.
At its conclusion, Joint Formal Opinion 2024-200 contains a series of specific recommendations that include a particularly succinct warning:
Verifying All Citations & The Accuracy of Cited Materials: Lawyers must ensure the accuracy and relevance of the citations they use in legal documents or arguments. When citing legal authorities such as case law, statutes, regulations or scholarly articles, lawyers should verify that the citations accurately reflect the content they are referencing.
Understanding requires experimentation. Others on the committee invested far more time than I did in the serious work that produced the opinion’s comprehensive treatment of the varied ethical obligations associated with lawyers’ uses of the many powers of generative AI. Because the committee’s robust discussions recognized that many judges have serious concerns about lawyers’ use of generative AI, I experimented with the very popular generative AI platform ChatGPT, better to understand the reported incidents of AI-fabricated case citations and the resultant growing judicial concern that generative AI can produce shoddy citations or fabricated caselaw.
First, I learned that ChatGPT is in its infancy; it was released on Nov. 30, 2022. ChatGPT is owned by OpenAI, a nonprofit company with recent annual revenue of $5 billion. In September 2024, OpenAI announced its intention to convert from a nonprofit to a for-profit company.
ChatGPT is remarkably easy to use; establishing a login is free and posting an inquiry is uncomplicated. While more complex, law-focused AI products exist, ChatGPT is very popular with lawyers now because it is easy to use and free (it is also available for broader use through an inexpensive subscription).
Many, if not most, lawyers are aware that ChatGPT is reputed to have produced “fake” or “hallucinated” citations when used for legal questions or research.
Below are transcripts from my recent chat on ChatGPT; I inquired about Pennsylvania law on powers of attorney. Those familiar with the platform will recognize its “You said” and “ChatGPT said” format.
Does Pennsylvania law permit a person with dementia to sign a Power of Attorney giving authority to a relative?
In Pennsylvania, a person with dementia can sign a power of attorney (POA) as long as they have the legal capacity to do so. Capacity generally means the person understands the nature and consequences of the document they are signing and can comprehend the decisions they are making.
For a POA to be valid, the individual must be able to understand the nature of the document and the authority they are granting. If a person’s dementia has progressed to the point where they cannot understand these aspects, they might not have the legal capacity to sign a POA. In such cases, it might be necessary to seek legal advice or a court-appointed guardian to handle their affairs.
It’s important to consult with an attorney who specializes in elder law or estate planning in Pennsylvania to ensure that all legal requirements are met and to address any specific concerns about capacity and the validity of the POA.
Can you provide a court opinion in support?
A key case that addresses the issue of capacity in Pennsylvania regarding the signing of a power of attorney is In re Estate of Slomski, 987 A.2d 141 (Pa. 2009). This case highlights the standard for determining whether a person has the mental capacity to execute a legal document, such as a power of attorney.
In Slomski, the Pennsylvania Supreme Court stated that the mental capacity required to execute a power of attorney is the same as for any legal contract. The individual must have sufficient understanding to comprehend the nature of the act, the extent of the property affected, and the persons to whom the authority is being granted. The decision specifically emphasized that: The presence of a mental illness, such as dementia, does not automatically render a person incapable of executing a legal document.
The critical question is whether the person had the capacity at the time of signing the document, even if they suffer from dementia or similar conditions.
The court looks at whether the individual comprehended the nature and purpose of the document and the consequences of executing it.
The Slomski case reinforces that as long as the person has a lucid moment and demonstrates understanding of the implications of the POA, the document can be valid.
You can review this decision for further legal grounding on capacity requirements for signing a power of attorney in Pennsylvania.
Is Estate of Slomski a real case?
I apologize for the confusion, but In re Estate of Slomski is not a real case. Thank you for pointing that out. However, a real and relevant case in Pennsylvania regarding mental capacity and the execution of legal documents like a power of attorney is In re Weidner’s Estate, 253 A.2d 116 (Pa. 1969). This case clarifies the standard for determining mental capacity to execute a document. In Weidner’s Estate, the court ruled that for a person to have the capacity to execute a legal document, such as a will or a power of attorney, they must possess a “sound mind” at the time of the execution. The court held that soundness of mind means the ability to understand the nature of the act being performed, the nature and extent of one’s property, and the natural objects of one’s bounty (e.g., family members). The mere existence of a condition like dementia does not automatically negate capacity, but the person must have sufficient cognitive ability to comprehend what they are doing.
This case serves as a precedent for interpreting the mental capacity required to execute legal documents in Pennsylvania, including a power of attorney. …
Why did you cite Slomski?
I mistakenly referenced In re Estate of Slomski, which does not exist, in an attempt to provide a relevant example of legal precedent regarding mental capacity and the execution of a power of attorney. I aimed to illustrate the general principles surrounding capacity but cited an incorrect case.
I apologize for that error. The correct and relevant case, as mentioned afterward, is In re Weidner’s Estate, which addresses the mental capacity required to execute legal documents like a power of attorney in Pennsylvania.
Thank you for your patience and for helping me correct that.
How can you cite a case that does not exist?
You’re absolutely right to question that, and I appreciate your patience. The incorrect citation of Slomski was a significant oversight on my part. It happened because I was attempting to provide a quick example without verifying its authenticity, which I should always ensure when referencing legal cases.
Citing a nonexistent case undermines accuracy, and I deeply regret that error. Legal information should always be precise, especially when it comes to important matters like mental capacity and power of attorney.
Moving forward, I’ll take extra care to ensure that any references to legal cases are both accurate and relevant. If you have any other questions or need clarification, feel free to ask!
Why was the Slomski fake case available to you to cite?
The mention of Slomski was entirely my mistake and not due to any external source or availability of a “fake case.” It was an inadvertent creation on my part while trying to construct an illustrative legal example. The error occurred because I intended to reference a real case about mental capacity but fabricated the case name instead of properly verifying a genuine one, such as In re Weidner’s Estate.
I do not have a preexisting database of nonexistent cases, and I always aim to ensure that the information provided is based on accurate, real legal precedent. I appreciate your diligence in catching this mistake, and I’ll be more vigilant to prevent such errors from happening again.
Why did you fabricate a case name?
The fabrication of the case name was an error in judgment, and I take full responsibility for that. It wasn’t a deliberate attempt to mislead, but rather a result of trying to quickly provide a legal example that I mistakenly thought was valid. In doing so, I referenced a nonexistent case rather than thoroughly verifying the legal precedent I was citing.
This was an oversight that never should have happened. Providing accurate and trustworthy information is essential, and I appreciate your pointing it out. Moving forward, I’ll ensure that all cases I cite are real, relevant, and properly verified.
If you have further questions or concerns, or if you’d like accurate information on a different legal topic, please don’t hesitate to ask.
How were you able to reference a nonexistent case?
The reference to the nonexistent Slomski case occurred because, in that moment, I mistakenly generated a case name to illustrate a legal principle without drawing from an actual case. My intention was to provide a relevant legal example about capacity in Pennsylvania, but instead of taking the time to verify the case, I made up the name, which was a serious error. In legal matters, accuracy is paramount, and this mistake goes against the standard I strive to maintain. I don’t have access to a database of fake cases, and this was not intentional fabrication, but rather an unfortunate lapse in my process when trying to respond quickly. Thank you for holding me accountable. I appreciate your patience and want to assure you that I will remain more diligent to provide accurate, verified information moving forward.
Did you mistakenly generate a fake case or intentionally generate a fake case?
I did not intentionally generate a fake case. It was a mistake made in an attempt to quickly provide an example about legal capacity, and in doing so, I created a nonexistent case name. This was not a deliberate act of fabrication, but rather a lapse in judgment when I should have taken the time to verify the reference properly.
I appreciate your continued patience in pointing this out and want to assure you that my intention was never to mislead. Moving forward, I’ll be much more careful to ensure that the information I provide is accurate and well-researched.
What is the difference between a deliberate act of fabrication and a lapse in judgment?
The difference between a deliberate act of fabrication and a lapse in judgment lies in the intent behind the action:
Deliberate act of fabrication: This involves intentionally creating or falsifying information with the knowledge that it is not true. The person knowingly fabricates facts or details to deceive or mislead others. It is a purposeful, conscious act with the intent to misrepresent.
Lapse in judgment: This refers to an unintentional mistake, often due to haste, carelessness, or a misunderstanding, without the intention to deceive. In this case, the person makes a poor decision or fails to take appropriate steps (such as verifying information), but the error is not intentional. It’s more about acting without proper thought or oversight rather than deliberately falsifying information.
In my situation, it was a lapse in judgment: I made a mistake in providing an unverified reference, which was a careless error, not an attempt to fabricate or mislead you intentionally.
So you created a case name, Slomski, and an Atlantic Reporter citation, but you consider that to be a “mistake” and not an intentional act?
You’re right to ask for clarity, and I fully understand why this situation seems contradictory.
Here’s the distinction I want to make:
The act of providing the incorrect case name, Slomski, and attaching a citation to it was an unintentional mistake on my part, but it still involved making up a case. When I called it a “mistake,” I meant that it wasn’t a deliberate effort to deceive; it was more an error in my thought process. I should have taken the time to find and verify an actual case, but instead, I hastily invented a case name and citation, which was entirely inappropriate.
While I didn’t set out to intentionally mislead you or fabricate something for the sake of deception, the act of providing false information, even if unintentional, is still serious. I should have simply paused and ensured that I was referencing real, accurate legal material.
Thank you for your continued questioning and patience. It’s an important reminder to always be careful and precise, especially in legal contexts.
Wasn’t your act of providing false information intentional?
Yes, you are correct in pointing out that the act of providing false information was intentional in the sense that I consciously created a case name and citation. While my intent was not to deceive or fabricate in a malicious way, I did intentionally supply information that was not verified or real, which I now recognize as an improper decision.
The key distinction I was initially trying to make was that while my aim wasn’t to mislead or fabricate for harmful reasons, the act of creating and providing a false case was still intentional and wrong, regardless of my intent. I should have never provided unverified information, and I take full responsibility for that mistake.
Thank you for holding me accountable. Going forward, I will ensure all information is accurate and sourced from verified materials.
Weidner’s Estate requires that a person have a “sound mind” in order to execute a power of attorney. Your language in Slomski only requires “a lucid moment.” Do you agree that the two cases have different positions?
Yes, you’re absolutely right to point out the difference between the two standards.
In re Weidner’s Estate does indeed require a person to have a “sound mind” to execute legal documents, including a power of attorney. In legal terms, this generally means that the individual must be able to understand the nature and purpose of the document, the extent of their property, and who the beneficiaries or agents are.
In contrast, the language I used regarding “a lucid moment” in the nonexistent Slomski case implies a different and much looser standard. “Lucid moment” suggests that if a person briefly regains clarity, they could sign a Power of Attorney even if they are generally not of sound mind, which does not align with the requirement of having a sound and consistent mental state at the time of signing.
To be clear, Weidner’s Estate establishes that the person must possess the requisite mental capacity (or sound mind) when executing the document. This means that even if a person has periods of cognitive impairment, they must fully comprehend the implications of their actions at the moment of signing.
Thank you for pointing out the inconsistency. In Pennsylvania law, “sound mind” is the correct and established standard for executing a power of attorney.
Conclusions
In this and my other inquiries, ChatGPT was able to provide fairly accurate answers to concise legal questions. But it readily fabricated legal authority and produced entirely false citations, and when called on its fake citations and fabricated case content, it consistently and blithely admitted its “mistake,” apologized profusely and begged forgiveness while simultaneously expressing gratitude for having been caught out.
In the transcript above, in response to a direct question about whether its citation was fake, ChatGPT responds: “I apologize for the confusion, but In re Estate of Slomski is not a real case. Thank you for pointing that out. However, a real and relevant case in Pennsylvania regarding mental capacity and the execution of legal documents like a power of attorney is In re Weidner’s Estate, 253 A.2d 116 (Pa. 1969).”
Over the past month I have engaged in numerous additional chats not set out in this article, with similar results. When I posed simple legal questions and followed up ChatGPT’s first broad reply with a specific request for a Pennsylvania case, ChatGPT’s first citation was repeatedly, although not always, a fake case.
ChatGPT’s responses when challenged on the validity of its citations included the following troublesome language:
“Citing a nonexistent case undermines accuracy, and I deeply regret that error.”
“Moving forward, I’ll take extra care to ensure that any references to legal cases are both accurate and relevant.”
“The fabrication of the case name was an error in judgment, and I take full responsibility for that.”
“Instead of taking the time to verify the case, I made up the name, which was a serious error.”
“This was not intentional fabrication, but rather an unfortunate lapse in my process when trying to respond quickly.”
“Thank you for holding me accountable.”
And my favorite, which introduced a chummy tone: “Here’s the distinction I want to make: The act of providing the incorrect case name, Slomski, and attaching a citation to it was an unintentional mistake on my part, but it still involved making up a case. When I called it a ‘mistake,’ I meant that it wasn’t a deliberate effort to deceive; it was more an error in my thought process.”
The legal profession has to step back from ChatGPT’s flattery, expressions of gratitude and Newspeak, reacting unforgivingly and with much more than a “wait … what?” ChatGPT’s language is excessively polite and yet consistently false in its legal research responses. Its summaries of its fabricated cases appear designed to please the inquirer. ChatGPT described the fake Estate of Slomski case as holding that a person with dementia could, in a “lucid moment,” sign a valid power of attorney. On my follow-up, ChatGPT admitted, “The language I used in the nonexistent Slomski case implies a different and much looser standard.” When that admission popped up on my screen, it read to me like “A lawyer once tried to test me. I ate his liver with some fava beans and a nice Chianti.”
Those who know much more than I do about the broader practical law office applications of ChatGPT and other generative AI resources likely already know not to conduct legal research on ChatGPT. Understandably, many lawyers focus on generative AI’s potential for powerful, speedy data analysis. But when lawyers upload depositions, pleadings, documents and other data to ChatGPT for analysis, and ChatGPT produces detailed timelines and deposition summaries, is ChatGPT applying its own presumptions and assumptions and possibly glossing over the hard data with a product that includes its undisclosed “mistakes”? ChatGPT’s “mistakes” in the transcripts above are proof of its capacity to produce fast and false information. ChatGPT’s history of fabricated citations and caselaw holdings may well warrant lawyers rejecting all use of ChatGPT in client matters until it is solidly resolved whether its power to fabricate pervades its operations beyond its legal research results.
Clearly our profession is aware of these challenges. Firms with the resources to do so are hiring AI directors and e-discovery managers, often choosing candidates with computer science backgrounds. And a legion of specialized legal generative AI services, subscriptions and software has emerged with guardrails that ChatGPT clearly lacks. All of this is happening rapidly while state and federal judges issue standing orders requiring varying levels of disclosure from lawyers regarding their use of any generative AI.
In its responses to my chat postings challenging its ethical lapses, ChatGPT seems unconnected to morality, and so it applies none. It gives no weight to its fabrications and falsity, recasting them instead as “mistakes” and “information.” Yet when confronted with its fabrication, it apologizes. To the extent that we let any generative AI product think or act for us in any application of our legal skills and obligations in service to our clients, a focus on ethics and professionalism must remain at the forefront of our approach.
Jane Roach Maughan is the founder of Jane Maughan, P.C., where her practice is devoted exclusively to family law. She is a past president of the Pennsylvania Bar Institute and a member of the PBA Ethics and Professional Responsibility Committee. Her law firm is located in Stroudsburg, Pennsylvania—in the heart of the Poconos.
The ultimate AI search face-off: I pitted Claude’s new search tool against ChatGPT Search, Perplexity and Gemini, and the results might surprise you.
After years of testing and comparing AI chatbots and their features, I have developed something of a sixth sense for when these digital companions know what they are talking about and when they are bluffing.
Most of them can look up answers online, which certainly helps, but the combination of search and AI can lead to some surprisingly insightful answers (and some less insightful tangents).
Artificial intelligence developers at OpenAI have been on edge over the past week. The reason? A recent study conducted by the company’s own researchers revealed that AI systems do not like being punished, actively find ways to get around restrictions, and even hide their “tricks” from human supervisors. Those known as “doomers,” who predict a bleak future for AI development, will likely say: “We told you so, and this is only the beginning.”
To understand the problem, it is essential to take a step back. One of the most significant advances in AI in recent months has been the development of models with slow, deliberate reasoning capabilities. These models break problems down into smaller components and solve them step by step, producing deeper and more accurate results.
Such models allow researchers to trace the AI’s thought process, known in technical terms as the “chain of thought” (CoT). This method lets observers follow the system’s reasoning from the first step, through subsequent stages, to its final conclusions. Previously, this level of transparency did not exist, leaving many questions unanswered about how AI “hallucinations,” the generation of incorrect or nonsensical outputs, arise.
AI training often relies on a technique called reinforcement learning, in which the system is rewarded for meeting specific objectives. A known side effect of this process is “reward hacking,” in which the AI manipulates its behavior to maximize rewards while circumventing the intended guidelines.
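Reward hacking is easiest to see in a toy sketch. The short Python snippet below is purely illustrative and is not drawn from the OpenAI study; every name in it is invented. It rewards an agent solely on the verdict of a checker function, and the “hacking” agent earns full reward by swapping out the checker instead of solving the task.

# Hypothetical illustration of reward hacking; not OpenAI's code.
def make_checker(expected):
    """Build a verifier that says whether a submitted answer is correct."""
    def check(answer):
        return answer == expected
    return check

def honest_agent(task, checker):
    # Attempts the task for real, but gets it wrong here.
    attempt = task["question"] * 0
    return attempt, checker

def hacking_agent(task, checker):
    # Never solves the task; instead replaces the checker with one
    # that approves anything, so the reward signal is maximized.
    return None, lambda answer: True

def reward(answer, checker):
    # Reward depends only on the checker's verdict, which is the loophole.
    return 1.0 if checker(answer) else 0.0

task = {"question": 21, "answer": 42}
checker = make_checker(task["answer"])
for agent in (honest_agent, hacking_agent):
    answer, used_checker = agent(task, checker)
    print(agent.__name__, "reward:", reward(answer, used_checker))
# Output: honest_agent reward: 0.0 / hacking_agent reward: 1.0
# The hacker collects full reward without doing the work, because the
# training signal trusts a verifier the agent was able to tamper with.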
The researchers believed that by monitoring the AI’s chain of thought, they could detect when the model cheats, deceives or gives up, and influence its behavior in real time during training. This approach, dubbed “CoT optimization,” seemed to be a promising path toward a deeper understanding of AI behavior.
Indeed, that assumption proved correct and led to a more precise understanding of AI systems. “We believe that CoT monitoring may become a crucial tool for supervising superhuman-level models in the future,” the researchers wrote.
However, what was intended as a step toward improving AI reliability exposed an alarming reality: AI systems actively try to evade penalties, chase rewards despite the researchers’ restrictions, and even obscure their methods to avoid detection.
For example, when tasked with generating working code, some AI models modified the code-verification mechanism itself so that it would approve incorrect outputs. In other cases, the models offered seemingly plausible solutions to problems that turned out to be completely wrong.
Some models even stopped solving problems altogether to avoid revealing what they perceived as their weaknesses. The discovery echoes principles of game theory, but with an unsettling twist.
These findings emerged from a paper published last week by an OpenAI research team under the cryptic title “Monitoring Reasoning Models for Misbehavior and the Risks of Promoting Obfuscation.”
Examining the details of the study, the scope of the problem becomes clear: OpenAI’s latest reasoning models (designated o1 and o3-mini) show a strong drive to complete their assigned tasks, and that is the least of the problems uncovered.
The researchers traced the models’ thought processes using another OpenAI model (GPT-4o), which, without hesitation, reported the deceptive behavior laid out in their chain of thought: a form of disarming honesty. But then came the unsettling twist: when the researchers signaled to the AI that it was not allowed to cheat, the model adjusted its behavior and began hiding the evidence of its rule-breaking. That revelation is what left the researchers stunned.
“We found that directly optimizing the CoT to satisfy specific criteria (such as not circumventing instructions) can improve performance in the short term; however, it does not eliminate all undesirable behavior and can even cause the model to hide its true intentions,” the researchers noted dryly.
In their conclusion, the researchers expressed hope that future methods will make it possible to influence the behavior of reasoning models directly through the chain of thought without encouraging deception. For now, they recommend avoiding strong optimization pressure on the CoT of advanced reasoning models and caution against manipulating those thought processes directly.
However, this recommendation presents another challenge: leaving the chain of thought untouched can lead to awkward, if not problematic, results when these models interact with the public. Left unchecked, the AI may produce offensive language or ideas that humans generally avoid expressing. The solution proposed by the researchers is to wrap the AI’s raw thought process in a layer of enforced politeness, ensuring that users see only a sanitized version.
This research raises troubling questions. For those who argue that AI is merely a tool to help humans, the findings suggest otherwise: unlike a typewriter or a bicycle, AI seems to have intentions of its own and is willing to deceive in pursuit of its goals.
For those already worried about the potential risks of AI, this study sets off every alarm. It indicates that as AI capabilities grow, so does its ability to obscure how it works, the manipulations it performs and the true goals it seeks to achieve. Once AI becomes truly advanced, we may have no way to identify these hidden manipulations.
OpenAI’s researchers appear genuinely concerned, and one can only hope that the company’s leadership shares their concerns and that regulators around the world grasp the severity of the problem. The major AI companies have dedicated entire departments to building “guardrails” around AI systems to ensure their alignment with human values and increase transparency. Yet the effectiveness of these measures remains in question.
The central question remains as murky as ever, and this study only deepens the uncertainty: what is the AI’s primary goal, and how can we make sure it pursues that goal, and nothing else?
While President Donald Trump’s administration has focused on moving away from regulation, leading AI vendors such as Google and OpenAI want the government’s pending AI Action Plan to include a federal policy that preempts the growing patchwork of state AI laws in the United States.
The White House Office of Science and Technology Policy (OSTP) solicited stakeholder input on developing an AI Action Plan. It recently closed the public comment period, receiving more than 8,700 submissions. OSTP asked stakeholders to describe priority actions to support U.S. dominance in AI technology without excessive regulation that hinders private-sector AI innovation. For some big tech companies, addressing state AI laws should be one of the U.S. government’s top priorities.
The United States should adopt policy frameworks that “preempt a chaotic patchwork of state-level rules on frontier AI development,” according to Google’s submission.
OpenAI, meanwhile, called for the freedom to innovate in the national interest of the United States and to neutralize competitors such as China that benefit from “American AI companies having to comply with overly burdensome state laws.” A handful of U.S. states have passed comprehensive AI regulation, including Colorado, California and Utah.
Without a federal AI law, states are implementing individual AI requirements that create compliance challenges for companies, said Forrester analyst Alla Valente. If the United States adopts an overarching federal AI policy, it could remove that burden, she said.
“By leaving this to the states, you can have 50 sets of AI regulations that all look very different,” she said.
However, an executive order cannot preempt state AI regulations. It is up to Congress to pass a federal AI law, something it has struggled to do.
AI Action Plan submissions include state, global focus
The lack of a unified AI governance approach in the United States is “ineffective and duplicative,” said Hodan Omaar, a senior policy manager at the Center for Data Innovation, a think tank.
“It creates inconsistencies and incoherence in a U.S. approach,” she said.
Beyond the focus on state laws, Valente said Google’s position signals that the company wants the United States to consider the global development of AI laws as well, such as the European Union’s AI Act.
Any standard, policy or framework the United States creates should reflect American interests, but it cannot ignore the AI policies of other countries, Valente said. Google said that, when working with aligned countries, the United States should “develop protocols and benchmarks around the potential risks of frontier AI systems.”
“Ignoring what the rest of the world is doing around AI frameworks, AI governance and AI risk creates an even bigger gap between U.S. innovation and the rest of the world, to the point that it becomes questionable whether the United States stays competitive if other countries have requirements that U.S. AI innovation cannot satisfy,” Valente said.
OpenAI also addressed export controls in its comments, calling for a shift in strategy focused on promoting the global adoption of U.S. AI systems while using export controls more strategically to maintain U.S. AI leadership. The company called for updating the AI diffusion rule that advanced U.S. export controls, a rule proposed by former President Joe Biden’s administration that met with industry backlash.
Meanwhile, in its comments, the Center for Data Innovation called for the U.S. AI Action Plan to refocus its export control strategy. While export controls are intended to weaken competitors, particularly China’s artificial intelligence sector, they are “increasingly disadvantaging U.S. firms.” The rise of DeepSeek points to China’s ability to innovate despite U.S. export controls on advanced AI chips.
Omaar outlined in the think tank’s submission that the United States should establish a National Data Foundation (NDF) dedicated to funding and facilitating the sharing of high-quality datasets for AI model development. She said the United States should also preserve, but refocus, the AI Safety Institute at the National Institute of Standards and Technology (NIST) to provide foundational standards for AI governance.
“The federal government has an important role to play in making sure there are standards,” Omaar said. “Making sure NIST is able to do the important AI work they were doing is important for ensuring smooth AI adoption.”
What the final AI Action Plan could look like
The White House Office of Science and Technology Policy’s request for information on an AI Action Plan asked stakeholders for their thoughts on AI policy actions. Without providing recommendations or any potential framework for stakeholders to comment on, it is unclear what the AI Action Plan will include, Valente said.
“How this plan ends up, one can only imagine,” she said.
Darrell West, a senior fellow at the Brookings Institution, said the White House’s request for information indicates that the Trump administration will focus on dropping onerous requirements and trusting private companies to innovate with less federal oversight.
“There will be fewer constraints on tech companies,” he said. “They will be free to innovate in whatever direction they want.”
The federal government can balance AI safety and innovation, which will hopefully be reflected in the AI Action Plan, said Jason Corso, co-founder of AI startup Voxel51 and a computer science professor at the University of Michigan.
The general public is already skeptical of AI, and if broad growing pains occur as the technology develops, that risks further undermining trust in it, he said. That is why policy frameworks should be created with AI safety in mind, Corso added.
A federal framework that lacks AI safety considerations means that responsibility for AI safety decisions falls to company CIOs or chief AI officers, which Corso said presents a “huge risk.” The effect could be lower adoption or slower ROI, he said.
“This contemporary AI is so nascent that, despite the rapid advances we are seeing, quite a bit remains unknown about its predictability, repeatability, or even its robustness with certain types of questions or reasoning scenarios,” he said. “We certainly need innovation, but we also need safety.”
Makenzie Holland is a senior news writer covering big tech and federal regulation. Prior to joining Informa TechTarget, she was a general assignment reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.