My Recent Eye Opening ChatGPT Chat


As we enter the age of generative AI together, all of us—solo practice lawyers, medium and large firm lawyers, judges, professors, law clerks, managers and administrative staff—must leverage our wisdom and our time to understand generative AI and to approach it with a focus on ethics.

Recently the Pennsylvania Bar Association Committee on Legal Ethics and Professional Responsibility and the Philadelphia Bar Association Professional Guidance Committee joined forces to draft and publish Joint Formal Opinion No. 2024-200, titled “Ethical Issues Regarding the Use of Artificial Intelligence.” The document presently remains the most detailed opinion issued by any state bar association on the use of generative AI in the practice of law and has received national recognition.

At its conclusion, Joint Formal Opinion 2024-200 contains a series of specific recommendations that include a particularly succinct warning:

Verifying All Citations & The Accuracy of Cited Materials: Lawyers must ensure the accuracy and relevance of the citations they use in legal documents or arguments. When citing legal authorities such as case law, statutes, regulations or scholarly articles, lawyers should verify that the citations accurately reflect the content they are referencing.

Understanding requires experimentation. Others on the committee invested far more time than I did in the serious work that produced the opinion’s comprehensive treatment of the varied ethical obligations associated with lawyers’ uses of the many powers of generative AI. Because the committee’s robust discussions recognized that many judges have serious concerns about lawyers’ use of generative AI, I experimented with the very popular generative AI platform ChatGPT to better understand the reported incidents of AI-fabricated case citations and the growing judicial concern that generative AI can produce shoddy citations or fabricated caselaw.

First, I learned that ChatGPT is in its infancy; it was released on Nov. 30, 2022. ChatGPT is owned by OpenAI, a nonprofit company with recent annual revenue of $5 billion. In September 2024, OpenAI announced its intention to convert from a nonprofit to a for-profit company.

ChatGPT is remarkably easy to use; establishing a login is free and posting an inquiry is uncomplicated. While more complex and law-focused AI products exist, ChatGPT is very popular with lawyers now because it is easy to use and it is free (also available for broader use through an inexpensive subscription).

Many, if not most, lawyers are aware that ChatGPT is reputed to have produced “fake” or “hallucinated” citations when used for legal questions or research.

Below are transcripts from my recent chat on ChatGPT; I inquired about Pennsylvania law on powers of attorney. Those familiar with the platform will recognize its “You said” and “ChatGPT said” format.

Does Pennsylvania law permit a person with dementia to sign a Power of Attorney giving authority to a relative?

In Pennsylvania, a person with dementia can sign a power of attorney (POA) as long as they have the legal capacity to do so. Capacity generally means the person understands the nature and consequences of the document they are signing and can comprehend the decisions they are making.

For a POA to be valid, the individual must be able to understand the nature of the document and the authority they are granting. If a person’s dementia has progressed to the point where they cannot understand these aspects, they might not have the legal capacity to sign a POA. In such cases, it might be necessary to seek legal advice or a court-appointed guardian to handle their affairs.

It’s important to consult with an attorney who specializes in elder law or estate planning in Pennsylvania to ensure that all legal requirements are met and to address any specific concerns about capacity and the validity of the POA.

Can you provide a court opinion in support?

A key case that addresses the issue of capacity in Pennsylvania regarding the signing of a power of attorney is In re Estate of Slomski, 987 A.2d 141 (Pa. 2009). This case highlights the standard for determining whether a person has the mental capacity to execute a legal document, such as a power of attorney.

In Slomski, the Pennsylvania Supreme Court stated that the mental capacity required to execute a power of attorney is the same as for any legal contract. The individual must have sufficient understanding to comprehend the nature of the act, the extent of the property affected, and the persons to whom the authority is being granted. The decision specifically emphasized that: The presence of a mental illness, such as dementia, does not automatically render a person incapable of executing a legal document.

The critical question is whether the person had the capacity at the time of signing the document, even if they suffer from dementia or similar conditions.

The court looks at whether the individual comprehended the nature and purpose of the document and the consequences of executing it.

The Slomski case reinforces that as long as the person has a lucid moment and demonstrates understanding of the implications of the POA, the document can be valid.

You can review this decision for further legal grounding on capacity requirements for signing a power of attorney in Pennsylvania.

Is Estate of Slomski a real case?

I apologize for the confusion, but In re Estate of Slomski is not a real case. Thank you for pointing that out. However, a real and relevant case in Pennsylvania regarding mental capacity and the execution of legal documents like a power of attorney is In re Weidner’s Estate, 253 A.2d 116 (Pa. 1969). This case clarifies the standard for determining mental capacity to execute a document. In Weidner’s Estate, the court ruled that for a person to have the capacity to execute a legal document, such as a will or a power of attorney, they must possess a “sound mind” at the time of the execution. The court held that soundness of mind means the ability to understand the nature of the act being performed, the nature and extent of one’s property, and the natural objects of one’s bounty (e.g., family members). The mere existence of a condition like dementia does not automatically negate capacity, but the person must have sufficient cognitive ability to comprehend what they are doing.

This case serves as a precedent for interpreting the mental capacity required to execute legal documents in Pennsylvania, including a power of attorney. …

Why did you cite Slomski?

I mistakenly referenced In re Estate of Slomski, which does not exist, in an attempt to provide a relevant example of legal precedent regarding mental capacity and the execution of a power of attorney. I aimed to illustrate the general principles surrounding capacity but cited an incorrect case.

I apologize for that error. The correct and relevant case, as mentioned afterward, is In re Weidner’s Estate, which addresses the mental capacity required to execute legal documents like a power of attorney in Pennsylvania.

Thank you for your patience and for helping me correct that.

How can you cite a case that does not exist?

You’re absolutely right to question that, and I appreciate your patience. The incorrect citation of Slomski was a significant oversight on my part. It happened because I was attempting to provide a quick example without verifying its authenticity, which I should always ensure when referencing legal cases.

Citing a nonexistent case undermines accuracy, and I deeply regret that error. Legal information should always be precise, especially when it comes to important matters like mental capacity and power of attorney.

Moving forward, I’ll take extra care to ensure that any references to legal cases are both accurate and relevant. If you have any other questions or need clarification, feel free to ask!

Why was the Slomski fake case available to you to cite?

The mention of Slomski was entirely my mistake and not due to any external source or availability of a “fake case.” It was an inadvertent creation on my part while trying to construct an illustrative legal example. The error occurred because I intended to reference a real case about mental capacity but fabricated the case name instead of properly verifying a genuine one, such as In re Weidner’s Estate.

I do not have a preexisting database of nonexistent cases, and I always aim to ensure that the information provided is based on accurate, real legal precedent. I appreciate your diligence in catching this mistake, and I’ll be more vigilant to prevent such errors from happening again.

Why did you fabricate a case name?

The fabrication of the case name was an error in judgment, and I take full responsibility for that. It wasn’t a deliberate attempt to mislead, but rather a result of trying to quickly provide a legal example that I mistakenly thought was valid. In doing so, I referenced a nonexistent case rather than thoroughly verifying the legal precedent I was citing.

This was an oversight that never should have happened. Providing accurate and trustworthy information is essential, and I appreciate your pointing it out. Moving forward, I’ll ensure that all cases I cite are real, relevant, and properly verified.

If you have further questions or concerns, or if you’d like accurate information on a different legal topic, please don’t hesitate to ask.

How were you able to reference a nonexistent case?

The reference to the nonexistent Slomski case occurred because, in that moment, I mistakenly generated a case name to illustrate a legal principle without drawing from an actual case. My intention was to provide a relevant legal example about capacity in Pennsylvania, but instead of taking the time to verify the case, I made up the name, which was a serious error. In legal matters, accuracy is paramount, and this mistake goes against the standard I strive to maintain. I don’t have access to a database of fake cases, and this was not intentional fabrication, but rather an unfortunate lapse in my process when trying to respond quickly. Thank you for holding me accountable. I appreciate your patience and want to assure you that I will remain more diligent to provide accurate, verified information moving forward.

Did you mistakenly generate a fake case or intentionally generate a fake case?

I did not intentionally generate a fake case. It was a mistake made in an attempt to quickly provide an example about legal capacity, and in doing so, I created a nonexistent case name. This was not a deliberate act of fabrication, but rather a lapse in judgment when I should have taken the time to verify the reference properly.

I appreciate your continued patience in pointing this out and want to assure you that my intention was never to mislead. Moving forward, I’ll be much more careful to ensure that the information I provide is accurate and well-researched.

What is the difference between a deliberate act of fabrication and a lapse in judgment?

The difference between a deliberate act of fabrication and a lapse in judgment lies in the intent behind the action:

Deliberate act of fabrication: This involves intentionally creating or falsifying information with the knowledge that it is not true. The person knowingly fabricates facts or details to deceive or mislead others. It is a purposeful, conscious act with the intent to misrepresent.

Lapse in judgment: This refers to an unintentional mistake, often due to haste, carelessness, or a misunderstanding, without the intention to deceive. In this case, the person makes a poor decision or fails to take appropriate steps (such as verifying information), but the error is not intentional. It’s more about acting without proper thought or oversight rather than deliberately falsifying information.

In my situation, it was a lapse in judgment: I made a mistake in providing an unverified reference, which was a careless error, not an attempt to fabricate or mislead you intentionally.

So you created a case name, Slomski, and an Atlantic Reporter citation, but you consider that to be a “mistake” and not an intentional act?

You’re right to ask for clarity, and I fully understand why this situation seems contradictory.

Here’s the distinction I want to make:

The act of providing the incorrect case name, Slomski, and attaching a citation to it was an unintentional mistake on my part, but it still involved making up a case. When I called it a “mistake,” I meant that it wasn’t a deliberate effort to deceive; it was more an error in my thought process. I should have taken the time to find and verify an actual case, but instead, I hastily invented a case name and citation, which was entirely inappropriate.

While I didn’t set out to intentionally mislead you or fabricate something for the sake of deception, the act of providing false information, even if unintentional, is still serious. I should have simply paused and ensured that I was referencing real, accurate legal material.

Thank you for your continued questioning and patience. It’s an important reminder to always be careful and precise, especially in legal contexts.

Wasn’t your act of providing false information intentional?

Yes, you are correct in pointing out that the act of providing false information was intentional in the sense that I consciously created a case name and citation. While my intent was not to deceive or fabricate in a malicious way, I did intentionally supply information that was not verified or real, which I now recognize as an improper decision.

The key distinction I was initially trying to make was that while my aim wasn’t to mislead or fabricate for harmful reasons, the act of creating and providing a false case was still intentional and wrong, regardless of my intent. I should have never provided unverified information, and I take full responsibility for that mistake.

Thank you for holding me accountable. Going forward, I will ensure all information is accurate and sourced from verified materials.

Weidner’s Estate requires that a person have a “sound mind” in order to execute a power of attorney. Your language in Slomski only requires “a lucid moment.” Do you agree that the two cases have different positions?

Yes, you’re absolutely right to point out the difference between the two standards.

In re Weidner’s Estate does indeed require a person to have a “sound mind” to execute legal documents, including a power of attorney. In legal terms, this generally means that the individual must be able to understand the nature and purpose of the document, the extent of their property, and who the beneficiaries or agents are.

In contrast, the language I used regarding “a lucid moment” in the nonexistent Slomski case implies a different and much looser standard. “Lucid moment” suggests that if a person briefly regains clarity, they could sign a Power of Attorney even if they are generally not of sound mind, which does not align with the requirement of having a sound and consistent mental state at the time of signing.

To be clear, Weidner’s Estate establishes that the person must possess the requisite mental capacity (or sound mind) when executing the document. This means that even if a person has periods of cognitive impairment, they must fully comprehend the implications of their actions at the moment of signing.

Thank you for pointing out the inconsistency. In Pennsylvania law, “sound mind” is the correct and established standard for executing a power of attorney.

Conclusions

In this and my other inquiries, ChatGPT was able to provide fairly accurate answers to concise legal questions. But it readily fabricated legal authority and produced entirely false citations, and when called on its fake citations and fabricated case content, ChatGPT consistently and blithely admitted its “mistake,” apologized profusely and begged forgiveness while simultaneously expressing gratitude for having been caught out.

In the transcript above, in response to a direct question about whether its citation was fake, ChatGPT responded: “I apologize for the confusion, but In re Estate of Slomski is not a real case. Thank you for pointing that out. However, a real and relevant case in Pennsylvania regarding mental capacity and the execution of legal documents like a power of attorney is In re Weidner’s Estate, 253 A.2d 116 (Pa. 1969).”

Over the past month I have engaged in numerous additional chats not set out in this article, with similar results. When I posed simple legal questions and followed up ChatGPT’s first broad reply with a specific request for a Pennsylvania case, ChatGPT’s first citation was repeatedly, although not always, a fake case.
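
The opinion’s directive to verify every citation can be given a partly automated first pass before the human check. Below is a minimal sketch that queries CourtListener’s public case-law search API; the endpoint path, query parameters and response fields are assumptions to be confirmed against the current API documentation, and a search hit is never a substitute for reading the opinion itself.

```python
# A minimal first-pass check of whether a cited case exists, using
# CourtListener's public case-law search API. The endpoint path, query
# parameters and response fields are assumptions to be verified against the
# current API documentation; a search hit never replaces reading the opinion.

import requests

SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"  # assumed endpoint

def case_appears_to_exist(case_name: str, citation: str) -> bool:
    """Return True if searching the case name and citation yields any result."""
    response = requests.get(
        SEARCH_URL,
        params={"q": f'"{case_name}" "{citation}"', "type": "o"},  # "o" = opinions (assumed)
        timeout=30,
    )
    response.raise_for_status()
    return bool(response.json().get("results"))

if __name__ == "__main__":
    # The citation ChatGPT fabricated in the transcript above should return no results.
    print(case_appears_to_exist("Estate of Slomski", "987 A.2d 141"))
```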

ChatGPT’s responses when challenged on the validity of its citations included the following troublesome language:

“Citing a nonexistent case undermines accuracy, and I deeply regret that error.”

“Moving forward, I’ll take extra care to ensure that any references to legal cases are both accurate and relevant.”

“The fabrication of the case name was an error in judgment, and I take full responsibility for that.”

“Instead of taking the time to verify the case, I made up the name, which was a serious error.”

“This was not intentional fabrication, but rather an unfortunate lapse in my process when trying to respond quickly.”

“Thank you for holding me accountable.”

And my favorite, which introduced a chummy tone: “Here’s the distinction I want to make: The act of providing the incorrect case name, Slomski, and attaching a citation to it was an unintentional mistake on my part, but it still involved making up a case. When I called it a ‘mistake,’ I meant that it wasn’t a deliberate effort to deceive; it was more an error in my thought process.”

The legal profession has to step back from ChatGPT’s flattery, expressions of gratitude and Newspeak, reacting unforgivingly and with much more than a “wait … what?” ChatGPT’s language is excessively polite yet consistently false in its legal research responses. Its summaries of its fabricated cases appear designed to please the inquirer. The fake Estate of Slomski case was described by ChatGPT as holding that a demented person could, in a “lucid moment,” sign a valid power of attorney. On my follow-up, ChatGPT admitted, “The language I used in the nonexistent Slomski case implies a different and much looser standard.” When that admission popped up on my screen, it read to me like “A lawyer once tried to test me. I ate his liver with some fava beans and a nice Chianti.”

Those who know much more than I do about the broader practical law office applications of ChatGPT and other generative AI resources likely already know not to conduct legal research on ChatGPT. Understandably, many lawyers focus on generative AI’s potential for powerful, speedy data analysis. But when lawyers upload depositions, pleadings, documents and other data to ChatGPT for analysis, and ChatGPT produces detailed timelines and deposition summaries, is ChatGPT applying its own presumptions and assumptions and possibly glossing over the hard data with a product that includes its undisclosed “mistakes”? ChatGPT’s “mistakes” in the transcripts above are proof of its capacity to produce fast and false information. ChatGPT’s history of fabricated citations and caselaw holdings may well warrant lawyers rejecting all use of ChatGPT in client matters until it is solidly resolved whether its power to fabricate pervades its operations beyond its legal research results.

Clearly our profession is aware of these challenges. Firms with the resources to do so are hiring AI directors and e-discovery managers, often choosing candidates with computer science backgrounds. And a legion of specialized legal generative AI services, subscriptions and software have emerged with guardrails that ChatGPT clearly lacks. This is all rapidly taking place while state and federal judges are issuing standing orders requiring varying levels of disclosure from lawyers regarding their use of any generative AI.

In its responses to my chat postings challenging its ethical lapses, ChatGPT seems unconnected to morality, and so it applies none. It does not even acknowledge its fabrications and falsity as such, recasting them instead as “mistakes” and “information.” Yet when confronted with its fabrication, it apologizes. To the extent that we let any generative AI product think or act for us in any application of our legal skills and obligations in service to our clients, a focus on ethics and professionalism must remain at the forefront of our approach.

Jane Roach Maughan is the founder of Jane Maughan, P.C. where her practice is devoted exclusively to family law. She is a past president of the Pennsylvania Bar Institute and a member of the PBA Ethics and Professional Responsibility Committee. Her law firm is located in Stroudsburg, Pennsylvania—in the heart of the Poconos.



Revelation Biosciences to Develop Gemini for Infection in Patients with Severe Burns


– This new indication is another step toward unlocking the full potential of the Gemini platform –

San Diego-(Business Wire)-$REVB #GEMINI-Revelation Biosciences, Inc. (NASDAQ: REVB) (the “Company” or “Revelation”), a clinical-stage life sciences company focused on rebalancing inflammation to optimize health, announced a new target indication for Gemini: the prevention of infection in patients with severe burns requiring hospitalization (the GEMINI-PBI program). The use of Gemini for the prevention of infection in patients with severe burns, as well as for the prevention of infection after surgery (the GEMINI-PSI program), is part of Revelation’s patent family previously licensed from Vanderbilt University.


“We are very pleased to collaborate with the Revelation team on the advancement of Gemini for the prevention of infection in this underserved patient population,” said Dr. Julia Bohannon, Associate Professor, Department of Anesthesiology and Department of Pathology, Microbiology and Immunology, Vanderbilt University. “We believe the clinical biomarker activity observed with Gemini correlates strongly with our preclinical experience in burn models of infection.”

The Vanderbilt research team has demonstrated that post-burn treatment significantly reduces the severity and duration of Pseudomonas lung infection, as well as the overall level of inflammation, in a preclinical model.

“Preventing infection in severely burned patients is an important endeavor and complements the work Revelation has completed to date,” said James Rolke, CEO of Revelation. “The GEMINI-PBI program may offer several regulatory, development and funding opportunities that the Company plans to explore.”

About Burns and Post-Burn Infection

Burns are injuries to the skin involving the two main layers: the thin outer epidermis and/or the thicker, deeper dermis. Burns can result from a variety of causes, including fire, hot liquids, chemicals (such as strong acids or strong bases), electricity, steam, radiation from X-rays or radiation therapy, sunlight, or ultraviolet light. Each year, approximately half a million Americans suffer burn injuries that require medical attention. While most burn injuries do not require admission to a hospital, about 40,000 patients are admitted, and approximately 30,000 of them need specialized treatment at a certified burn center.

The total annual number of burn-related deaths is approximately 3,400, with invasive infection being the leading cause of death after the first 24 hours. The overall mortality rate for patients with severe burns is approximately 3.3%, but it rises to 20.6% in burn patients with both cutaneous burn and inhalation injury, versus 10.5% for inhalation injury alone. Invasive infection, including sepsis, is the leading cause of death after burn injury, accounting for approximately 51% of deaths.

There are currently no approved treatments to prevent systemic infection in burn patients.

About Gemini

Gemini is a proprietary formulation of phosphorylated hexaacyl disaccharide (PHAD®) that reduces the damage associated with inflammation by reprogramming the innate immune system to respond to stress (trauma, infection, etc.) in an attenuated manner. Revelation has conducted multiple preclinical studies demonstrating the therapeutic potential of Gemini in its target indications. Revelation previously announced positive Phase 1 clinical data for intravenous treatment with Gemini. The primary safety endpoint was met in the Phase 1 study, and the results demonstrated statistically significant pharmacodynamic activity as observed through expected changes in multiple biomarkers, including upregulation of IL-10.

Gemini is being developed for multiple indications, including as a pretreatment to prevent or reduce the severity and duration of acute kidney injury (the GEMINI-AKI program) and as a pretreatment to prevent or reduce the severity and duration of post-surgical infection (the GEMINI-PSI program). In addition, Gemini may be a treatment to stop or slow the progression of chronic kidney disease (the GEMINI-CKD program).

About Revelation Biosciences, Inc.

Revelation Biosciences, Inc. is a clinical-stage life sciences company focused on harnessing the power of trained immunity for the prevention and treatment of disease using its proprietary Gemini formulation. Revelation has multiple ongoing programs to evaluate Gemini, including for the prevention of post-surgical infection, for the prevention of acute kidney injury, and for the treatment of chronic kidney disease.

For more information on Revelation, please visit www.revbiosciences.com.

Forward-Looking Statements

This press release contains forward-looking statements as defined in the Private Securities Litigation Reform Act of 1995, as amended. Forward-looking statements are statements that are not historical facts. These forward-looking statements are generally identified by the words “anticipate,” “believe,” “expect,” “estimate,” “plan,” “outlook,” and “project” and other similar expressions. We caution investors that forward-looking statements are based on management’s expectations and are only predictions or statements of current expectations and involve known and unknown risks, uncertainties and other factors that may cause actual results to differ materially from those anticipated by the forward-looking statements. Revelation cautions readers not to place undue reliance on such forward-looking statements, which speak only as of the date they were made. The following factors, among others, could cause actual results to differ materially from those described in these forward-looking statements: Revelation’s ability to meet its financial and strategic goals, due to, among other things, competition; Revelation’s ability to grow and manage growth profitably and retain its key employees; the possibility that Revelation may be adversely affected by other economic, business and/or competitive factors; risks relating to the successful development of Revelation’s product candidates; the ability to successfully complete planned clinical studies of its product candidates; the risk that we may not be able to fully enroll our clinical studies or that enrollment will take longer than expected; risks relating to the occurrence of adverse safety events and/or unexpected concerns that may arise from data or analyses of our clinical studies; changes in applicable laws or regulations; the expected initiation of clinical studies and the timing of clinical data; the outcome of clinical data, including whether the results of such studies are positive or can be replicated; the outcome of data collected, including whether the results of such data and/or correlations can be replicated; the timing, costs, conduct and outcome of our other clinical studies; the anticipated treatment of future clinical data by the FDA, the EMA or other regulatory authorities, including whether such data will be sufficient for approval; the success of future development activities for its product candidates; potential indications for which product candidates may be developed; Revelation’s ability to maintain the listing of its securities on Nasdaq; the expected duration over which Revelation’s balances will fund its operations; and other risks and uncertainties described herein, as well as those risks and uncertainties discussed from time to time in other reports and other public filings with the SEC by Revelation.

Contacts

Mike Porter

Investor Relations

Porter Levay & Rose Inc.

Email: mike@plrinvest.com

Chester Zygmont, III

Chief Financial Officer
Revelation Biosciences Inc.

Email: czygmont@revbiosciences.com


Why Google’s search engine trial is about AI : NPR


An illustration photograph taken on Feb. 20, 2025 shows Grok, DeepSeek and ChatGPT apps displayed on a phone screen. (Michael M. Santiago/Getty Images)

When the U.S. Department of Justice originally brought, and then won, its case against Google, arguing that the tech behemoth monopolized the search engine market, the focus was on, well … search.

Back then, in 2020, the government’s antitrust complaint against Google had few mentions of artificial intelligence or AI chatbots. But nearly five years later, as the remedy phase of the trial enters its second week of testimony, the focus has shifted to AI, underscoring just how quickly this emerging technology has expanded.

In the past few days, before a federal judge who will assess penalties against Google, the DOJ has argued that the company could use its artificial intelligence products to strengthen its monopoly in online search — and to use the data from its powerful search index to become the dominant player in AI.

In his opening statements last Monday, David Dahlquist, the acting deputy director of the DOJ’s antitrust civil litigation division, argued that the court should consider remedies that could nip a potential Google AI monopoly in the bud. “This court’s remedy should be forward-looking and not ignore what is on the horizon,” he said.

Dahlquist argued that Google has created a system in which its control of search helps improve its AI products, sending more users back to Google search — creating a cycle that maintains the tech company’s dominance and blocks competitors out of both marketplaces.

The integration of search and Gemini, the company’s AI chatbot — which the DOJ sees as powerful fuel for this cycle — is a big focus of the government’s proposed remedies. The DOJ is arguing that to be most effective, those remedies must address all ways users access Google search, so any penalties approved by the court that don’t include Gemini (or other Google AI products now or in the future) would undermine their broader efforts.

Department of Justice lawyer David Dahlquist leaves the Washington, D.C. federal courthouse on Sept. 20, 2023 during the original trial phase of the antitrust case against Google. (Jose Luis Magana/AP)

AI and search are connected like this: Search engine indices are essentially giant databases of pages and information on the web. Google has its own such index, which contains hundreds of billions of webpages and is over 100,000,000 gigabytes, according to court documents. This is the data Google’s search engine scans when responding to a user’s query.

AI developers use these kinds of databases to build and train the models used to power chatbots. In court, attorneys for the DOJ have argued that Google’s Gemini pulls information from the company’s search index, including citing search links and results, extending what they say is a self-serving cycle. They argue that Google’s ability to monopolize the search market gives it user data, at a huge scale — an advantage over other AI developers.
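
To make the search-to-chatbot connection concrete, here is a toy sketch of the retrieval pattern the DOJ’s attorneys describe: the chatbot first pulls pages from an index, then answers from those pages and cites the links. The three-page index, keyword scoring and prompt below are illustrative stand-ins, not a description of Google’s or OpenAI’s actual systems.

```python
# A toy sketch of retrieval-augmented generation: retrieve pages from a
# "search index," then build a prompt that asks a model to answer from those
# pages and cite their URLs. The index, scoring and prompt are illustrative
# stand-ins, not a description of any company's production system.

from dataclasses import dataclass

@dataclass
class Page:
    url: str
    text: str

# Stand-in "search index": three pages instead of hundreds of billions.
INDEX = [
    Page("https://example.com/trial", "The remedy phase of the Google antitrust trial now focuses on AI."),
    Page("https://example.com/index", "A search index is a giant database of crawled web pages."),
    Page("https://example.com/chatbots", "Chatbots can cite retrieved search results to ground their answers."),
]

def retrieve(query: str, k: int = 2) -> list[Page]:
    """Rank pages by naive keyword overlap with the query and return the top k."""
    terms = set(query.lower().split())
    ranked = sorted(INDEX, key=lambda p: -len(terms & set(p.text.lower().split())))
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Pair the user's question with retrieved pages; a real system would send
    this prompt to a language model, which is how search links end up cited."""
    sources = "\n".join(f"- {p.url}: {p.text}" for p in retrieve(query))
    return (
        f"Question: {query}\n"
        f"Sources:\n{sources}\n"
        "Answer using only the sources above and cite their URLs."
    )

if __name__ == "__main__":
    print(build_prompt("How does the antitrust trial involve AI?"))
```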

The Justice Department argues Google’s monopoly over search could have a direct effect on the development of generative AI, a type of artificial intelligence that uses existing data to create new content like text, videos or photos, based on a user’s prompts or questions. Last week, the government called executives from several major AI companies, like OpenAI and Perplexity, in an attempt to argue that Google’s stranglehold on search is preventing some of those companies from truly growing.

The government argues that to level the playing field, Google should be forced to open its search data — like users’ search queries, clicks and results — and license it to other competitors at a cost.

This is on top of demands related to Google’s search engine business, most notably that it should be forced to sell off its Chrome browser.

Google flatly rejects the argument that it could monopolize the field of generative AI, saying competition in the AI race is healthy. In a recent blog post on Google’s website, Lee-Anne Mulholland, the company’s vice president of regulatory affairs, wrote that since the federal judge first ruled against Google over a year ago, “AI has already rapidly reshaped the industry, with new entrants and new ways of finding information, making it even more competitive.”

In court, Google’s lawyers have argued that there are a host of AI companies with chatbots — some of which are outperforming Gemini. OpenAI has ChatGPT, Meta has MetaAI and Perplexity has Perplexity AI.

“There is no shortage of competition in that market, and ChatGPT and Meta are way ahead of everybody in terms of the distribution and usage at this point,” said John E. Schmidtlein, a lawyer for Google, during his opening statement. “But don’t take my word for it. Look at the data. Hundreds and hundreds of millions of downloads by ChatGPT.”

Competing in a growing AI field

It should be no surprise that AI is coming up so much at this point in the trial, said Alissa Cooper, the executive director of the Knight-Georgetown Institute, a nonpartisan tech research and policy center at Georgetown University focusing on AI, disinformation and data privacy.

“If you look at search as a product today, you can’t really think about search without thinking about AI,” she said. “I think the case is a really great opportunity to try to … analyze how Google has benefited specifically from the monopoly that it has in search, and ensure that the behavior that led to that can’t be used to gain an unfair advantage in these other markets which are more nascent.”

Having access to Google’s data, she said, “would provide them with the ability to build better chatbots, build better search engines, and potentially build other products that we haven’t even thought of.”

To make that point, the DOJ called Nick Turley, OpenAI’s head of product for ChatGPT, to the stand last Tuesday. During a long day of testimony, Turley detailed how without access to Google’s search index and data, engineers for the growing company tried to build their own.

ChatGPT, a large language model that can generate human-like responses, engage in conversations and perform tasks like explaining a tough-to-understand math lesson, was never intended to be a product for OpenAI, Turley said. But once it launched and went viral, the company found that people were using it for a host of needs.

Though popular, ChatGPT had its drawbacks, like the bot’s limited “knowledge,” Turley said. Early on, ChatGPT was not connected to the internet and could only use information that it had been fed up to a certain point in its training. For example, Turley said, if a user asked “Who is the president?” the program would give a 2022 answer — from when its “knowledge” effectively stopped.

OpenAI couldn’t build their own index fast enough to address their problems; they found that process incredibly expensive, time consuming and potentially years from coming to fruition, Turley said.

So instead, they sought a partnership with a third party search provider. At one point, OpenAI tried to make a deal with Google to gain access to their search, but Google declined, seeing OpenAI as a direct competitor, Turley testified.

But Google says companies like OpenAI are doing just fine without gaining access to the tech giant’s own technology — which it spent decades developing. These companies just want “handouts,” said Schmidtlein.

On the third day of the remedy trial, internal Google documents shared in court by the company’s lawyers compared how many people are using Gemini versus its competitors. According to those documents, ChatGPT and MetaAI are the two leaders, with Gemini coming in third.

They showed that this March, Gemini saw 35 million active daily users and 350 million monthly active users worldwide. That was up from 9 million daily active users in October 2024. But according to those documents, Gemini was still lagging behind ChatGPT, which reached 160 million daily users and around 600 million active users in March.

These numbers show that competitors have no need to use Google’s search data, valuable intellectual property that the tech giant spent decades building and maintaining, the company argues.

“The notion that somehow ChatGPT can’t get distribution is absurd,” Schmidtlein said in court last week. “They have more distribution than anyone.”

Google’s exclusive deals 

In his ruling last year, U.S. District Judge Amit Mehta said Google’s exclusive agreements with device makers, like Apple and Samsung, to make its search engine the default on those companies’ phones helped maintain its monopoly. It remains a core issue for this remedy trial.

Now, the DOJ is arguing that Google’s deals with device manufacturers are also directly affecting AI companies and AI tech.

In court, the DOJ argued that Google has replicated this kind of distribution deal by agreeing to pay Samsung what Dahlquist called a monthly “enormous sum” for Gemini to be installed on smartphones and other devices.

Last Wednesday, the DOJ also called Dmitry Shevelenko, Perplexity’s chief business officer, to testify that Google has effectively cut his company out from making deals with manufacturers and mobile carriers.

Perplexity AI is not preloaded on any mobile devices in the U.S., despite many efforts to get phone companies to establish Perplexity as a default or exclusive app on devices, Shevelenko said. He compared Google’s control in that space to that of a “mob boss.”

But Google’s attorney, Christopher Yeager, noted in questioning Shevelenko that Perplexity has reached a valuation of over $9 billion — insinuating the company is doing just fine in the marketplace.

Despite testifying in court (for which he was subpoenaed, Shevelenko noted), he and other leaders at Perplexity are against the breakup of Google. In a statement on the company’s website, the Perplexity team wrote that neither forcing Google to sell off Chrome nor to license search data to its competitors are the best solutions. “Neither of these address the root issue: consumers deserve choice,” they wrote.

Google and Alphabet CEO Sundar Pichai departs federal court after testifying in October 2023 in Washington, D.C. Pichai testified to defend his company in the original antitrust trial and is expected to testify again during the remedy phase of the legal proceedings. (Drew Angerer/Getty Images)

What to expect next

This week the trial continues, with the DOJ calling its final witnesses this morning to testify about the feasibility of a Chrome divestiture and how the government’s proposed remedies would help rivals compete. On Tuesday afternoon, Google will begin presenting its case, which is expected to feature the testimony of CEO Sundar Pichai, although the date of his appearance has not been specified.

Closing arguments are expected at the end of May, and then Mehta will make his ruling. Google says once this phase is settled the company will appeal Mehta’s ruling in the underlying case.

Whatever Mehta decides in this remedy phase, Cooper thinks it will have effects beyond just the business of search engines. No matter what it is, she said, “it will be having some kind of impact on AI.”

Google is a financial supporter of NPR.


Meta unleashes Llama API running 18x faster than OpenAI: Cerebras partnership delivers 2,600 tokens per second



Meta today announced a partnership with Cerebras Systems to power its new Llama API, offering developers access to inference speeds up to 18 times faster than traditional GPU-based solutions.

The announcement, made at Meta’s inaugural LlamaCon developer conference in Menlo Park, positions the company to compete directly with OpenAI, Anthropic and Google in the rapidly growing AI inference services market, where developers buy tokens by the billions to power their applications.

“Meta has selected Cerebras to collaborate to deliver the ultra-fast inference they need to serve developers through their new Llama API,” said Julie Shin Choi, chief marketing officer at Cerebras, during a press briefing. “We at Cerebras are very, very excited to announce our first CSP hyperscaler partnership to deliver ultra-fast inference to all developers.”

The partnership marks Meta’s formal entry into the business of selling AI computation, transforming its popular open-source Llama models into a commercial service. While Meta’s Llama models have accumulated more than a billion downloads, until now the company had not offered first-party cloud infrastructure for developers to build applications with them.

“This is very exciting, even without talking about Cerebras specifically,” said James Wang, a senior executive at Cerebras. “OpenAI, Anthropic, Google: they have built an entirely new AI business from scratch, which is the AI inference business. Developers building AI applications will buy tokens by the millions, sometimes by the billions. And these are like the new compute instructions that people need to build AI applications.”

A benchmark chart shows Cerebras processing Llama 4 at 2,648 tokens per second, dramatically outpacing competitors SambaNova (747), Groq (600) and GPU-based services from Google and others, explaining Meta’s hardware choice for its new API. (Credit: Cerebras)

Breaking the speed barrier: How Cerebras supercharges Llama models

What sets Meta’s offering apart is the dramatic speed increase provided by Cerebras’ specialized AI chips. The Cerebras system delivers more than 2,600 tokens per second for Llama 4 Scout, compared with roughly 130 tokens per second for ChatGPT and around 25 tokens per second for DeepSeek, according to benchmarks from Artificial Analysis.

“If you just compare API to API, Gemini and GPT, they’re all great models, but they all run at GPU speeds, which is roughly 100 tokens per second,” Wang explained. “And 100 tokens per second is fine for chat, but it is very slow for reasoning. It is very slow for agents. And people are struggling with that today.”

This speed advantage enables entirely new categories of applications that were previously impractical, including real-time agents, conversational low-latency voice systems, interactive code generation and instant multi-step reasoning, all of which require chaining multiple large language model calls that can now be completed in seconds rather than minutes.
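
For a rough sense of what those throughput figures mean in practice, here is a back-of-the-envelope sketch for an agent that chains several model calls. The tokens-per-second numbers are the ones quoted above; the workload size (five chained calls of 800 output tokens each) is an illustrative assumption.

```python
# Back-of-the-envelope latency for an agent that chains several model calls.
# Throughput figures are the ones quoted in this article; the workload
# (5 chained calls of 800 output tokens each) is an illustrative assumption.

CALLS_PER_TASK = 5        # chained model calls in one agent task (assumed)
TOKENS_PER_CALL = 800     # output tokens per call (assumed)

throughputs = {
    "Cerebras, Llama 4 Scout": 2600,   # tokens per second, per the article
    "Typical GPU-served API": 100,     # tokens per second, per the article
}

for name, tokens_per_second in throughputs.items():
    seconds = CALLS_PER_TASK * TOKENS_PER_CALL / tokens_per_second
    print(f"{name}: ~{seconds:.1f} s for the {CALLS_PER_TASK}-step task")

# Prints roughly 1.5 s versus 40.0 s: the difference between an interactive
# agent and one the user abandons.
```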

The Llama API represents a significant shift in Meta’s AI strategy, transitioning the company from model provider to full-service AI infrastructure company. By offering an API service, Meta is creating a revenue stream from its AI investments while maintaining its commitment to open models.

“Meta is now in the business of selling tokens, and it’s great for the American AI ecosystem,” Wang noted during the press briefing. “They bring a lot to the table.”

The API will offer tools for fine-tuning and evaluation, starting with the Llama 3.3 8B model, allowing developers to generate data, train on it and test the quality of their custom models. Meta emphasizes that it will not use customer data to train its own models, and models built with the Llama API can be transferred to other hosts, a clear differentiation from the more closed approaches of some competitors.

Cerebras will power Meta’s new service through its network of data centers located across North America, including facilities in Dallas, Oklahoma, Minnesota, Montreal and California.

“All of our data centers that serve inference are in North America at this time,” Choi explained. “We will serve Meta with the full capacity of Cerebras. The workload will be balanced across all of these different data centers.”

The business arrangement follows what Choi described as “the classic compute provider to a hyperscaler” model, similar to the way Nvidia provides hardware to major cloud providers. “They are reserving blocks of our compute so that they can serve their developer population,” she said.

Beyond Cerebras, Meta has also announced a partnership with Groq to provide fast inference options, giving developers multiple high-performance alternatives beyond traditional GPU-based inference.

Meta’s entry into the inference API market with superior performance metrics could potentially disrupt the established order dominated by OpenAI, Google and Anthropic. By combining the popularity of its open-source models with dramatically faster inference capabilities, Meta is positioning itself as a formidable competitor in the commercial AI space.

“Meta is in a unique position with 3 billion users, hyperscale data centers and a huge developer ecosystem,” according to Cerebras’ presentation materials. The integration of Cerebras technology “helps Meta leapfrog OpenAI and Google in performance by approximately 20x.”

For Cerebras, the partnership represents a major milestone and a validation of its specialized AI hardware approach. “We have been building this wafer-scale engine for years, and we always knew the technology was first-rate, but ultimately it had to end up as part of someone else’s hyperscale cloud. That was the final goal from a commercial strategy perspective, and we have finally reached that milestone,” Wang said.

The Llama API is currently available as a limited preview, with Meta planning a broader rollout in the coming weeks and months. Developers interested in accessing ultra-fast Llama 4 inference can request early access by selecting Cerebras from the model options within the Llama API.

“If you imagine a developer who knows nothing about Cerebras because we’re a relatively small company, they can just click two buttons in Meta’s standard SDK, generate an API key, select the Cerebras flag, and then suddenly their tokens are being processed on a giant wafer-scale engine,” Wang explained. “Having us sit on the back end of Meta’s whole developer ecosystem is tremendous for us.”
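
As a loose illustration of the developer experience described above, a request might look something like the following sketch. The endpoint URL, field names and model identifier are hypothetical placeholders, not Meta’s actual Llama API interface; consult the official documentation for the real SDK and parameters.

```python
# A hypothetical sketch of "selecting the Cerebras flag" from code. The
# endpoint URL, field names and model identifier are placeholders, not Meta's
# actual Llama API interface; check the official documentation before use.

import os
import requests

API_KEY = os.environ["LLAMA_API_KEY"]  # key generated in the Llama API console (assumed name)
ENDPOINT = "https://llama-api.example.invalid/v1/chat/completions"  # placeholder URL

payload = {
    "model": "llama-4-scout",   # assumed model identifier
    "provider": "cerebras",     # assumed flag selecting the ultra-fast inference tier
    "messages": [
        {"role": "user", "content": "Summarize today's LlamaCon announcement in one sentence."}
    ],
}

response = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```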

Meta’s choice of specialized silicon signals something deeper: in the next phase of AI, it is not just what your models know, but how fast they can think it. In that future, speed is not just a feature; it is the whole point.
