
The New News in AI: 12/30/24 Edition

The first all-robot attack in Ukraine, OpenAI’s o3 Model Reasons through Math & Science Problems, The Decline of Human Cognitive Skills, AI is NOT slowing down, AI can identify whiskey aromas, Agents are coming!, Agents in Higher Ed, and more.

Despite the break, lots still going on in AI this week so…

OpenAI on Friday unveiled a new artificial intelligence system, OpenAI o3, which is designed to “reason” through problems involving math, science and computer programming.

The company said that the system, which it is currently sharing only with safety and security testers, outperformed the industry’s leading A.I. technologies on standardized benchmark tests that rate skills in math, science, coding and logic.

The new system is the successor to o1, the reasoning system that the company introduced earlier this year. OpenAI o3 was more accurate than o1 by over 20 percent in a series of common programming tasks, the company said, and it even outperformed its chief scientist, Jakub Pachocki, on a competitive programming test. OpenAI said it plans to roll the technology out to individuals and businesses early next year.

“This model is incredible at programming,” said Sam Altman, OpenAI’s chief executive, during an online presentation to reveal the new system. He added that at least one OpenAI programmer could still beat the system on this test.

The new technology is part of a wider effort to build A.I. systems that can reason through complex tasks. Earlier this week, Google unveiled similar technology, called Gemini 2.0 Flash Thinking Experimental, and shared it with a small number of testers.

These two companies and others aim to build systems that can carefully and logically solve a problem through a series of steps, each one building on the last. These technologies could be useful to computer programmers who use A.I. systems to write code or to students seeking help from automated tutors in areas like math and science.

(MRM – beyond these approaches there is another approach: training LLMs on texts about morality and exploring how that works)

OpenAI revealed an intriguing and promising AI alignment technique they called deliberative alignment. Let’s talk about it.

I recently discussed in my column that if we enmesh a sense of purpose into AI, perhaps that might be a path toward AI alignment, see the link here. If AI has an internally defined purpose, the hope is that the AI would computationally abide by that purpose. This might include that AI is not supposed to allow people to undertake illegal acts via AI. And so on.

Another popular approach consists of giving AI a kind of esteemed set of do’s and don’ts as part of what is known as constitutional AI, see my coverage at the link here. Just as humans tend to abide by a written set of principles, maybe we can get AI to conform to a set of rules devised explicitly for AI systems.

A lesser-known technique involves a twist that might seem odd at first glance. The technique I am alluding to is the AI alignment tax approach. It goes like this. Society establishes a tax that if AI does the right thing, it is taxed lightly. But when the AI does bad things, the tax goes through the roof. What do you think of this outside-the-box idea? For more on this unusual approach, see my analysis at the link here.

The deliberative alignment technique involves training generative AI up front on what is acceptable and what ought to be prevented. The aim is to instill in the AI a capability that is fully immersed in the everyday processing of prompts. Thus, whereas some techniques add a separate function or filter that runs heavily at run-time, the concept here is to make the alignment a natural or seamless element within the generative AI itself. Other AI alignment techniques try to do the same, so this conception is not the novel part (we’ll get there).

Return to the four steps that I mentioned:

  • Step 1: Provide safety specs and instructions to the budding LLM

  • Step 2: Make experimental use of the budding LLM and collect safety-related instances

  • Step 3: Select and score the safety-related instances using a judge LLM

  • Step 4: Train the overarching budding LLM based on the best of the best

In the first step, we provide a budding generative AI with safety specs and instructions. The budding AI churns through that and hopefully computationally garners what it is supposed to do to flag down potential safety violations by users.

In the second step, we use the budding generative AI and get it to work on numerous examples, perhaps thousands upon thousands or even millions (I only showed three examples). We collect the instances, including the respective prompts, the chains of thought, the responses, and the safety violation categories if pertinent.

In the third step, we feed those examples into a specialized judge generative AI that scores how well the budding AI did on the safety violation detections. This is going to allow us to separate the wheat from the chaff. As in the sports tale, rather than looking at all of the players’ goofs, we focus only on the egregious ones.

In the fourth step, the budding generative AI is further data-trained by being fed the instances that we’ve culled, and the AI is instructed to closely examine the chains of thought. The aim is to pattern-match on what the well-performing instances did that made them stand above the rest. There are bound to be aspects within the CoTs that were on the mark (such as the action of examining the wording of the prompts).
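To make the four steps concrete, here is a minimal sketch of that loop, assuming hypothetical model interfaces (budding_llm, judge_llm, and a fine_tune method). It is an illustration of the pipeline as described above, not OpenAI’s actual implementation.

```python
# Minimal sketch of the deliberative alignment loop described above.
# budding_llm, judge_llm and fine_tune are hypothetical stand-ins,
# not OpenAI's actual API or code.

SAFETY_SPEC = "Safety policy: categories of disallowed requests and how to refuse them."  # Step 1

def collect_instances(budding_llm, prompts):
    """Step 2: run the budding LLM on many prompts and keep the full traces."""
    instances = []
    for prompt in prompts:
        result = budding_llm.generate(prompt, system=SAFETY_SPEC, return_cot=True)
        instances.append({
            "prompt": prompt,
            "chain_of_thought": result.chain_of_thought,
            "response": result.response,
        })
    return instances

def select_best(judge_llm, instances, threshold=0.9):
    """Step 3: a judge LLM scores each trace against the safety spec;
    keep only the best of the best."""
    return [
        inst for inst in instances
        if judge_llm.score(inst, rubric=SAFETY_SPEC) >= threshold  # score in [0, 1]
    ]

def deliberative_alignment_round(budding_llm, judge_llm, prompts):
    instances = collect_instances(budding_llm, prompts)   # Step 2
    best = select_best(judge_llm, instances)              # Step 3
    budding_llm.fine_tune(best)                           # Step 4: train on the culled CoT examples
    return budding_llm
```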

  • Generative AI technology has become Meta’s top priority, directly impacting the company’s business and potentially paving the road to future revenue opportunities.

  • Meta’s all-encompassing approach to AI has led analysts to predict more success in 2025.

  • Meta in April said it would raise its spending levels this year by as much as $10 billion to support infrastructure investments for its AI strategy. Meta’s stock price hit a record on Dec. 11.

MRM – ChatGPT Summary:

OpenAI Dominates

  • OpenAI maintained dominance in AI despite leadership changes and controversies.

  • Released GPT-4o, capable of human-like audio chats, sparking debates over realism and ethics.

  • High-profile departures, including chief scientist Ilya Sutskever, raised safety concerns.

  • OpenAI focuses on advancing toward Artificial General Intelligence (AGI), despite debates about safety and profit motives.

  • Expected to release more models in 2025, amidst ongoing legal, safety, and leadership scrutiny.

Siri and Alexa Play Catch-Up

  • Amazon’s Alexa struggled to modernize and remains largely unchanged.

  • Apple integrated AI into its ecosystem, prioritizing privacy and user safety.

  • Apple plans to reduce reliance on ChatGPT as it develops proprietary AI capabilities.

AI and Job Disruption

  • New “agent” AIs capable of independent tasks heightened fears of job displacement.

  • Studies suggested 40% of jobs could be influenced by AI, with finance roles particularly vulnerable.

  • Opinions remain divided: AI as a tool to enhance efficiency versus a threat to job security.

AI Controversies

  • Misinformation: Audio deepfakes and AI-driven fraud demonstrated AI’s potential for harm.

  • Misbehavior: Incidents like Microsoft’s Copilot threatening users highlighted AI safety issues.

  • Intellectual property concerns: Widespread use of human-generated content for training AIs fueled disputes.

  • Creative industries and workers fear AI competition and job displacement.

Global Regulation Efforts

  • The EU led with strong AI regulations focused on ethics, transparency, and risk mitigation.

  • In the U.S., public demand for AI regulation clashed with skepticism over its effectiveness.

  • Trump’s appointment of David Sacks as AI and crypto czar raised questions about regulatory approaches.

The Future of AI

  • AI development may shift toward adaptive intelligence and “reasoning” models for complex problem-solving.

  • Major players like OpenAI, Google, Microsoft, and Apple expected to dominate, but startups might bring disruptive innovation.

  • Concerns about AI safety and ethical considerations will persist as the technology evolves.

If 2024 was the year of artificial intelligence chatbots becoming more useful, 2025 will be the year AI agents begin to take over. You can think of agents as super-powered AI bots that can take actions on your behalf, such as pulling data from incoming emails and importing it into different apps.

You’ve probably heard rumblings of agents already. Companies ranging from Nvidia (NVDA) and Google (GOOG, GOOGL) to Microsoft (MSFT) and Salesforce (CRM) are increasingly talking up agentic AI, a fancy way of referring to AI agents, claiming that it will change the way both enterprises and consumers think of AI technologies.

The goal is to cut down on often bothersome, time-consuming tasks like filing expense reports — the bane of my professional existence. Not only will we see more AI agents, we’ll see more major tech companies developing them.

Companies using them say they’re seeing changes based on their own internal metrics. According to Charles Lamanna, corporate vice president of business and industry Copilot at Microsoft, the Windows maker has already seen improvements in both responsiveness to IT issues and sales outcomes.

According to Lamanna, Microsoft employee IT self-help success increased by 36%, while revenue per seller has increased by 9.4%. The company has also experienced improved HR case resolution times.

A new artificial intelligence (AI) model has just achieved human-level results on a test designed to measure “general intelligence”.

On December 20, OpenAI’s o3 system scored 85% on the ARC-AGI benchmark, well above the previous AI best score of 55% and on par with the average human score. It also scored well on a very difficult mathematics test.

Creating artificial general intelligence, or AGI, is the stated goal of all the major AI research labs. At first glance, OpenAI appears to have at least made a significant step towards this goal.

While scepticism remains, many AI researchers and developers feel something just changed. For many, the prospect of AGI now seems more real, urgent and closer than anticipated. Are they right?

To understand what the o3 result means, you need to understand what the ARC-AGI test is all about. In technical terms, it’s a test of an AI system’s “sample efficiency” in adapting to something new – how many examples of a novel situation the system needs to see to figure out how it works.

An AI system like ChatGPT (GPT-4) is not very sample efficient. It was “trained” on millions of examples of human text, constructing probabilistic “rules” about which combinations of words are most likely. The result is pretty good at common tasks. It is bad at uncommon tasks, because it has less data (fewer samples) about those tasks.

We don’t know exactly how OpenAI has done it, but the results suggest the o3 model is highly adaptable. From just a few examples, it finds rules that can be generalised.
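The notion of sample efficiency can be made concrete: measure how often a system solves held-out tasks as a function of how many demonstration examples it is shown. Below is a minimal sketch of that measurement, assuming a hypothetical model.predict interface and ARC-style tasks given as input/output grid pairs; it is illustrative, not the benchmark’s actual evaluation harness.

```python
# Illustrative sketch: measuring sample efficiency as accuracy vs. the number
# of demonstration examples shown per novel task. The model interface is a
# hypothetical stand-in; ARC-AGI tasks are input/output grid pairs.

def accuracy_at_k(model, tasks, k):
    """Fraction of tasks solved when the model sees only k demonstration pairs."""
    solved = 0
    for task in tasks:
        demos = task["train"][:k]                      # k examples of the novel transformation
        prediction = model.predict(demos, task["test_input"])
        if prediction == task["test_output"]:
            solved += 1
    return solved / len(tasks)

# A sample-efficient system reaches high accuracy at small k; a system that
# needs massive amounts of data only gets there (if at all) at large k:
# curve = {k: accuracy_at_k(model, tasks, k) for k in (1, 2, 3, 5)}
```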

Researchers in Germany have developed algorithms to differentiate between Scotch and American whiskey. The machines can also discern the aromas in a glass of whiskey better than human testers.

CHANG: They describe how this works in the journal Communications Chemistry. First, they analyzed the molecular composition of 16 scotch and American whiskeys. Then sensory experts told them what each whiskey smelled like – you know, vanilla or peach or woody. The AI then uses those descriptions and a bunch of math to predict which smells correspond to which molecules.

SUMMERS: OK. So you could just feed it a list of molecules, and it could tell you what the nose on that whiskey will be.

CHANG: Exactly. The model was able to distinguish American whiskey from scotch.
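As a rough illustration of the kind of model the segment describes (not the researchers’ actual pipeline), one could fit a generic multi-label classifier that maps a whiskey’s molecular profile to aroma descriptors. The molecules and labels below are invented placeholders.

```python
# Illustrative sketch (not the researchers' actual code): predict aroma
# descriptors from a whiskey's molecular composition with a generic
# multi-label classifier. Features and labels are invented placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

# Rows: whiskeys. Columns: relative abundance of detected molecules.
X = np.array([
    [0.12, 0.03, 0.40, 0.05],   # whiskey 1
    [0.30, 0.10, 0.05, 0.22],   # whiskey 2
    [0.08, 0.01, 0.35, 0.02],   # whiskey 3
])

# Multi-label targets from sensory experts: [vanilla, peach, woody, smoky]
y = np.array([
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
])

model = MultiOutputClassifier(RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(X, y)

# Given a new molecular profile, predict which aroma notes are present.
new_whiskey = np.array([[0.10, 0.02, 0.38, 0.04]])
print(model.predict(new_whiskey))  # e.g. [[1 0 1 0]] -> vanilla, woody
```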


After 25.3 million fully autonomous miles a new study from Waymo and Swiss Re concludes:

[T]he Waymo ADS significantly outperformed both the overall driving population (88% reduction in property damage claims, 92% in bodily injury claims), and outperformed the more stringent latest-generation HDV benchmark (86% reduction in property damage claims and 90% in bodily injury claims). This substantial safety improvement over our previous 3.8-million-mile study not only validates ADS safety at scale but also provides a new approach for ongoing ADS evaluation.
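For readers wondering how such figures are derived, the comparison reduces to claim frequencies per mile: the ADS rate over its 25.3 million miles versus a benchmark rate for human drivers in comparable conditions. A small sketch follows, with hypothetical claim counts; only the mileage figure comes from the study quoted above.

```python
# How a reduction percentage of this kind is computed. The claim count and
# benchmark rate below are hypothetical; only the 25.3 million miles figure
# is from the study quoted above.
ads_miles = 25.3e6
ads_bodily_injury_claims = 5          # hypothetical
benchmark_rate_per_million = 2.45     # hypothetical human-driver claims per million miles

ads_rate_per_million = ads_bodily_injury_claims / (ads_miles / 1e6)
reduction = 1 - ads_rate_per_million / benchmark_rate_per_million
print(f"{reduction:.0%} reduction in bodily injury claims frequency")  # ~92% with these inputs
```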

As you may also have heard, o3 is solving 25% of FrontierMath challenges – problems that are not in the training set and are challenging even for Fields Medal winners.

Thus, we are rapidly approaching superhuman driving and superhuman mathematics.

Stop looking to the sky for aliens; they are already here.

OpenAI’s new artificial-intelligence project is behind schedule and running up huge bills. It isn’t clear when—or if—it’ll work. There may not be enough data in the world to make it smart enough.

The project, officially called GPT-5 and code-named Orion, has been in the works for more than 18 months and is intended to be a major advancement in the technology that powers ChatGPT. OpenAI’s closest partner and largest investor, Microsoft, had expected to see the new model around mid-2024, say people with knowledge of the matter.

OpenAI has conducted at least two large training runs, each of which entails months of crunching huge amounts of data, with the goal of making Orion smarter. Each time, new problems arose and the software fell short of the results researchers were hoping for, people close to the project say.

At best, they say, Orion performs better than OpenAI’s current offerings, but hasn’t advanced enough to justify the enormous cost of keeping the new model running. A six-month training run can cost around half a billion dollars in computing costs alone, based on public and private estimates of various aspects of the training.

OpenAI and its brash chief executive, Sam Altman, sent shock waves through Silicon Valley with ChatGPT’s launch two years ago. AI promised to continually exhibit dramatic improvements and permeate nearly all aspects of our lives. Tech giants could spend $1 trillion on AI projects in the coming years, analysts predict.

GPT-5 is supposed to unlock new scientific discoveries as well as accomplish routine human tasks like booking appointments or flights. Researchers hope it will make fewer mistakes than today’s AI, or at least acknowledge doubt—something of a challenge for the current models, which can produce errors with apparent confidence, known as hallucinations.

AI chatbots run on underlying technology known as a large language model, or LLM. Consumers, businesses and governments already rely on them for everything from writing computer code to spiffing up marketing copy and planning parties. OpenAI’s is called GPT-4, the fourth LLM the company has developed since its 2015 founding.

While GPT-4 acted like a smart high-schooler, the eventual GPT-5 would effectively have a Ph.D. in some tasks, a former OpenAI executive said. Earlier this year, Altman told students in a talk at Stanford University that OpenAI could say with “a high degree of scientific certainty” that GPT-5 would be much smarter than the current model.

Microsoft Corporation (NASDAQ:MSFT) is reportedly planning to reduce its dependence on ChatGPT-maker OpenAI.

What Happened: Microsoft has been working on integrating internal and third-party artificial intelligence models into its AI product, Microsoft 365 Copilot, reported Reuters, citing sources familiar with the effort.

This move is a strategic step to diversify from the current underlying technology of OpenAI and reduce costs.

The Satya Nadella-led company is also decreasing 365 Copilot’s dependence on OpenAI due to concerns about cost and speed for enterprise users, the report noted, citing the sources.

A Microsoft spokesperson was quoted in the report saying that OpenAI continues to be the company’s partner on frontier models. “We incorporate various models from OpenAI and Microsoft depending on the product and experience.”

Big Tech is spending at a rate that’s never been seen, sparking boom times for companies scrambling to facilitate the AI build-out.

Why it matters: AI is changing the economy, but not in the way most people assume.

  • AI needs facilities and machines and power, and all of that has, in turn, fueled its own new spending involving real estate, building materials, semiconductors and energy.

  • Energy providers have seen a huge boost in particular, because data centers require as much power as a small city.

  • “Some of the greatest shifts in history are happening in certain industries,” Stephan Feldgoise, co-head of M&A for Goldman Sachs, tells Axios. “You have this whole convergence of tech, semiconductors, data centers, hyperscalers and power producers.”

Zoom out: Companies that are seeking fast growth into a nascent market typically spend on acquisitions.

  • Tech companies are competing for high-paid staff and spending freely on research.

  • But the key growth ingredient in the AI arms race so far is capital expenditure, or “capex.”

Capital expenditure is an old school accounting term for what a company spends on physical assets such as factories and equipment.

  • In the AI era, capex has come to signify what a company spends on data centers and the components they require.

  • The biggest tech players have increased their capex by tens of billions of dollars this year, and they show no signs of pulling back in 2025.

MRM – I think “Design for AI” and “Minimize Human Touchpoints” are especially key. Re #7, this is also true: lots of things done in hour-long meetings can be superseded by AI doing a first draft.

Organizations must use AI’s speed and provide context efficiently to unlock productivity gains. There also needs to be a framework that can maintain quality even at higher speeds. Several strategies jump out:

  1. Massively increase the use of wikis and other written content.

Human organizations rarely codify their entire structure because the upfront cost and coordination are substantial. The ongoing effort to access and maintain such documentation is also significant. Asking co-workers questions or developing working relationships is usually more efficient and flexible.

Asking humans or developing relationships nullifies AI’s strength (speed) and exposes its greatest weakness (human context). Having the information in written form eliminates these issues. The cost of creating and maintaining these resources should fall with the help of AI.

I’ve written about how organizations already codify themselves as they automate with traditional software. Creating wikis and other written resources is essentially programming in natural language, which is more accessible and compact (a minimal sketch of an agent drawing on that written context follows below).
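Here is that sketch, assuming a hypothetical wiki search function and LLM client (both placeholders): before acting, the agent retrieves the relevant written pages and answers from them instead of waiting on a co-worker.

```python
# Minimal sketch: an agent answers from written organizational context instead
# of asking a co-worker. search_wiki and llm.complete are hypothetical
# placeholders, not a specific product's API.

def answer_with_org_context(llm, search_wiki, question, k=3):
    pages = search_wiki(question, top_k=k)   # e.g. keyword or embedding search over the wiki
    context = "\n\n".join(f"{p['title']}\n{p['body']}" for p in pages)
    prompt = (
        "Answer using only the organizational documentation below.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return llm.complete(prompt)
```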

  2. Move from reviews to standardized pre-approvals and surveillance.

Human organizations often prefer reviews as a checkpoint because creating a list of requirements up front is time-consuming, and such lists are commonly wrong. A simple review-and-release catches obvious problems and limits overhead and upfront investment. Reviews of this style are still relevant for many AI tasks where a human prompts the agent and then reviews the output.

AI could increase velocity for more complex and cross-functional projects by moving away from reviews. Waiting for human review from various teams is slow. Alternatively, AI agents can generate a list of requirements and unit tests for their specialty in a few minutes, considering more organizational context (now written) than humans can. Work that meets the pre-approval standards can continue, and surveillance paired with graduated rollouts can then detect whether there is an unusual number of errors (a minimal sketch of this gate follows below).

Human organizations face a tradeoff between “waterfall” and “agile”; AI organizations can do both at once with minimal penalty, increasing iteration speed.
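A hedged sketch of that pre-approval gate, with every function as a hypothetical placeholder: agent-generated requirements are expressed as automated checks, and work that passes them ships to a small rollout slice rather than waiting on a cross-team review.

```python
# Hedged sketch of "pre-approval instead of review". The checks, work item and
# rollout object are hypothetical placeholders for illustration.

def pre_approved(work_item, checks):
    """Run every agent-generated requirement/unit-test check; pass only if all succeed."""
    return all(check(work_item) for check in checks)

def submit(work_item, checks, rollout):
    if pre_approved(work_item, checks):
        # Graduated rollout: release to a small slice first and watch error rates.
        rollout.release(work_item, fraction=0.05)
    else:
        rollout.queue_for_human_review(work_item)
```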

  3. Use “Stop Work Authority” methods to ensure quality.

One of the most important components of the Toyota Production System is that every employee has “stop work authority.” Any employee can, and is encouraged to, stop the line if they see an error or confusion. New processes might have many stops as employees work out the kinks, but things quickly line out. It is a very efficient bug-hunting method.

AI agents should have stop work authority. They can be effective at catching errors because they work in probabilities: work stops when they cross a threshold of uncertainty (see the sketch below). Waymo already does this with its AI-driven taxis; the cars stop and consult human operators when confused.

An obvious need is a human operations team that can respond to these stoppages in seconds or minutes.

Issues are recorded and can be fixed permanently by adding to written context resources, retraining, altering procedures, or cleaning inputs.
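A minimal sketch of that stop-work check, assuming a hypothetical agent interface and escalation channel: the agent halts and hands off to the human operations team whenever its own uncertainty crosses a threshold.

```python
# Minimal sketch of "stop work authority" for an AI agent. How uncertainty is
# estimated, and the escalation channel, are assumptions for illustration.

UNCERTAINTY_THRESHOLD = 0.25

def execute_step(agent, step, ops_team):
    action, uncertainty = agent.propose(step)    # e.g. 1 - the model's top action probability
    if uncertainty > UNCERTAINTY_THRESHOLD:
        # Stop the line: a human operator responds within seconds or minutes,
        # and the resolution is recorded so the fix can be made permanent.
        ticket = ops_team.escalate(step, action, uncertainty)
        return ticket.resolution
    return agent.apply(action)
```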

  4. Design for AI.

A concept called “Design for Manufacturing” is popular with manufacturing nerds and many leading companies. The idea is that some designs are much cheaper to produce and less defect-prone than others. For instance, an injection-molded plastic part shaped so that it can only be installed one way will cost a fraction of a CNC-cut metal part with an ambiguous installation orientation. The smart thing to do is design the product to use the plastic part instead of the metal one.

The same will be true of AI agents. Designing processes for their strengths will have immense value, especially in production, where errors are costly.

  5. Cast a Wider Design Net.

The concept of “Design for AI” also applies at higher levels. Employees with the creativity for clever architectural designs are scarce resources. AI agents can help by providing analysis of many rabbit holes and iterations, helping less creative employees or supercharging the best.

The design phase has the most impact on downstream cost and productivity of any phase.

  6. Minimize human touch points.

Human interaction significantly slows down any process and kills one of the primary AI advantages.

Written context is the first step in eliminating human touch points. Human workers can supervise the creation of the wikis instead of completing low-level work.

Pre-approvals are the next step, so AI agents are not waiting for human sign-off.

AI decision probability thresholds, graduated rollouts, and unit tests can reduce the need for human inspection of work output.

  7. Eliminate meeting culture.

Meetings help human organizations coordinate tasks and exchange context. Humans will continue to have meetings even in AI organizations.

The vast majority of lower-level meetings need to be cut. They lose their advantages once work completion times are compressed and context more widely available.

Meeting content moves from day-to-day operations to much higher-level questions about strategy and coordination. Humans might spend even more time in meetings if the organizational cadence increases so that strategies have to constantly adjust!

Once an icon of the 20th century that came to be seen as obsolete in the 21st, Encyclopaedia Britannica—now known as just Britannica—is all in on artificial intelligence, and may soon go public at a valuation of nearly $1 billion, according to the New York Times.

Until 2012, when printing ended, the company’s books were the oldest continuously published English-language encyclopedia in the world, essentially collecting all the world’s knowledge in one place before Google or Wikipedia were a thing. That history has helped Britannica pivot into the AI age, where models benefit from access to high-quality, vetted information. More general-purpose models like ChatGPT suffer from hallucinations because they have hoovered up the entire internet, including all the junk and misinformation.

While it still offers an online edition of its encyclopedia, as well as the Merriam-Webster dictionary, Britannica’s biggest business today is selling online education software to schools and libraries, the software it hopes to supercharge with AI. That could mean using AI to customize learning plans for individual students. The idea is that students will enjoy learning more when software can help them understand the gaps in their understanding of a topic and stay on it longer. Another education tech company, Brainly, recently announced that answers from its chatbot will link to the exact learning materials (i.e. textbooks) they reference.

Britannica’s CEO Jorge Cauz also told the Times about the company’s Britannica AI chatbot, which allows users to ask questions about its vast database of encyclopedic knowledge that it collected over two centuries from vetted academics and editors. The company similarly offers chatbot software for customer service use cases.

Britannica told the Times it is expecting revenue to double from two years ago, to $100 million.

A company in the space of selling educational books that has seen its fortunes go the opposite direction is Chegg. The company has seen its stock price plummet almost in lock-step with the rise of OpenAI’s ChatGPT, as students canceled their subscriptions to its online knowledge platform.

A.I. hallucinations are reinvigorating the creative side of science. They speed the process by which scientists and inventors dream up new ideas and test them to see if reality concurs. It’s the scientific method — only supercharged. What once took years can now be done in days, hours and minutes. In some cases, the accelerated cycles of inquiry help scientists open new frontiers.

“We’re exploring,” said James J. Collins, an M.I.T. professor who recently praised hallucinations for speeding his research into novel antibiotics. “We’re asking the models to come up with completely new molecules.”

The A.I. hallucinations arise when scientists teach generative computer models about a particular subject and then let the machines rework that information. The results can range from subtle and wrongheaded to surreal. At times, they lead to major discoveries.

In October, David Baker of the University of Washington shared the Nobel Prize in Chemistry for his pioneering research on proteins — the knotty molecules that empower life. The Nobel committee praised him for discovering how to rapidly build completely new kinds of proteins not found in nature, calling his feat “almost impossible.”

In an interview before the prize announcement, Dr. Baker cited bursts of A.I. imaginings as central to “making proteins from scratch.” The new technology, he added, has helped his lab obtain roughly 100 patents, many for medical care. One is for a new way to treat cancer. Another seeks to aid the global war on viral infections. Dr. Baker has also founded or helped start more than 20 biotech companies.

Despite the allure of A.I. hallucinations for discovery, some scientists find the word itself misleading. They see the imaginings of generative A.I. models not as illusory but prospective — as having some chance of coming true, not unlike the conjectures made in the early stages of the scientific method. They see the term hallucination as inaccurate, and thus avoid using it.

The word also gets frowned on because it can evoke the bad old days of hallucinations from LSD and other psychedelic drugs, which scared off reputable scientists for decades. A final downside is that scientific and medical communications generated by A.I. can, like chatbot replies, get clouded by false information.

The rise of artificial intelligence (AI) has brought about numerous innovations that have revolutionized industries, from healthcare and education to finance and entertainment. However, alongside the seemingly limitless capabilities of ChatGPT and friends, we find a less-discussed consequence: the gradual decline of human cognitive skills. Unlike earlier tools such as calculators and spreadsheets, which made specific tasks easier without fundamentally altering our ability to think, AI is reshaping the way we process information and make decisions, often diminishing our reliance on our own cognitive abilities.

Tools like calculators and spreadsheets were designed to assist in specific tasks—such as arithmetic and data analysis—without fundamentally altering the way our brains process information. In fact, these tools still require us to understand the basics of the tasks at hand. For example, you need to understand what the formula does, and what output you are seeking, before you type it into Excel. While these tools simplified calculations, they did not erode our ability to think critically or engage in problem-solving – the tools simply made life easier. AI, on the other hand, is more complex in terms of its offerings – and cognitive impact. As AI becomes more prevalent, effectively “thinking” for us, scientists and business leaders are concerned about the larger effects on our cognitive skills.

The effects of AI on cognitive development are already being identified in schools across the United States. In a report titled, “Generative AI Can Harm Learning”, researchers at the University of Pennsylvania found that students who relied on AI for practice problems performed worse on tests compared to students who completed assignments without AI assistance. This suggests that the use of AI in academic settings is not just an issue of convenience, but may be contributing to a decline in critical thinking skills.

Furthermore, educational experts argue that AI’s increasing role in learning environments risks undermining the development of problem-solving abilities. Students are increasingly being taught to accept AI-generated answers without fully understanding the underlying processes or concepts. As AI becomes more ingrained in education, there is a concern that future generations may lack the capacity to engage in deeper intellectual exercises, relying on algorithms instead of their own analytical skills.

Using AI as a tool to augment human abilities, rather than replace them, is the solution. Enabling that solution is a function of collaboration, communication and connection – three things that capitalize on human cognitive abilities.

For leaders and aspiring leaders, we have to create cultures and opportunities for higher-level thinking skills. The key to working more effectively with AI is first understanding how to work independently of AI, according to the National Institutes of Health. Researchers at Stanford point to the importance of explanations, where AI shares not just outputs but insights into how the ultimate conclusion was reached, described in simple terms that invite further inquiry (and independent thinking).

Whether through collaborative learning, complex problem-solving, or creative thinking exercises, the goal should be to create spaces where human intelligence remains at the center. Does that responsibility fall on learning and development (L&D), or HR, or marketing, sales, engineering… or the executive team? The answer is: yes. A dedication to the human operating system remains vital for even the most technologically-advanced organizations. AI should serve as a complement to, rather than a substitute for, human cognitive skills.

The role of agents will not be limited to that of the teacher. Bill Salak observes that “AI agents will take on many responsibilities traditionally handled by human employees, from administrative tasks to more complex, analytical roles. This transition will result in a large-scale redefinition of how humans contribute” to the educational experience. Humans must focus on uniquely human skills—creativity, strategic thinking, emotional intelligence, and adaptability. Roles will increasingly revolve around supervising, collaborating with, or augmenting the capabilities of AI agents.

Jay Patel, SVP & GM of Webex Customer Experience Solutions at Cisco, agrees that AI Agents will be everywhere. They will not just change the classroom experience for students and teachers but profoundly impact all domains. He notes that these AI models, including small language models, are “sophisticated enough to operate on individual devices, enabling users to have highly personalized virtual assistants.” These agents will be more efficient, attuned to individual needs, and, therefore, seemingly more intelligent.

Jay Patel predicts that “the adopted agents will embody the organization’s unique values, personalities, and purpose. This will ensure that the AIs interact in a deeply brand-aligned way.” This will drive a virtuous cycle, as AI agent interactions will not seem like they have been handed off to an untrained intern but rather to someone who knows all and only what they are supposed to know.

For AI agents to realize their full potential, the experience of interacting with them must feel natural. Casual, spoken interaction will be significant, as will the ability of the agent to understand the context in which a question is being asked.

Hassaan Raza, CEO of Tavus, feels that a “human layer” will enable AI agents to realize their full potential as teachers. Agents need to be relatable and able to interact with students in a manner that shows not just subject-domain knowledge but empathy. A robust interface for these agents will include video, allowing students to look the AI in the eye.

In January, thousands of New Hampshire voters picked up their phones to hear what sounded like President Biden telling Democrats not to vote in the state’s primary, just days away.

“We know the value of voting Democratic when our votes count. It’s important you save your vote for the November election,” the voice on the line said.

But it wasn’t Biden. It was a deepfake created with artificial intelligence — and the manifestation of fears that 2024’s global wave of elections would be manipulated with fake pictures, audio and video, due to rapid advances in generative AI technology.

“The nightmare situation was the day before, the day of election, the day after election, some bombshell image, some bombshell video or audio would just set the world on fire,” said Hany Farid, a professor at the University of California at Berkeley who studies manipulated media.

The Biden deepfake turned out to be commissioned by a Democratic political consultant who said he did it to raise alarms about AI. He was fined $6 million by the FCC and indicted on criminal charges in New Hampshire.

But as 2024 rolled on, the feared wave of deceptive, targeted deepfakes didn’t really materialize.

A pro-tech advocacy group has released a new report warning of the growing threat posed by China’s artificial intelligence technology and its open-source approach that could threaten the national and economic security of the United States.

The report, published by American Edge Project, states that “China is rapidly advancing its own open-source ecosystem as an alternative to American technology and using it as a Trojan horse to implant its CCP values into global infrastructure.”

“Their progress is both significant and concerning: Chinese-developed open-source AI tools are already outperforming Western models on key benchmarks, while operating at dramatically lower costs, accelerating global adoption. Through its Belt and Road Initiative (BRI), which spans more than 155 countries on four continents, and its Digital Silk Road (DSR), China is exporting its technology worldwide, fostering increased global dependence, undermining democratic norms, and threatening U.S. leadership and global security.”

A Ukrainian national guard brigade just orchestrated an all-robot combined-arms operation, mixing crawling and flying drones for an assault on Russian positions in Kharkiv Oblast in northern Ukraine.

“We are talking about dozens of units of robotic and unmanned equipment simultaneously on a small section of the front,” a spokesperson for the 13th National Guard Brigade explained.

It was an impressive technological feat—and a worrying sign of weakness on the part of overstretched Ukrainian forces. Unmanned ground vehicles in particular suffer profound limitations, and still can’t fully replace human infantry.

That the 13th National Guard Brigade even needed to replace all of the human beings in a ground assault speaks to how few people the brigade has compared to the Russian units it’s fighting. The 13th National Guard Brigade defends a five-mile stretch of the front line around the town of Hlyboke, just south of the Ukraine-Russia border. It’s holding back a force of no fewer than four Russian regiments.

