The New News in AI: 12/30/24 Edition

The first all-robot attack in Ukraine, OpenAI's o3 model reasons through math and science problems, the decline of human cognitive skills, AI is NOT slowing down, AI can identify whiskey aromas, agents are coming!, agents in higher ed, and more.

Despite the break, lots still going on in AI this week so…

OpenAI on Friday unveiled a new artificial intelligence system, OpenAI o3, which is designed to “reason” through problems involving math, science and computer programming.

The company said that the system, which it is currently sharing only with safety and security testers, outperformed the industry’s leading A.I. technologies on standardized benchmark tests that rate skills in math, science, coding and logic.

The new system is the successor to o1, the reasoning system that the company introduced earlier this year. OpenAI o3 was more accurate than o1 by over 20 percent in a series of common programming tasks, the company said, and it even outperformed its chief scientist, Jakub Pachocki, on a competitive programming test. OpenAI said it plans to roll the technology out to individuals and businesses early next year.

“This model is incredible at programming,” said Sam Altman, OpenAI’s chief executive, during an online presentation to reveal the new system. He added that at least one OpenAI programmer could still beat the system on this test.

The new technology is part of a wider effort to build A.I. systems that can reason through complex tasks. Earlier this week, Google unveiled similar technology, called Gemini 2.0 Flash Thinking Experimental, and shared it with a small number of testers.

These two companies and others aim to build systems that can carefully and logically solve a problem through a series of steps, each one building on the last. These technologies could be useful to computer programmers who use A.I. systems to write code or to students seeking help from automated tutors in areas like math and science.

(MRM – beyond these approaches, there is another: training LLMs on texts about morality and exploring how that works.)

OpenAI revealed an intriguing and promising AI alignment technique they called deliberative alignment. Let’s talk about it.

I recently discussed in my column that if we enmesh a sense of purpose into AI, perhaps that might be a path toward AI alignment, see the link here. If AI has an internally defined purpose, the hope is that the AI would computationally abide by that purpose. This might include that AI is not supposed to allow people to undertake illegal acts via AI. And so on.

Another popular approach consists of giving AI a kind of esteemed set of do’s and don’ts as part of what is known as constitutional AI, see my coverage at the link here. Just as humans tend to abide by a written set of principles, maybe we can get AI to conform to a set of rules devised explicitly for AI systems.

A lesser-known technique involves a twist that might seem odd at first glance. The technique I am alluding to is the AI alignment tax approach. It goes like this. Society establishes a tax that if AI does the right thing, it is taxed lightly. But when the AI does bad things, the tax goes through the roof. What do you think of this outside-the-box idea? For more on this unusual approach, see my analysis at the link here.

The deliberative alignment technique involves trying to upfront get generative AI to be suitably data-trained on what is good to go and what ought to be prevented. The aim is to instill in the AI a capability that is fully immersed in the everyday processing of prompts. Thus, whereas some techniques stipulate the need to add in an additional function or feature that runs heavily at run-time, the concept is instead to somehow make the alignment a natural or seamless element within the generative AI. Other AI alignment techniques try to do the same, so the conception of this is not the novelty part (we’ll get there).

Let's return to the four steps that I mentioned:

  • Step 1: Provide safety specs and instructions to the budding LLM

  • Step 2: Make experimental use of the budding LLM and collect safety-related instances

  • Step 3: Select and score the safety-related instances using a judge LLM

  • Step 4: Train the overarching budding LLM based on the best of the best

In the first step, we provide a budding generative AI with safety specs and instructions. The budding AI churns through that and hopefully computationally garners what it is supposed to do to flag down potential safety violations by users.

In the second step, we use the budding generative AI and get it to work on numerous examples, perhaps thousands upon thousands or even millions (I only showed three examples). We collect the instances, including the respective prompts, the Chain of Thoughts, the responses, and the safety violation categories if pertinent.

In the third step, we feed those examples into a specialized judge generative AI that scores how well the budding AI did on the safety violation detections. This allows us to separate the wheat from the chaff. Like the sports tale, rather than looking at all of the players' goofs, we focus only on the egregious ones.

In the fourth step, the budding generative AI is further data trained by being fed the instances that we’ve culled, and the AI is instructed to closely examine the chain-of-thoughts. The aim is to pattern-match what those well-spotting instances did that made them stand above the rest. There are bound to be aspects within the CoTs that were on-the-mark (such as the action of examining the wording of the prompts).
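
To make the four steps concrete, here is a minimal sketch of how such a pipeline might be wired together. Everything in it (the function names, the spec string, the scoring threshold) is an illustrative assumption, not OpenAI's actual code or API:

```python
# Illustrative sketch of the four deliberative-alignment steps described above.
# All names (budding_llm, judge_llm, fine_tune) are hypothetical placeholders.

SAFETY_SPEC = "Refuse requests that facilitate illegal acts; explain the refusal briefly."  # Step 1

def collect_instances(prompts, budding_llm):
    """Step 2: run the budding LLM on many prompts, keeping prompt, CoT, response, category."""
    instances = []
    for prompt in prompts:
        cot, response, category = budding_llm(prompt, spec=SAFETY_SPEC)
        instances.append({"prompt": prompt, "cot": cot,
                          "response": response, "category": category})
    return instances

def select_best(instances, judge_llm, threshold=0.9):
    """Step 3: a judge LLM scores each instance; keep only the best of the best."""
    return [inst for inst in instances if judge_llm(inst, spec=SAFETY_SPEC) >= threshold]

def deliberative_alignment(prompts, budding_llm, judge_llm, fine_tune):
    """Step 4: further train the budding LLM on the culled, high-scoring CoT examples."""
    best = select_best(collect_instances(prompts, budding_llm), judge_llm)
    return fine_tune(budding_llm, best)
```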

  • Generative AI technology has become Meta’s top priority, directly impacting the company’s business and potentially paving the road to future revenue opportunities.

  • Meta’s all-encompassing approach to AI has led analysts to predict more success in 2025.

  • Meta in April said it would raise its spending levels this year by as much as $10 billion to support infrastructure investments for its AI strategy. Meta’s stock price hit a record on Dec. 11.

MRM – ChatGPT Summary:

OpenAI Dominates

  • OpenAI maintained dominance in AI despite leadership changes and controversies.

  • Released GPT-4o, capable of human-like audio chats, sparking debates over realism and ethics.

  • High-profile departures, including chief scientist Ilya Sutskever, raised safety concerns.

  • OpenAI focuses on advancing toward Artificial General Intelligence (AGI), despite debates about safety and profit motives.

  • Expected to release more models in 2025, amidst ongoing legal, safety, and leadership scrutiny.

Siri and Alexa Play Catch-Up

  • Amazon’s Alexa struggled to modernize and remains largely unchanged.

  • Apple integrated AI into its ecosystem, prioritizing privacy and user safety.

  • Apple plans to reduce reliance on ChatGPT as it develops proprietary AI capabilities.

AI and Job Disruption

  • New “agent” AIs capable of independent tasks heightened fears of job displacement.

  • Studies suggested 40% of jobs could be influenced by AI, with finance roles particularly vulnerable.

  • Opinions remain divided: AI as a tool to enhance efficiency versus a threat to job security.

AI Controversies

  • Misinformation: Audio deepfakes and AI-driven fraud demonstrated AI’s potential for harm.

  • Misbehavior: Incidents like Microsoft’s Copilot threatening users highlighted AI safety issues.

  • Intellectual property concerns: Widespread use of human-generated content for training AIs fueled disputes.

  • Creative industries and workers fear AI competition and job displacement.

Global Regulation Efforts

  • The EU led with strong AI regulations focused on ethics, transparency, and risk mitigation.

  • In the U.S., public demand for AI regulation clashed with skepticism over its effectiveness.

  • Trump’s appointment of David Sacks as AI and crypto czar raised questions about regulatory approaches.

The Future of AI

  • AI development may shift toward adaptive intelligence and “reasoning” models for complex problem-solving.

  • Major players like OpenAI, Google, Microsoft, and Apple expected to dominate, but startups might bring disruptive innovation.

  • Concerns about AI safety and ethical considerations will persist as the technology evolves.

If 2024 was the year of artificial intelligence chatbots becoming more useful, 2025 will be the year AI agents begin to take over. You can think of agents as super-powered AI bots that can take actions on your behalf, such as pulling data from incoming emails and importing it into different apps.

You’ve probably heard rumblings of agents already. Companies ranging from Nvidia (NVDA) and Google (GOOG, GOOGL) to Microsoft (MSFT) and Salesforce (CRM) are increasingly talking up agentic AI, a fancy way of referring to AI agents, claiming that it will change the way both enterprises and consumers think of AI technologies.

The goal is to cut down on often bothersome, time-consuming tasks like filing expense reports — the bane of my professional existence. Not only will we see more AI agents, we’ll see more major tech companies developing them.

Companies using them say they’re seeing changes based on their own internal metrics. According to Charles Lamanna, corporate vice president of business and industry Copilot at Microsoft, the Windows maker has already seen improvements in both responsiveness to IT issues and sales outcomes.

According to Lamanna, Microsoft employee IT self-help success increased by 36%, while revenue per seller has increased by 9.4%. The company has also experienced improved HR case resolution times.

A new artificial intelligence (AI) model has just achieved human-level results on a test designed to measure “general intelligence”.

On December 20, OpenAI’s o3 system scored 85% on the ARC-AGI benchmark, well above the previous AI best score of 55% and on par with the average human score. It also scored well on a very difficult mathematics test.

Creating artificial general intelligence, or AGI, is the stated goal of all the major AI research labs. At first glance, OpenAI appears to have at least made a significant step towards this goal.

While scepticism remains, many AI researchers and developers feel something just changed. For many, the prospect of AGI now seems more real, urgent and closer than anticipated. Are they right?

To understand what the o3 result means, you need to understand what the ARC-AGI test is all about. In technical terms, it’s a test of an AI system’s “sample efficiency” in adapting to something new – how many examples of a novel situation the system needs to see to figure out how it works.

An AI system like ChatGPT (GPT-4) is not very sample efficient. It was “trained” on millions of examples of human text, constructing probabilistic “rules” about which combinations of words are most likely. The result is pretty good at common tasks. It is bad at uncommon tasks, because it has less data (fewer samples) about those tasks.

We don’t know exactly how OpenAI has done it, but the results suggest the o3 model is highly adaptable. From just a few examples, it finds rules that can be generalised.
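
As a toy illustration of what "sample efficiency" means here (my own example, far simpler than actual ARC-AGI puzzles): given just two input-output grid pairs, a solver must find a rule that explains both and then apply it to a new grid.

```python
# Toy stand-in for an ARC-style task (not the real benchmark): infer a transformation
# rule from a handful of examples, then generalise it to an unseen input.

train_pairs = [                                   # (input grid, output grid)
    ([[1, 2], [3, 4]], [[2, 1], [4, 3]]),
    ([[5, 0, 6], [0, 7, 0]], [[6, 0, 5], [0, 7, 0]]),
]
test_input = [[8, 9], [0, 1]]

def mirror(grid):                                 # candidate rule: flip each row left-right
    return [list(reversed(row)) for row in grid]

def rotate(grid):                                 # candidate rule: rotate 90 degrees clockwise
    return [list(row) for row in zip(*grid[::-1])]

# A sample-efficient solver needs only these few pairs to pick the right rule.
for name, rule in {"mirror": mirror, "rotate": rotate}.items():
    if all(rule(x) == y for x, y in train_pairs):
        print(name, "->", rule(test_input))       # prints: mirror -> [[9, 8], [1, 0]]
```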

Researchers in Germany have developed algorithms to differentiate between Scotch and American whiskey. The machines can also discern the aromas in a glass of whiskey better than human testers.

CHANG: They describe how this works in the journal Communications Chemistry. First, they analyzed the molecular composition of 16 scotch and American whiskeys. Then sensory experts told them what each whiskey smelled like – you know, vanilla or peach or woody. The AI then uses those descriptions and a bunch of math to predict which smells correspond to which molecules.

SUMMERS: OK. So you could just feed it a list of molecules, and it could tell you what the nose on that whiskey will be.

CHANG: Exactly. The model was able to distinguish American whiskey from scotch.
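
The general recipe described here (represent each whiskey as a vector of molecular measurements, then learn the mapping to origin and aroma notes) can be sketched in a few lines. This is an illustrative reconstruction with synthetic data, not the researchers' actual code or dataset:

```python
# Hedged sketch of the approach: molecular-feature vectors in, origin and aroma labels out.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)
X = rng.random((16, 40))                 # 16 whiskeys x 40 molecular features (synthetic)
origin = rng.integers(0, 2, 16)          # 0 = Scotch, 1 = American (synthetic labels)
aromas = rng.integers(0, 2, (16, 3))     # presence of e.g. vanilla / peach / woody notes

origin_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, origin)
aroma_model = MultiOutputClassifier(
    RandomForestClassifier(n_estimators=100, random_state=0)).fit(X, aromas)

new_sample = rng.random((1, 40))         # molecular profile of an unseen whiskey
print("Predicted origin:", origin_model.predict(new_sample))
print("Predicted aroma flags (vanilla, peach, woody):", aroma_model.predict(new_sample))
```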

After 25.3 million fully autonomous miles, a new study from Waymo and Swiss Re concludes:

[T]he Waymo ADS significantly outperformed both the overall driving population (88% reduction in property damage claims, 92% in bodily injury claims), and outperformed the more stringent latest-generation HDV benchmark (86% reduction in property damage claims and 90% in bodily injury claims). This substantial safety improvement over our previous 3.8-million-mile study not only validates ADS safety at scale but also provides a new approach for ongoing ADS evaluation.

As you may also have heard, o3 is solving 25% of FrontierMath challenges; these problems are not in the training set and are challenging even for Fields Medal winners.

Thus, we are rapidly approaching superhuman driving and superhuman mathematics.

Stop looking to the sky for aliens; they are already here.

OpenAI’s new artificial-intelligence project is behind schedule and running up huge bills. It isn’t clear when—or if—it’ll work. There may not be enough data in the world to make it smart enough.

The project, officially called GPT-5 and code-named Orion, has been in the works for more than 18 months and is intended to be a major advancement in the technology that powers ChatGPT. OpenAI’s closest partner and largest investor, Microsoft, had expected to see the new model around mid-2024, say people with knowledge of the matter.

OpenAI has conducted at least two large training runs, each of which entails months of crunching huge amounts of data, with the goal of making Orion smarter. Each time, new problems arose and the software fell short of the results researchers were hoping for, people close to the project say.

At best, they say, Orion performs better than OpenAI’s current offerings, but hasn’t advanced enough to justify the enormous cost of keeping the new model running. A six-month training run can cost around half a billion dollars in computing costs alone, based on public and private estimates of various aspects of the training.

OpenAI and its brash chief executive, Sam Altman, sent shock waves through Silicon Valley with ChatGPT’s launch two years ago. AI promised to continually exhibit dramatic improvements and permeate nearly all aspects of our lives. Tech giants could spend $1 trillion on AI projects in the coming years, analysts predict.

GPT-5 is supposed to unlock new scientific discoveries as well as accomplish routine human tasks like booking appointments or flights. Researchers hope it will make fewer mistakes than today’s AI, or at least acknowledge doubt—something of a challenge for the current models, which can produce errors with apparent confidence, known as hallucinations.

AI chatbots run on underlying technology known as a large language model, or LLM. Consumers, businesses and governments already rely on them for everything from writing computer code to spiffing up marketing copy and planning parties. OpenAI’s is called GPT-4, the fourth LLM the company has developed since its 2015 founding.

While GPT-4 acted like a smart high-schooler, the eventual GPT-5 would effectively have a Ph.D. in some tasks, a former OpenAI executive said. Earlier this year, Altman told students in a talk at Stanford University that OpenAI could say with “a high degree of scientific certainty” that GPT-5 would be much smarter than the current model.

Microsoft Corporation (NASDAQ:MSFT) is reportedly planning to reduce its dependence on ChatGPT-maker OpenAI.

What Happened: Microsoft has been working on integrating internal and third-party artificial intelligence models into its AI product, Microsoft 365 Copilot, reported Reuters, citing sources familiar with the effort.

This move is a strategic step to diversify from the current underlying technology of OpenAI and reduce costs.

The Satya Nadella-led company is also decreasing 365 Copilot’s dependence on OpenAI due to concerns about cost and speed for enterprise users, the report noted, citing the sources.

A Microsoft spokesperson was quoted in the report saying that OpenAI continues to be the company’s partner on frontier models. “We incorporate various models from OpenAI and Microsoft depending on the product and experience.”

Big Tech is spending at a rate that’s never been seen, sparking boom times for companies scrambling to facilitate the AI build-out.

Why it matters: AI is changing the economy, but not in the way most people assume.

  • AI needs facilities and machines and power, and all of that has, in turn, fueled its own new spending involving real estate, building materials, semiconductors and energy.

  • Energy providers have seen a huge boost in particular, because data centers require as much power as a small city.

  • “Some of the greatest shifts in history are happening in certain industries,” Stephan Feldgoise, co-head of M&A for Goldman Sachs, tells Axios. “You have this whole convergence of tech, semiconductors, data centers, hyperscalers and power producers.”

Zoom out: Companies that are seeking fast growth into a nascent market typically spend on acquisitions.

  • Tech companies are competing for high-paid staff and spending freely on research.

  • But the key growth ingredient in the AI arms race so far is capital expenditure, or “capex.”

Capital expenditure is an old-school accounting term for what a company spends on physical assets such as factories and equipment.

  • In the AI era, capex has come to signify what a company spends on data centers and the components they require.

  • The biggest tech players have increased their capex by tens of billions of dollars this year, and they show no signs of pulling back in 2025.

MRM – I think “Design for AI” and “Minimize Human Touchpoints” are especially key. Re #7, this is also true. Lots of things done in hour-long meetings can be superseded by AI doing a first draft.

Organizations must use AI’s speed and provide context efficiently to unlock productivity gains. There also needs to be a framework that can maintain quality even at higher speeds. Several strategies jump out:

  1. Massively increase the use of wikis and other written content.

Human organizations rarely codify their entire structure because the upfront cost and coordination are substantial. The ongoing effort to access and maintain such documentation is also significant. Asking co-workers questions or developing working relationships is usually more efficient and flexible.

Asking humans or developing relationships nullifies AI’s strength (speed) and exposes its greatest weakness (human context). Having the information in written form eliminates these issues. The cost of creating and maintaining these resources should fall with the help of AI.

I’ve written about how organizations already codify themselves as they automate with traditional software. Creating wikis and other written resources is essentially programming in natural language, which is more accessible and compact.

  2. Move from reviews to standardized pre-approvals and surveillance.

Human organizations often prefer reviews as a checkpoint because creating a list of requirements up front is time-consuming, and such lists are commonly wrong. A simple review-and-release process catches obvious problems and limits overhead and upfront investment. Reviews of this style are still relevant for many AI tasks where a human prompts the agent and then reviews the output.

AI could increase velocity for more complex and cross-functional projects by moving away from reviews. Waiting for human review from various teams is slow. Alternatively, AI agents can generate a list of requirements and unit tests for their specialty in a few minutes, considering more organizational context (now written) than humans can. Work that meets the pre-approval standards can continue, and then surveillance paired with graduated rollouts can detect whether there is an unusual number of errors.

Human organizations face a tradeoff between “waterfall” and “agile”; AI organizations can do both at once with minimal penalty, increasing iteration speed.
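
As a rough sketch of what "pre-approval plus surveillance" could look like in practice (my own illustration, with made-up checks and thresholds, not something from the article): work ships only if it passes pre-agreed checks, then rolls out gradually while an error-rate monitor can halt it.

```python
# Illustrative pre-approval gate plus graduated rollout with error-rate surveillance.

def pre_approved(work_output, checks):
    """Pre-approval gate: every agent-generated requirement/unit test must pass."""
    return all(check(work_output) for check in checks)

def graduated_rollout(work_output, serve, stages=(0.01, 0.1, 0.5, 1.0), max_error_rate=0.02):
    """Surveillance: widen exposure stage by stage, stopping if errors spike."""
    for fraction in stages:
        errors, total = serve(work_output, fraction)   # deploy to this fraction of traffic
        if total and errors / total > max_error_rate:
            return f"halted at {fraction:.0%} rollout"
    return "fully rolled out"

# Toy usage with invented checks and a pretend serving function.
checks = [lambda out: "TODO" not in out, lambda out: len(out) > 0]
serve = lambda out, fraction: (0, int(1000 * fraction))   # pretend zero errors observed

draft = "refactored billing module"
if pre_approved(draft, checks):
    print(graduated_rollout(draft, serve))
```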

  3. Use “Stop Work Authority” methods to ensure quality.

One of the most important components of the Toyota Production System is that every employee has “stop work authority.” Any employee can, and is encouraged to, stop the line if they see an error or confusion. New processes might have many stops as employees work out the kinks, but things quickly smooth out. It is a very efficient bug-hunting method.

AI agents should have stop work authority. They can be effective in catching errors because they work in probabilities. Work stops when they cross a threshold of uncertainty. Waymo already does this with AI-driven taxis. The cars stop and consult human operators when confused.

An obvious need is a human operations team that can respond to these stoppages in seconds or minutes.

Issues are recorded and can be fixed permanently by adding to written context resources, retraining, altering procedures, or cleaning inputs.
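
A minimal sketch of stop-work authority for an agent might look like the following; the confidence scores, threshold, and escalation hook are all assumptions for illustration, not a real system:

```python
# Sketch: each step returns a result plus a confidence score; the agent stops the line
# and escalates to the human ops team when confidence falls below a threshold.
import logging

CONFIDENCE_THRESHOLD = 0.8
logger = logging.getLogger("stop_work")

def run_with_stop_work(steps, escalate_to_human):
    for name, step in steps:
        result, confidence = step()
        if confidence < CONFIDENCE_THRESHOLD:
            # Stop work, record the issue, and hand off to a human for resolution.
            logger.warning("stop-work triggered at %s (confidence %.2f)", name, confidence)
            result = escalate_to_human(name, result)
        yield name, result

# Toy usage: one confident step, one uncertain step that gets escalated.
steps = [
    ("parse_invoice", lambda: ({"total": 120.0}, 0.97)),
    ("match_vendor", lambda: ({"vendor": "???"}, 0.42)),
]
human = lambda name, result: {**result, "resolved_by": "ops team"}
print(dict(run_with_stop_work(steps, human)))
```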

  4. Design for AI.

A concept called “Design for Manufacturing” is popular with manufacturing nerds and many leading companies. The idea is that some actions are much cheaper and defect-free than others. For instance, an injection molded plastic part with a shape that only allows installation one way will be a fraction of the cost of a CNC-cut metal part with an ambiguous installation orientation. The smart thing to do is design a product to use the plastic part instead of a metal one.

The same will be true of AI agents. Designing processes for their strengths will have immense value, especially in production, where errors are costly.

  5. Cast a Wider Design Net.

The concept of “Design for AI” also applies at higher levels. Employees with the creativity for clever architectural designs are scarce resources. AI agents can help by providing analysis of many rabbit holes and iterations, helping less creative employees or supercharging the best.

The design phase has the most impact on downstream cost and productivity of any phase.

  6. Minimize human touch points.

Human interaction significantly slows down any process and kills one of the primary AI advantages.

Written context is the first step in eliminating human touch points. Human workers can supervise the creation of the wikis instead of completing low-level work.

Pre-approvals are the next step, so AI agents are not left waiting for human sign-off.

AI decision probability thresholds, graduated rollouts, and unit tests can reduce the need for human inspection of work output.

  7. Eliminate meeting culture.

Meetings help human organizations coordinate tasks and exchange context. Humans will continue to have meetings even in AI organizations.

The vast majority of lower-level meetings need to be cut. They lose their advantages once work completion times are compressed and context is more widely available.

Meeting content moves from day-to-day operations to much higher-level questions about strategy and coordination. Humans might spend even more time in meetings if the organizational cadence increases so that strategies have to constantly adjust!

Once an icon of the 20th century seen as obsolete in the 21st, Encyclopaedia Britannica—now known as just Britannica—is all in on artificial intelligence, and may soon go public at a valuation of nearly $1 billion, according to the New York Times.

Until 2012, when printing ended, the company's books served as the oldest continuously published English-language encyclopedia in the world, essentially collecting all the world's knowledge in one place before Google or Wikipedia were a thing. That has helped Britannica pivot into the AI age, where models benefit from access to high-quality, vetted information. More general-purpose models like ChatGPT suffer from hallucinations because they have hoovered up the entire internet, including all the junk and misinformation.

While it still offers an online edition of its encyclopedia, as well as the Merriam-Webster dictionary, Britannica’s biggest business today is selling online education software to schools and libraries, the software it hopes to supercharge with AI. That could mean using AI to customize learning plans for individual students. The idea is that students will enjoy learning more when software can help them understand the gaps in their understanding of a topic and stay on it longer. Another education tech company, Brainly, recently announced that answers from its chatbot will link to the exact learning materials (i.e. textbooks) they reference.

Britannica’s CEO Jorge Cauz also told the Times about the company’s Britannica AI chatbot, which allows users to ask questions about its vast database of encyclopedic knowledge that it collected over two centuries from vetted academics and editors. The company similarly offers chatbot software for customer service use cases.
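
One plausible shape for a chatbot grounded in a vetted corpus (purely illustrative; Britannica has not described its implementation, and the corpus entries and the llm_complete stub below are invented) is to retrieve the most relevant curated entry and answer only from it:

```python
# Illustrative retrieve-then-answer sketch over a small vetted corpus (invented entries).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = {
    "Photosynthesis": "Photosynthesis is the process by which green plants use sunlight to make food.",
    "Printing press": "The printing press, introduced by Johannes Gutenberg around 1440, mechanized book production.",
}

vectorizer = TfidfVectorizer().fit(corpus.values())
doc_matrix = vectorizer.transform(corpus.values())

def answer(question, llm_complete):
    # Retrieve the single most relevant vetted entry, then ask the model to answer from it alone.
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    title, passage = list(corpus.items())[scores.argmax()]
    prompt = f"Answer using only this vetted entry ({title}): {passage}\n\nQuestion: {question}"
    return llm_complete(prompt)

# Stub model so the sketch runs end to end; a real deployment would call an actual LLM here.
print(answer("How do plants make food?", llm_complete=lambda prompt: prompt.splitlines()[0]))
```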

Britannica told the Times it is expecting revenue to double from two years ago, to $100 million.

A company in the space of selling educational books that has seen its fortunes go the opposite direction is Chegg. The company has seen its stock price plummet almost in lock-step with the rise of OpenAI’s ChatGPT, as students canceled their subscriptions to its online knowledge platform.

A.I. hallucinations are reinvigorating the creative side of science. They speed the process by which scientists and inventors dream up new ideas and test them to see if reality concurs. It’s the scientific method — only supercharged. What once took years can now be done in days, hours and minutes. In some cases, the accelerated cycles of inquiry help scientists open new frontiers.

“We’re exploring,” said James J. Collins, an M.I.T. professor who recently praised hallucinations for speeding his research into novel antibiotics. “We’re asking the models to come up with completely new molecules.”

The A.I. hallucinations arise when scientists teach generative computer models about a particular subject and then let the machines rework that information. The results can range from subtle and wrongheaded to surreal. At times, they lead to major discoveries.

In October, David Baker of the University of Washington shared the Nobel Prize in Chemistry for his pioneering research on proteins — the knotty molecules that empower life. The Nobel committee praised him for discovering how to rapidly build completely new kinds of proteins not found in nature, calling his feat “almost impossible.”

In an interview before the prize announcement, Dr. Baker cited bursts of A.I. imaginings as central to “making proteins from scratch.” The new technology, he added, has helped his lab obtain roughly 100 patents, many for medical care. One is for a new way to treat cancer. Another seeks to aid the global war on viral infections. Dr. Baker has also founded or helped start more than 20 biotech companies.

Despite the allure of A.I. hallucinations for discovery, some scientists find the word itself misleading. They see the imaginings of generative A.I. models not as illusory but prospective — as having some chance of coming true, not unlike the conjectures made in the early stages of the scientific method. They see the term hallucination as inaccurate, and thus avoid using it.

The word also gets frowned on because it can evoke the bad old days of hallucinations from LSD and other psychedelic drugs, which scared off reputable scientists for decades. A final downside is that scientific and medical communications generated by A.I. can, like chatbot replies, get clouded by false information.

The rise of artificial intelligence (AI) has brought about numerous innovations that have revolutionized industries, from healthcare and education to finance and entertainment. However, alongside the seemingly limitless capabilities of ChatGPT and friends, we find a less-discussed consequence: the gradual decline of human cognitive skills. Unlike earlier tools such as calculators and spreadsheets, which made specific tasks easier without fundamentally altering our ability to think, AI is reshaping the way we process information and make decisions, often diminishing our reliance on our own cognitive abilities.

Tools like calculators and spreadsheets were designed to assist in specific tasks—such as arithmetic and data analysis—without fundamentally altering the way our brains process information. In fact, these tools still require us to understand the basics of the tasks at hand. For example, you need to understand what the formula does, and what output you are seeking, before you type it into Excel. While these tools simplified calculations, they did not erode our ability to think critically or engage in problem-solving – the tools simply made life easier. AI, on the other hand, is more complex in terms of its offerings – and cognitive impact. As AI becomes more prevalent, effectively “thinking” for us, scientists and business leaders are concerned about the larger effects on our cognitive skills.

The effects of AI on cognitive development are already being identified in schools across the United States. In a report titled “Generative AI Can Harm Learning,” researchers at the University of Pennsylvania found that students who relied on AI for practice problems performed worse on tests than students who completed assignments without AI assistance. This suggests that the use of AI in academic settings is not just an issue of convenience, but may be contributing to a decline in critical thinking skills.

Furthermore, educational experts argue that AI’s increasing role in learning environments risks undermining the development of problem-solving abilities. Students are increasingly being taught to accept AI-generated answers without fully understanding the underlying processes or concepts. As AI becomes more ingrained in education, there is a concern that future generations may lack the capacity to engage in deeper intellectual exercises, relying on algorithms instead of their own analytical skills.

Using AI as a tool to augment human abilities, rather than replace them, is the solution. Enabling that solution is a function of collaboration, communication and connection – three things that capitalize on human cognitive abilities.

For leaders and aspiring leaders, we have to create cultures and opportunities for higher-level thinking skills. The key to working more effectively with AI is first understanding how to work independently of AI, according to the National Institutes of Health. Researchers at Stanford point to the importance of explanations: AI sharing not just outputs but insights into how the ultimate conclusion was reached, described in simple terms that invite further inquiry (and independent thinking).

Whether through collaborative learning, complex problem-solving, or creative thinking exercises, the goal should be to create spaces where human intelligence remains at the center. Does that responsibility fall on learning and development (L&D), or HR, or marketing, sales, engineering… or the executive team? The answer is: yes. A dedication to the human operating system remains vital for even the most technologically-advanced organizations. AI should serve as a complement to, rather than a substitute for, human cognitive skills.

The role of agents will not be limited to that of the teacher. Bill Salak observes that “AI agents will take on many responsibilities traditionally handled by human employees, from administrative tasks to more complex, analytical roles. This transition will result in a large-scale redefinition of how humans contribute” to the educational experience. Humans must focus on unique skills—creativity, strategic thinking, emotional intelligence, and adaptability. Roles will increasingly revolve around supervising, collaborating with, or augmenting the capabilities of AI agents.

Jay Patel, SVP & GM of Webex Customer Experience Solutions at Cisco, agrees that AI Agents will be everywhere. They will not just change the classroom experience for students and teachers but profoundly impact all domains. He notes that these AI models, including small language models, are “sophisticated enough to operate on individual devices, enabling users to have highly personalized virtual assistants.” These agents will be more efficient, attuned to individual needs, and, therefore, seemingly more intelligent.

Jay Patel predicts that “the adopted agents will embody the organization’s unique values, personalities, and purpose. This will ensure that the AIs interact in a deeply brand-aligned way.” This will drive a virtuous cycle, as AI agent interactions will not seem like they have been handed off to an untrained intern but rather to someone who knows all and only what they are supposed to know.

For AI agents to realize their full potential, the experience of interacting with them must feel natural. Casual, spoken interaction will be significant, as will the ability of the agent to understand the context in which a question is being asked.

Hassaan Raza, CEO of Tavus, feels that a “human layer” will enable AI agents to realize their full potential as teachers. Agents need to be relatable and able to interact with students in a manner that shows not just subject-domain knowledge but empathy. A robust interface for these agents will include video, allowing students to look the AI in the eye.

In January, thousands of New Hampshire voters picked up their phones to hear what sounded like President Biden telling Democrats not to vote in the state’s primary, just days away.

“We know the value of voting Democratic when our votes count. It’s important you save your vote for the November election,” the voice on the line said.

But it wasn’t Biden. It was a deepfake created with artificial intelligence — and the manifestation of fears that 2024’s global wave of elections would be manipulated with fake pictures, audio and video, due to rapid advances in generative AI technology.

“The nightmare situation was the day before, the day of election, the day after election, some bombshell image, some bombshell video or audio would just set the world on fire,” said Hany Farid, a professor at the University of California at Berkeley who studies manipulated media.

The Biden deepfake turned out to be commissioned by a Democratic political consultant who said he did it to raise alarms about AI. He was fined $6 million by the FCC and indicted on criminal charges in New Hampshire.

But as 2024 rolled on, the feared wave of deceptive, targeted deepfakes didn’t really materialize.

A pro-tech advocacy group has released a new report warning of the growing threat posed by China’s artificial intelligence technology and its open-source approach that could threaten the national and economic security of the United States.

The report, published by American Edge Project, states that “China is rapidly advancing its own open-source ecosystem as an alternative to American technology and using it as a Trojan horse to implant its CCP values into global infrastructure.”

“Their progress is both significant and concerning: Chinese-developed open-source AI tools are already outperforming Western models on key benchmarks, while operating at dramatically lower costs, accelerating global adoption. Through its Belt and Road Initiative (BRI), which spans more than 155 countries on four continents, and its Digital Silk Road (DSR), China is exporting its technology worldwide, fostering increased global dependence, undermining democratic norms, and threatening U.S. leadership and global security.”

A Ukrainian national guard brigade just orchestrated an all-robot combined-arms operation, mixing crawling and flying drones for an assault on Russian positions in Kharkiv Oblast in northeastern Ukraine.

“We are talking about dozens of units of robotic and unmanned equipment simultaneously on a small section of the front,” a spokesperson for the 13th National Guard Brigade explained.

It was an impressive technological feat—and a worrying sign of weakness on the part of overstretched Ukrainian forces. Unmanned ground vehicles in particular suffer profound limitations, and still can’t fully replace human infantry.

That the 13th National Guard Brigade even needed to replace all of the human beings in a ground assault speaks to how few people the brigade has compared to the Russian units it’s fighting. The 13th National Guard Brigade defends a five-mile stretch of the front line around the town of Hlyboke, just south of the Ukraine-Russia border. It’s holding back a force of no fewer than four Russian regiments.

  • Continue Reading

    Noticias

    “Es una suerte y una lección de humildad” trabajar hacia la superinteligencia

    Published

    on

Sam Altman, CEO and co-founder of OpenAI, has shared candid reflections on the company's journey toward superintelligence.

With ChatGPT having recently celebrated its second anniversary, Altman describes OpenAI's achievements, its current challenges, and its vision for the future of AI.

"ChatGPT's second birthday was only a little over a month ago, and now we have transitioned into the next paradigm of models that can do complex reasoning," Altman reflects.

A bold mission to achieve AGI and superintelligence

OpenAI was founded in 2015 with a clear, if audacious, mission: to develop AGI and ensure it benefits all of humanity.

Altman and the founding team believed AGI could become "the most impactful technology in human history." At the time, he recalls, the world was not particularly interested in their pursuit.

"Back then, very few people cared, and if they did, it was mostly because they thought we had no chance of success," Altman explains.

Fast-forward to 2022, and OpenAI was still a relatively quiet research outfit testing what was then known as "Chat With GPT-3.5." Developers had been exploring the capabilities of its API, and the excitement sparked the idea of launching a user-ready demo.

That demo led to the creation of ChatGPT, which Altman acknowledges benefited from a "mercifully" better brand than its initial name. When it launched on 30 November 2022, ChatGPT proved to be a turning point.

"The launch of ChatGPT kicked off a growth curve like nothing we had ever seen, in our company, our industry, and the world at large," he says.

Since then, OpenAI has witnessed an evolution marked by astonishing interest, not only in its tools but in the broader possibilities of AI.

Building at breakneck speed

Altman admits that turning OpenAI into a global technology powerhouse posed significant challenges.

"Over the past two years, we had to build an entire company, almost from scratch, around this new technology," he notes, adding: "There is no way to train people for this except by doing it."

Operating in uncharted waters, the OpenAI team often faced ambiguity, making decisions on the fly and dealing with the inevitable missteps.

"Building a company at such high velocity with so little training is a messy process," Altman explains. "It's often two steps forward, one step back (and sometimes, one step forward and two steps back)."

Yet despite the chaos, Altman credits the team's resilience and adaptability.

OpenAI now counts more than 300 million weekly active users, a sharp rise from the 100 million reported just a year earlier. Much of that success lies in the organisation's ethos of learning by doing, combined with a commitment to bringing the world "technology that people genuinely seem to love and that solves real problems."

"A big failure of governance"

Of course, the journey so far has not been without turbulence. Altman recounts a particularly difficult chapter from November 2023, when he was abruptly removed as CEO, briefly hired by Microsoft, only to be reinstated at OpenAI days later amid industry backlash and staff protests.

Speaking openly, Altman highlights the need for better governance structures at organisations tackling critical technologies like AI.

"The whole event was, in my opinion, a big failure of governance by well-meaning people, myself included," he admits. "Looking back, I certainly wish I had done things differently, and I'd like to believe I'm a better, more thoughtful leader today than I was a year ago."

The episode served as a stark reminder of the complexity of managing rapid growth and the stakes involved in developing AI. It also pushed OpenAI to forge new governance structures "that enable us to pursue our mission of ensuring that AGI benefits all of humanity."

Altman expressed deep gratitude for the support OpenAI received during the crisis from employees, partners, and customers. "The biggest thing I learned is how much I have to be thankful for and how many people I owe gratitude towards," he emphasises.

Pivoting toward superintelligence

Looking ahead, Altman says OpenAI is beginning to aim beyond AGI toward the development of "superintelligence": AI systems that far surpass human cognitive capabilities.

"We are now confident we know how to build AGI as we have traditionally understood it," Altman shares. OpenAI predicts that by the end of this year, AI agents will meaningfully "join the workforce," reshaping industries with smarter automation and complementary systems.

Achieving superintelligence would be especially transformative for society, with the potential to accelerate scientific discovery, but it also poses the most significant dangers.

"We believe in the importance of being world leaders in safety and alignment research… OpenAI cannot be a normal company," he notes, underlining the need to approach innovation responsibly.

OpenAI's strategy includes gradually introducing advances into the world, allowing society to adapt to rapidly evolving AI. "Iteratively putting great tools in the hands of people leads to great, broadly distributed outcomes," Altman argues.

Reflecting on the organisation's trajectory, Altman admits that OpenAI's path has been defined by extraordinary breakthroughs as well as significant challenges, from scaling teams to navigating public scrutiny.

"Nine years ago, we really had no idea what we were eventually going to become; even now, we only sort of know," he states.

What remains clear is his unwavering commitment to OpenAI's vision. "Our vision won't change; our tactics will continue to evolve," Altman affirms, crediting the company's remarkable progress to the team's willingness to rethink processes and embrace challenges.

As AI continues to reshape industries and daily life, Altman's central message is plain: while the journey has been far from straightforward, OpenAI remains steadfast in its mission to unlock the benefits of AI for everyone.

"How lucky and humbling it is to be able to play a role in this work," Altman concludes.

See also: OpenAI funds a $1 million study on AI and morality at Duke University



    Noticias

As OpenAI eyes profits, an advocate seeks payback for the public

    Published

    on

A battle is brewing over the restructuring of OpenAI, maker of the pioneering artificial-intelligence chatbot ChatGPT. The company was founded as a nonprofit in 2015 with the goal of developing AI to benefit humanity, not investors. But advanced AI requires massive processing power, which gets expensive, a factor in the company's decision to bring on large investors. OpenAI recently unveiled a plan to transition into a for-profit public benefit corporation.

That plan has drawn objections from the likes of Elon Musk, Meta, and Robert Weissman, co-president of the consumer advocacy group Public Citizen, who has urged California authorities to ensure that, as OpenAI reorganizes, it pays back much of the benefit it received as a nonprofit.

The following is an edited transcript of Weissman's conversation with Marketplace's Meghan McCarty Carino.

Robert Weissman: Being a nonprofit enabled [OpenAI] to accept donations, and that was the model for the nonprofit, just as it is the model for other nonprofits. They took donated cash and in-kind donations of computing power to do their work of developing this new technology. They didn't have to pretend they were going to be able to return an investment to people. They asked people to give for charitable purposes, and they were able to raise a significant amount of money and resources through that approach.

Meghan McCarty Carino: So, my understanding is that in 2019, OpenAI transitioned to a different structure so it could take money from investors like Microsoft. But the business, a capped-profit business, has still ostensibly been governed by the nonprofit. Now it is looking to become a for-profit public benefit corporation. What does the law say about what is required to make this kind of pivot?

Weissman: This is a very, very unusual thing, and perhaps no entity has followed exactly the path OpenAI has taken. But there is a history of nonprofits converting into for-profits, which is not exactly what OpenAI is doing, but it is the core story of what OpenAI is doing. And to make that conversion, they will effectively have to get sign-off from the attorneys general of Delaware, where the operation is incorporated, and of California, where they are registered and where they do business.

In the history of these kinds of conversions, if you are going to take assets out of the nonprofit sector, you have to pay back to the nonprofit sector the value of what you are taking. If you run a charitable enterprise, you can't suddenly privatize it and make it your own. If you're the CEO of a charity, you can't simply hand it over to a corporation, and you can't simply convert your nonprofit into a for-profit corporation just because you managed to succeed and grow under the umbrella of nonprofit arrangements, tax-deductible support, and so on. You have to pay the money back to the charitable sector. The most important precedent for these kinds of conversions is the conversion of nonprofit Blue Cross health insurers into for-profit Blue Cross health care companies. This happened across the United States, and in every state where it took place, the for-profit entity had to return to the nonprofit sector an amount equal in value to what it was privatizing. Typically that money then went into a charitable health care foundation. Many large charitable health care foundations still exist, including in California, as a result of those conversions.

McCarty Carino: So what has OpenAI suggested about how it would make this transition, and what concerns you about its plan?

Weissman: Well, things have evolved quickly and very recently. They have now announced their intention to do this, although it had been rumored for some time. What OpenAI says it is going to do is spin off its for-profit affiliate. Right now, they have a nonprofit board that controls a for-profit affiliate, and the for-profit would pay the nonprofit the value of what it is taking, which the nonprofit would hold in the form of stock in the new, independent, for-profit OpenAI. So they propose making that payment to the nonprofit sector basically by paying themselves, which we think is not a good idea.

McCarty Carino: So you wrote a letter to California Attorney General Rob Bonta in September arguing that OpenAI should pay at least $30 billion and share any artificial general intelligence technology with, essentially, an independent charitable foundation. How did you arrive at that?

Weissman: Well, we first started communicating with the attorneys general of California and also Delaware after OpenAI's very strange, very highly publicized board shake-up in November 2023, because what we saw happen there was the nonprofit board trying to exert control over the for-profit affiliate, and losing. The for-profit forces in and around OpenAI basically overwhelmed the nonprofit board, pushed its members out, and replaced them with new people. At that point, it seemed to us that this entity, whatever it was, this combined entity, was no longer really functioning as a nonprofit. It had effectively become a for-profit company. And we started saying, well, if that's the case, then they have a duty to pay the nonprofit sector the value of what they are withdrawing, just as was the case with the Blue Cross conversions. Now, a little over a year later, OpenAI is saying: yes, that's right, we actually don't want to pretend to be a nonprofit anymore; we want to make that conversion. So before this happens, we said, if there is going to be a conversion, whether done formally or forced on them to acknowledge what has effectively already happened, what is the value they have to pay back?

They have now come up with a very strange structure that is far from transparent. It is not clear that OpenAI, the nonprofit, holds much stock or much of an equity stake in OpenAI, the for-profit. However, under the terms by which the whole operation is set up, it does have control over the for-profit OpenAI. So we said, look, at a minimum they are owed the control premium, which at the low end is 20% of the value of an acquired company in most transactions that take place in the stock market and in ordinary acquisitions. Well, 20% of $150 billion, which is OpenAI's current valuation, works out to a minimum of $30 billion. There are plenty of reasons to think the numbers should be higher, maybe much higher than that, but we think $30 billion is the floor for what has to be paid. And again, for us, it doesn't work if OpenAI simply pays itself, basically having the for-profit pay a nonprofit that is an affiliate rather than truly independent. It has to go back to the independent charitable sector, which would probably mean one or more independent foundations that could genuinely advance the public-interest development of AI, advance the safety and ethical concerns around artificial intelligence, and figure out how to provide people greater access to the new technologies that are emerging.
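(MRM – for readers who want the arithmetic behind Weissman's floor figure spelled out, here is a minimal sketch; the 20% low-end control premium and the roughly $150 billion valuation are the numbers quoted above, and the function name is purely illustrative.)

```python
def control_premium_floor(valuation_usd: float, premium_rate: float = 0.20) -> float:
    """Low-end estimate of what the for-profit would owe the charitable sector.

    Weissman's argument: the nonprofit's control over the for-profit is worth,
    at a minimum, a typical acquisition control premium (~20% at the low end)
    applied to the company's current valuation.
    """
    return valuation_usd * premium_rate

# Figures quoted in the interview: ~$150B valuation, 20% low-end premium.
floor = control_premium_floor(150e9)
print(f"${floor / 1e9:.0f} billion")  # -> $30 billion
```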

McCarty Carino: So how do you see this potential pivot affecting OpenAI's original vision of putting humanity before profits in the pursuit of artificial intelligence?

Weissman: Well, I think they've abandoned that. I think they abandoned it before this conversion, so from our standpoint, the genie already looks to be out of the bottle. They say they are converting, that they want to become a public benefit corporation, which would have the ability to weigh both profit and non-profit interests. But in fact, what we've seen from OpenAI over the past year, really since the release of the popular version of ChatGPT, is that it has been the least safety-minded and the most aggressive in rolling out new technologies of any of the AI companies. They introduced technology that Google more or less already had but was afraid to bring to market because of safety concerns and perhaps because of liability, and Google was forced to catch up quickly once OpenAI moved ahead. We are seeing that again with OpenAI introducing technologies with incredibly high-quality artificial voice capability, which makes it genuinely possible and likely that people will be fooled by human-sounding voices that will be very easily deployable across the internet. That, too, is a technology Google had looked at, had judged too risky, and was not going to bring to market. But once one competitor does it, the others quickly follow. So from our standpoint, OpenAI has left behind the idea of prioritizing safety and ethics and is really more interested in being the first mover. The old Silicon Valley motto, "Move fast and break things," seems to be what OpenAI has adopted, despite supposedly being a nonprofit and an operation that puts the interests of humanity above any profit consideration.

McCarty Carino: What role could this independent charitable foundation play in that landscape?

Weissman: Well, depending on its size, the foundation could support all kinds of research, advocacy, education, and access programs. It could support research and innovation designed to ensure safety. It could support smaller startups operating as nonprofits and committed to remaining that way. It could pay more attention to safety on the one hand and to access to new technologies on the other. It could support efforts to ensure that low-income people have access to new technologies as they become available. It could train new people to become programmers and developers. It could back advocacy to push back against the monopoly power of companies like OpenAI. There is an enormous amount it could do, and ideally, it would actually be more than a single foundation. At that scale of resources, I think it would be better spread across many. But whatever happens, it really could become a powerful actor to offset the unfortunate pattern we are seeing of developing these truly fascinating technologies long before considering safety, ethics, and access.

