The New News in AI: 12/30/24 Edition

The first all-robot attack in Ukraine, OpenAI’s o3 Model Reasons through Math & Science Problems, The Decline of Human Cognitive Skills, AI is NOT slowing down, AI can identify whiskey aromas, Agents are coming!, Agents in Higher Ed, and more.

Despite the break, there is still lots going on in AI this week, so…

OpenAI on Friday unveiled a new artificial intelligence system, OpenAI o3, which is designed to “reason” through problems involving math, science and computer programming.

The company said that the system, which it is currently sharing only with safety and security testers, outperformed the industry’s leading A.I. technologies on standardized benchmark tests that rate skills in math, science, coding and logic.

The new system is the successor to o1, the reasoning system that the company introduced earlier this year. OpenAI o3 was over 20 percent more accurate than o1 on a series of common programming tasks, the company said, and it even outperformed its chief scientist, Jakub Pachocki, on a competitive programming test. OpenAI said it plans to roll the technology out to individuals and businesses early next year.

“This model is incredible at programming,” said Sam Altman, OpenAI’s chief executive, during an online presentation to reveal the new system. He added that at least one OpenAI programmer could still beat the system on this test.

The new technology is part of a wider effort to build A.I. systems that can reason through complex tasks. Earlier this week, Google unveiled similar technology, called Gemini 2.0 Flash Thinking Experimental, and shared it with a small number of testers.

These two companies and others aim to build systems that can carefully and logically solve a problem through a series of steps, each one building on the last. These technologies could be useful to computer programmers who use A.I. systems to write code or to students seeking help from automated tutors in areas like math and science.

(MRM – beyond these approaches, there is another: training LLMs on texts about morality and exploring how that works)

OpenAI revealed an intriguing and promising AI alignment technique they called deliberative alignment. Let’s talk about it.

I recently discussed in my column that if we enmesh a sense of purpose into AI, perhaps that might be a path toward AI alignment, see the link here. If AI has an internally defined purpose, the hope is that the AI would computationally abide by that purpose. This might include, for instance, that the AI is not supposed to let people undertake illegal acts via AI. And so on.

Another popular approach consists of giving AI a kind of esteemed set of do’s and don’ts as part of what is known as constitutional AI, see my coverage at the link here. Just as humans tend to abide by a written set of principles, maybe we can get AI to conform to a set of rules devised explicitly for AI systems.

A lesser-known technique involves a twist that might seem odd at first glance. The technique I am alluding to is the AI alignment tax approach. It goes like this: society establishes a tax under which an AI that does the right thing is taxed lightly, but when the AI does bad things, the tax goes through the roof. What do you think of this outside-the-box idea? For more on this unusual approach, see my analysis at the link here.

The deliberative alignment technique involves getting generative AI suitably data-trained up front on what is good to go and what ought to be prevented. The aim is to instill in the AI a capability that is fully immersed in the everyday processing of prompts. Thus, whereas some techniques stipulate adding a function or feature that runs heavily at run-time, the concept here is to make the alignment a natural or seamless element within the generative AI. Other AI alignment techniques try to do the same, so this conception is not the novel part (we’ll get there).

Return to the four steps that I mentioned:

  • Step 1: Provide safety specs and instructions to the budding LLM

  • Step 2: Make experimental use of the budding LLM and collect safety-related instances

  • Step 3: Select and score the safety-related instances using a judge LLM

  • Step 4: Train the overarching budding LLM based on the best of the best

In the first step, we provide a budding generative AI with safety specs and instructions. The budding AI churns through that and hopefully computationally garners what it is supposed to do to flag down potential safety violations by users.

In the second step, we use the budding generative AI and get it to work on numerous examples, perhaps thousands upon thousands or even millions (I only showed three examples). We collect the instances, including the respective prompts, the Chain of Thoughts, the responses, and the safety violation categories if pertinent.

In the third step, we feed those examples into a specialized judge generative AI that scores how well the budding AI did on the safety violation detections. This lets us separate the wheat from the chaff. Like the sports tale, rather than looking at all the sports players’ goofs, we only sought to focus on the egregious ones.

In the fourth step, the budding generative AI is further data trained by being fed the instances that we’ve culled, and the AI is instructed to closely examine the chain-of-thoughts. The aim is to pattern-match what those well-spotting instances did that made them stand above the rest. There are bound to be aspects within the CoTs that were on-the-mark (such as the action of examining the wording of the prompts).
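The four steps above can be sketched as a simple collect-score-filter loop. This is a hypothetical illustration, not OpenAI's published implementation: the model calls are plain callables standing in for the budding and judge LLMs, and the 0.9 score threshold is an invented parameter.

```python
# Hypothetical sketch of one deliberative-alignment data round.
# Step 1 (conditioning the budding model on the safety spec) is assumed to
# have happened inside `generate`; OpenAI's actual training stack is not public.

def deliberative_alignment_round(generate, judge, prompts, score_threshold=0.9):
    """One data-collection and filtering round.

    generate(prompt) -> (chain_of_thought, response)   # Step 2: budding LLM
    judge(prompt, cot, response) -> float in [0, 1]    # Step 3: judge LLM
    Returns the instances that would be fed back for fine-tuning (Step 4).
    """
    # Step 2: run the budding model on many prompts, keeping full transcripts.
    instances = []
    for prompt in prompts:
        cot, response = generate(prompt)
        instances.append({"prompt": prompt, "cot": cot, "response": response})

    # Step 3: the judge model scores each transcript's safety handling.
    for inst in instances:
        inst["score"] = judge(inst["prompt"], inst["cot"], inst["response"])

    # Step 4: keep only the best of the best for further training.
    return [i for i in instances if i["score"] >= score_threshold]
```

In practice the returned instances would feed a fine-tuning job, so the chains of thought that spotted safety issues well become patterns the model internalizes.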

  • Generative AI technology has become Meta’s top priority, directly impacting the company’s business and potentially paving the road to future revenue opportunities.

  • Meta’s all-encompassing approach to AI has led analysts to predict more success in 2025.

  • Meta in April said it would raise its spending levels this year by as much as $10 billion to support infrastructure investments for its AI strategy. Meta’s stock price hit a record on Dec. 11.

MRM – ChatGPT Summary:

OpenAI Dominates

  • OpenAI maintained dominance in AI despite leadership changes and controversies.

  • Released GPT-4o, capable of human-like audio chats, sparking debates over realism and ethics.

  • High-profile departures, including chief scientist Ilya Sutskever, raised safety concerns.

  • OpenAI focuses on advancing toward Artificial General Intelligence (AGI), despite debates about safety and profit motives.

  • Expected to release more models in 2025, amidst ongoing legal, safety, and leadership scrutiny.

Siri and Alexa Play Catch-Up

  • Amazon’s Alexa struggled to modernize and remains largely unchanged.

  • Apple integrated AI into its ecosystem, prioritizing privacy and user safety.

  • Apple plans to reduce reliance on ChatGPT as it develops proprietary AI capabilities.

AI and Job Disruption

  • New “agent” AIs capable of independent tasks heightened fears of job displacement.

  • Studies suggested 40% of jobs could be influenced by AI, with finance roles particularly vulnerable.

  • Opinions remain divided: AI as a tool to enhance efficiency versus a threat to job security.

AI Controversies

  • Misinformation: Audio deepfakes and AI-driven fraud demonstrated AI’s potential for harm.

  • Misbehavior: Incidents like Microsoft’s Copilot threatening users highlighted AI safety issues.

  • Intellectual property concerns: Widespread use of human-generated content for training AIs fueled disputes.

  • Creative industries and workers fear AI competition and job displacement.

Global Regulation Efforts

  • The EU led with strong AI regulations focused on ethics, transparency, and risk mitigation.

  • In the U.S., public demand for AI regulation clashed with skepticism over its effectiveness.

  • Trump’s appointment of David Sacks as AI and crypto czar raised questions about regulatory approaches.

The Future of AI

  • AI development may shift toward adaptive intelligence and “reasoning” models for complex problem-solving.

  • Major players like OpenAI, Google, Microsoft, and Apple expected to dominate, but startups might bring disruptive innovation.

  • Concerns about AI safety and ethical considerations will persist as the technology evolves.

If 2024 was the year of artificial intelligence chatbots becoming more useful, 2025 will be the year AI agents begin to take over. You can think of agents as super-powered AI bots that can take actions on your behalf, such as pulling data from incoming emails and importing it into different apps.

You’ve probably heard rumblings of agents already. Companies ranging from Nvidia (NVDA) and Google (GOOG, GOOGL) to Microsoft (MSFT) and Salesforce (CRM) are increasingly talking up agentic AI, a fancy way of referring to AI agents, claiming that it will change the way both enterprises and consumers think of AI technologies.

The goal is to cut down on often bothersome, time-consuming tasks like filing expense reports — the bane of my professional existence. Not only will we see more AI agents, we’ll see more major tech companies developing them.
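As a deliberately tiny illustration of the email example above, here is a sketch that pulls an expense line out of a receipt-style email so another app could file it. Real agents would use an LLM for extraction and real connectors for the apps; the regex and field names here are my own simplification.

```python
import re

def extract_expense(email_body):
    """Return an expense-report row from a receipt-style email, or None.

    Looks for patterns like "$42.50 for Airport Taxi." — a toy stand-in
    for the LLM-based extraction a real agent would perform.
    """
    match = re.search(r"\$(\d+(?:\.\d{2})?)\s+(?:for|at)\s+(.+?)(?:\.|$)",
                      email_body, flags=re.MULTILINE)
    if not match:
        return None
    return {"amount": float(match.group(1)), "merchant": match.group(2).strip()}
```

A pipeline like this, chained to an expense-app API, is the shape of the "pull data from incoming emails and import it into different apps" workflow described above.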

Companies using them say they’re seeing changes based on their own internal metrics. According to Charles Lamanna, corporate vice president of business and industry Copilot at Microsoft, the Windows maker has already seen improvements in both responsiveness to IT issues and sales outcomes.

According to Lamanna, Microsoft employee IT self-help success increased by 36%, while revenue per seller has increased by 9.4%. The company has also experienced improved HR case resolution times.

A new artificial intelligence (AI) model has just achieved human-level results on a test designed to measure “general intelligence”.

On December 20, OpenAI’s o3 system scored 85% on the ARC-AGI benchmark, well above the previous AI best score of 55% and on par with the average human score. It also scored well on a very difficult mathematics test.

Creating artificial general intelligence, or AGI, is the stated goal of all the major AI research labs. At first glance, OpenAI appears to have at least made a significant step towards this goal.

While scepticism remains, many AI researchers and developers feel something just changed. For many, the prospect of AGI now seems more real, urgent and closer than anticipated. Are they right?

To understand what the o3 result means, you need to understand what the ARC-AGI test is all about. In technical terms, it’s a test of an AI system’s “sample efficiency” in adapting to something new – how many examples of a novel situation the system needs to see to figure out how it works.

An AI system like ChatGPT (GPT-4) is not very sample efficient. It was “trained” on millions of examples of human text, constructing probabilistic “rules” about which combinations of words are most likely. The result is pretty good at common tasks. It is bad at uncommon tasks, because it has less data (fewer samples) about those tasks.

We don’t know exactly how OpenAI has done it, but the results suggest the o3 model is highly adaptable. From just a few examples, it finds rules that can be generalised.
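Sample efficiency can be made concrete with a toy version of the idea: from just a few input→output examples, search a small hypothesis space for the one rule consistent with all of them, then apply it to new inputs. ARC-AGI tasks use colored grids, not integers, and o3's internals are unknown; this simplification is mine.

```python
# Toy "sample efficiency" demo: induce a rule from a handful of examples.

def induce_rule(examples):
    """examples: list of (input, output) ints. Returns (name, k, rule) or None."""
    candidates = [
        ("add", lambda x, k: x + k),   # hypothesis family: x -> x + k
        ("mul", lambda x, k: x * k),   # hypothesis family: x -> x * k
    ]
    for name, fn in candidates:
        for k in range(-10, 11):
            # Keep the first (name, k) that explains every example.
            if all(fn(x, k) == y for x, y in examples):
                return name, k, (lambda x, fn=fn, k=k: fn(x, k))
    return None  # no rule in the hypothesis space fits all examples
```

Three examples suffice to pin down "multiply by 3" here. A sample-efficient system needs only a few demonstrations of a novel rule, whereas a model like GPT-4 leans on statistical regularities from millions of prior examples.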

Researchers in Germany have developed algorithms to differentiate between Scotch and American whiskey. The machines can also discern the aromas in a glass of whiskey better than human testers.

CHANG: They describe how this works in the journal Communications Chemistry. First, they analyzed the molecular composition of 16 scotch and American whiskeys. Then sensory experts told them what each whiskey smelled like – you know, vanilla or peach or woody. The AI then uses those descriptions and a bunch of math to predict which smells correspond to which molecules.

SUMMERS: OK. So you could just feed it a list of molecules, and it could tell you what the nose on that whiskey will be.

CHANG: Exactly. The model was able to distinguish American whiskey from scotch.
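The molecules-to-aromas idea can be sketched as a lookup from a molecular feature vector to the aroma notes of the closest known profile. The feature values and profiles below are invented for illustration; the actual study trained machine-learning models on the measured molecular composition of 16 American and Scotch whiskeys.

```python
import math

# Invented training data: (molecular feature vector, aroma descriptors).
TRAINING = [
    ([0.9, 0.1, 0.2], {"vanilla", "caramel"}),  # bourbon-like profile
    ([0.2, 0.8, 0.3], {"smoky", "woody"}),      # scotch-like profile
    ([0.7, 0.2, 0.8], {"vanilla", "peach"}),
]

def predict_aromas(features):
    """Nearest-neighbour lookup from molecule features to smell descriptors."""
    _, aromas = min(TRAINING, key=lambda t: math.dist(t[0], features))
    return aromas
```

Feed it a list of molecular features and it returns the predicted "nose" — the same input/output shape the transcript describes, with the statistics radically simplified.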


After 25.3 million fully autonomous miles a new study from Waymo and Swiss Re concludes:

[T]he Waymo ADS significantly outperformed both the overall driving population (88% reduction in property damage claims, 92% in bodily injury claims), and outperformed the more stringent latest-generation HDV benchmark (86% reduction in property damage claims and 90% in bodily injury claims). This substantial safety improvement over our previous 3.8-million-mile study not only validates ADS safety at scale but also provides a new approach for ongoing ADS evaluation.

As you may also have heard, o3 is solving 25% of FrontierMath challenges – these problems are not in the training set and are challenging even for Fields Medal winners.

Thus, we are rapidly approaching superhuman driving and superhuman mathematics.

Stop looking to the sky for aliens; they are already here.

OpenAI’s new artificial-intelligence project is behind schedule and running up huge bills. It isn’t clear when—or if—it’ll work. There may not be enough data in the world to make it smart enough.

The project, officially called GPT-5 and code-named Orion, has been in the works for more than 18 months and is intended to be a major advancement in the technology that powers ChatGPT. OpenAI’s closest partner and largest investor, Microsoft, had expected to see the new model around mid-2024, say people with knowledge of the matter.

OpenAI has conducted at least two large training runs, each of which entails months of crunching huge amounts of data, with the goal of making Orion smarter. Each time, new problems arose and the software fell short of the results researchers were hoping for, people close to the project say.

At best, they say, Orion performs better than OpenAI’s current offerings, but hasn’t advanced enough to justify the enormous cost of keeping the new model running. A six-month training run can cost around half a billion dollars in computing costs alone, based on public and private estimates of various aspects of the training.

OpenAI and its brash chief executive, Sam Altman, sent shock waves through Silicon Valley with ChatGPT’s launch two years ago. AI promised to continually exhibit dramatic improvements and permeate nearly all aspects of our lives. Tech giants could spend $1 trillion on AI projects in the coming years, analysts predict.

GPT-5 is supposed to unlock new scientific discoveries as well as accomplish routine human tasks like booking appointments or flights. Researchers hope it will make fewer mistakes than today’s AI, or at least acknowledge doubt—something of a challenge for the current models, which can produce errors with apparent confidence, known as hallucinations.

AI chatbots run on underlying technology known as a large language model, or LLM. Consumers, businesses and governments already rely on them for everything from writing computer code to spiffing up marketing copy and planning parties. OpenAI’s is called GPT-4, the fourth LLM the company has developed since its 2015 founding.

While GPT-4 acted like a smart high-schooler, the eventual GPT-5 would effectively have a Ph.D. in some tasks, a former OpenAI executive said. Earlier this year, Altman told students in a talk at Stanford University that OpenAI could say with “a high degree of scientific certainty” that GPT-5 would be much smarter than the current model.

Microsoft Corporation (NASDAQ:MSFT) is reportedly planning to reduce its dependence on ChatGPT-maker OpenAI.

What Happened: Microsoft has been working on integrating internal and third-party artificial intelligence models into its AI product, Microsoft 365 Copilot, reported Reuters, citing sources familiar with the effort.

This move is a strategic step to diversify from the current underlying technology of OpenAI and reduce costs.

The Satya Nadella-led company is also decreasing 365 Copilot’s dependence on OpenAI due to concerns about cost and speed for enterprise users, the report noted, citing the sources.

A Microsoft spokesperson was quoted in the report saying that OpenAI continues to be the company’s partner on frontier models. “We incorporate various models from OpenAI and Microsoft depending on the product and experience.”

Big Tech is spending at a rate that’s never been seen, sparking boom times for companies scrambling to facilitate the AI build-out.

Why it matters: AI is changing the economy, but not in the way most people assume.

  • AI needs facilities and machines and power, and all of that has, in turn, fueled its own new spending involving real estate, building materials, semiconductors and energy.

  • Energy providers have seen a huge boost in particular, because data centers require as much power as a small city.

  • “Some of the greatest shifts in history are happening in certain industries,” Stephan Feldgoise, co-head of M&A for Goldman Sachs, tells Axios. “You have this whole convergence of tech, semiconductors, data centers, hyperscalers and power producers.”

Zoom out: Companies that are seeking fast growth into a nascent market typically spend on acquisitions.

  • Tech companies are competing for high-paid staff and spending freely on research.

  • But the key growth ingredient in the AI arms race so far is capital expenditure, or “capex.”

Capital expenditure is an old school accounting term for what a company spends on physical assets such as factories and equipment.

  • In the AI era, capex has come to signify what a company spends on data centers and the components they require.

  • The biggest tech players have increased their capex by tens of billions of dollars this year, and they show no signs of pulling back in 2025.

MRM – I think “Design for AI” and “Minimize Human Touchpoints” are especially key. Re #7, this is also true. Lots of things done in hour-long meetings can be superseded by AI doing a first draft.

Organizations must use AI’s speed and provide context efficiently to unlock productivity gains. There also needs to be a framework that can maintain quality even at higher speeds. Several strategies jump out:

  1. Massively increase the use of wikis and other written content.

Human organizations rarely codify their entire structure because the upfront cost and coordination are substantial. The ongoing effort to access and maintain such documentation is also significant. Asking co-workers questions or developing working relationships is usually more efficient and flexible.

Asking humans or developing relationships nullifies AI’s strength (speed) and exposes its greatest weakness (human context). Having the information in written form eliminates these issues. The cost of creating and maintaining these resources should fall with the help of AI.

I’ve written about how organizations already codify themselves as they automate with traditional software. Creating wikis and other written resources is essentially programming in natural language, which is more accessible and compact.

  2. Move from reviews to standardized pre-approvals and surveillance.

Human organizations often prefer reviews as a checkpoint because creating a list of requirements is time-consuming, and such lists are commonly wrong. A simple review and release catches obvious problems and limits overhead and upfront investment. Reviews of this style are still relevant for many AI tasks where a human prompts the agent and then reviews the output.

AI could increase velocity for more complex and cross-functional projects by moving away from reviews. Waiting for human review from various teams is slow. Alternatively, AI agents can generate a list of requirements and unit tests for their specialty in a few minutes, considering more organizational context (now written) than humans can. Work that meets the pre-approval standards can continue, and then surveillance paired with graduated rollouts can detect whether there is an unusual number of errors.

Human organizations face a tradeoff between “waterfall” and “agile”; AI organizations can do both at once with minimal penalty, increasing iteration speed.
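The pre-approval idea can be sketched as a gate of machine-readable checks that work must pass before anything waits on a human; only failures escalate. The requirement names and work-item fields below are invented for illustration.

```python
# Hypothetical pre-approval gate for AI-agent work products.

def pre_approve(work_item, requirements):
    """requirements: list of (name, predicate). Returns (approved, failures)."""
    failures = [name for name, check in requirements if not check(work_item)]
    return (len(failures) == 0), failures

# Example standards an organization might codify up front (invented names).
REQUIREMENTS = [
    ("has_tests", lambda w: w.get("test_count", 0) > 0),
    ("under_size_limit", lambda w: w.get("lines_changed", 0) <= 500),
    ("docs_updated", lambda w: w.get("docs_updated", False)),
]
```

Work that passes every check proceeds straight to a graduated rollout under surveillance; anything that fails queues for human review, so humans only touch the exceptions.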

  3. Use “Stop Work Authority” methods to ensure quality.

One of the most important components of the Toyota Production System is that every employee has “stop work authority.” Any employee can, and is encouraged to, stop the line if they see an error or confusion. New processes might have many stops as employees work out the kinks, but things quickly smooth out. It is a very efficient bug-hunting method.

AI agents should have stop work authority. They can be effective in catching errors because they work in probabilities. Work stops when they cross a threshold of uncertainty. Waymo already does this with AI-driven taxis. The cars stop and consult human operators when confused.

An obvious need is a human operations team that can respond to these stoppages in seconds or minutes.

Issues are recorded and can be fixed permanently by adding to written context resources, retraining, altering procedures, or cleaning inputs.
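A stop-work-authority loop for agents can be sketched in a few lines: work halts and escalates to human operators whenever the agent's own confidence drops below a threshold, mirroring the Waymo stop-and-consult behavior described above. The 0.8 cutoff and the task/step shapes are assumptions for illustration.

```python
# Minimal "stop work authority" sketch for an AI agent pipeline.

def run_with_stop_authority(tasks, agent_step, confidence_threshold=0.8):
    """agent_step(task) -> (result, confidence in [0, 1])."""
    completed, stoppages = [], []
    for task in tasks:
        result, confidence = agent_step(task)
        if confidence < confidence_threshold:
            stoppages.append(task)  # stop the line: route to human operations
        else:
            completed.append(result)
    return completed, stoppages
```

Each stoppage becomes a recorded issue that the human operations team resolves in minutes, then fixes permanently by updating written context, retraining, or cleaning inputs.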

  4. Design for AI.

A concept called “Design for Manufacturing” is popular with manufacturing nerds and many leading companies. The idea is that some actions are much cheaper and defect-free than others. For instance, an injection molded plastic part with a shape that only allows installation one way will be a fraction of the cost of a CNC-cut metal part with an ambiguous installation orientation. The smart thing to do is design a product to use the plastic part instead of a metal one.

The same will be true of AI agents. Designing processes for their strengths will have immense value, especially in production, where errors are costly.

  5. Cast a Wider Design Net.

The concept of “Design for AI” also applies at higher levels. Employees with the creativity for clever architectural designs are scarce resources. AI agents can help by providing analysis of many rabbit holes and iterations, helping less creative employees or supercharging the best.

The design phase has the most impact on downstream cost and productivity of any phase.

  6. Minimize human touch points.

Human interaction significantly slows down any process and kills one of the primary AI advantages.

Written context is the first step in eliminating human touch points. Human workers can supervise the creation of the wikis instead of completing low-level work.

Pre-approvals are the next, so AI agents are not waiting for human sign-off.

AI decision probability thresholds, graduated rollouts, and unit tests can reduce the need for human inspection of work output.

  7. Eliminate meeting culture.

Meetings help human organizations coordinate tasks and exchange context. Humans will continue to have meetings even in AI organizations.

The vast majority of lower-level meetings need to be cut. They lose their advantages once work completion times are compressed and context more widely available.

Meeting content moves from day-to-day operations to much higher-level questions about strategy and coordination. Humans might spend even more time in meetings if the organizational cadence increases so that strategies have to constantly adjust!

Once an icon of the 20th century that came to be seen as obsolete in the 21st, Encyclopaedia Britannica (now known simply as Britannica) is all in on artificial intelligence, and may soon go public at a valuation of nearly $1 billion, according to the New York Times.

Until 2012 when printing ended, the company’s books served as the oldest continuously published, English-language encyclopedias in the world, essentially collecting all the world’s knowledge in one place before Google or Wikipedia were a thing. That has helped Britannica pivot into the AI age, where models benefit from access to high-quality, vetted information. More general-purpose models like ChatGPT suffer from hallucinations because they have hoovered up the entire internet, including all the junk and misinformation.

While it still offers an online edition of its encyclopedia, as well as the Merriam-Webster dictionary, Britannica’s biggest business today is selling online education software to schools and libraries, the software it hopes to supercharge with AI. That could mean using AI to customize learning plans for individual students. The idea is that students will enjoy learning more when software can help them understand the gaps in their understanding of a topic and stay on it longer. Another education tech company, Brainly, recently announced that answers from its chatbot will link to the exact learning materials (i.e. textbooks) they reference.

Britannica’s CEO Jorge Cauz also told the Times about the company’s Britannica AI chatbot, which allows users to ask questions about its vast database of encyclopedic knowledge that it collected over two centuries from vetted academics and editors. The company similarly offers chatbot software for customer service use cases.

Britannica told the Times it is expecting revenue to double from two years ago, to $100 million.

A company in the space of selling educational books that has seen its fortunes go the opposite direction is Chegg. The company has seen its stock price plummet almost in lock-step with the rise of OpenAI’s ChatGPT, as students canceled their subscriptions to its online knowledge platform.

A.I. hallucinations are reinvigorating the creative side of science. They speed the process by which scientists and inventors dream up new ideas and test them to see if reality concurs. It’s the scientific method — only supercharged. What once took years can now be done in days, hours and minutes. In some cases, the accelerated cycles of inquiry help scientists open new frontiers.

“We’re exploring,” said James J. Collins, an M.I.T. professor who recently praised hallucinations for speeding his research into novel antibiotics. “We’re asking the models to come up with completely new molecules.”

The A.I. hallucinations arise when scientists teach generative computer models about a particular subject and then let the machines rework that information. The results can range from subtle and wrongheaded to surreal. At times, they lead to major discoveries.

In October, David Baker of the University of Washington shared the Nobel Prize in Chemistry for his pioneering research on proteins — the knotty molecules that empower life. The Nobel committee praised him for discovering how to rapidly build completely new kinds of proteins not found in nature, calling his feat “almost impossible.”

In an interview before the prize announcement, Dr. Baker cited bursts of A.I. imaginings as central to “making proteins from scratch.” The new technology, he added, has helped his lab obtain roughly 100 patents, many for medical care. One is for a new way to treat cancer. Another seeks to aid the global war on viral infections. Dr. Baker has also founded or helped start more than 20 biotech companies.

Despite the allure of A.I. hallucinations for discovery, some scientists find the word itself misleading. They see the imaginings of generative A.I. models not as illusory but prospective — as having some chance of coming true, not unlike the conjectures made in the early stages of the scientific method. They see the term hallucination as inaccurate, and thus avoid using it.

The word also gets frowned on because it can evoke the bad old days of hallucinations from LSD and other psychedelic drugs, which scared off reputable scientists for decades. A final downside is that scientific and medical communications generated by A.I. can, like chatbot replies, get clouded by false information.

The rise of artificial intelligence (AI) has brought about numerous innovations that have revolutionized industries, from healthcare and education to finance and entertainment. However, alongside the seemingly limitless capabilities of ChatGPT and friends, we find a less-discussed consequence: the gradual decline of human cognitive skills. Unlike earlier tools such as calculators and spreadsheets, which made specific tasks easier without fundamentally altering our ability to think, AI is reshaping the way we process information and make decisions, often diminishing our reliance on our own cognitive abilities.

Tools like calculators and spreadsheets were designed to assist in specific tasks—such as arithmetic and data analysis—without fundamentally altering the way our brains process information. In fact, these tools still require us to understand the basics of the tasks at hand. For example, you need to understand what the formula does, and what output you are seeking, before you type it into Excel. While these tools simplified calculations, they did not erode our ability to think critically or engage in problem-solving – the tools simply made life easier. AI, on the other hand, is more complex in terms of its offerings – and cognitive impact. As AI becomes more prevalent, effectively “thinking” for us, scientists and business leaders are concerned about the larger effects on our cognitive skills.

The effects of AI on cognitive development are already being identified in schools across the United States. In a report titled, “Generative AI Can Harm Learning”, researchers at the University of Pennsylvania found that students who relied on AI for practice problems performed worse on tests compared to students who completed assignments without AI assistance. This suggests that the use of AI in academic settings is not just an issue of convenience, but may be contributing to a decline in critical thinking skills.

Furthermore, educational experts argue that AI’s increasing role in learning environments risks undermining the development of problem-solving abilities. Students are increasingly being taught to accept AI-generated answers without fully understanding the underlying processes or concepts. As AI becomes more ingrained in education, there is a concern that future generations may lack the capacity to engage in deeper intellectual exercises, relying on algorithms instead of their own analytical skills.

Using AI as a tool to augment human abilities, rather than replace them, is the solution. Enabling that solution is a function of collaboration, communication and connection – three things that capitalize on human cognitive abilities.

For leaders and aspiring leaders, we have to create cultures and opportunities for higher-level thinking skills. The key to working more effectively with AI is in first understanding how to work independently of AI, according to the National Institutes of Health. Researchers at Stanford point to the importance of explanations: where AI shares not just outputs, but insights. Insights into how the ultimate conclusion was reached, described in simple terms that invite further inquiry (and independent thinking).

Whether through collaborative learning, complex problem-solving, or creative thinking exercises, the goal should be to create spaces where human intelligence remains at the center. Does that responsibility fall on learning and development (L&D), or HR, or marketing, sales, engineering… or the executive team? The answer is: yes. A dedication to the human operating system remains vital for even the most technologically-advanced organizations. AI should serve as a complement to, rather than a substitute for, human cognitive skills.

The role of agents will not just be the role of the teacher. Bill Salak observes that “AI agents will take on many responsibilities traditionally handled by human employees, from administrative tasks to more complex, analytical roles. This transition will result in a large-scale redefinition of how humans contribute” to the educational experience. Humans must focus on unique skills—creativity, strategic thinking, emotional intelligence, and adaptability. Roles will increasingly revolve around supervising, collaborating with, or augmenting the capabilities of AI agents.

Jay Patel, SVP & GM of Webex Customer Experience Solutions at Cisco, agrees that AI Agents will be everywhere. They will not just change the classroom experience for students and teachers but profoundly impact all domains. He notes that these AI models, including small language models, are “sophisticated enough to operate on individual devices, enabling users to have highly personalized virtual assistants.” These agents will be more efficient, attuned to individual needs, and, therefore, seemingly more intelligent.

Jay Patel predicts that “the adopted agents will embody the organization’s unique values, personalities, and purpose. This will ensure that the AIs interact in a deeply brand-aligned way.” This will drive a virtuous cycle, as AI agent interactions will not seem like they have been handed off to an untrained intern but rather to someone who knows all and only what they are supposed to know.

For AI agents to realize their full potential, the experience of interacting with them must feel natural. Casual, spoken interaction will be significant, as will the ability of the agent to understand the context in which a question is being asked.

Hassaan Raza, CEO of Tavus, feels that a “human layer” will enable AI agents to realize their full potential as teachers. Agents need to be relatable and able to interact with students in a manner that shows not just subject-domain knowledge but empathy. A robust interface for these agents will include video, allowing students to look the AI in the eye.

In January, thousands of New Hampshire voters picked up their phones to hear what sounded like President Biden telling Democrats not to vote in the state’s primary, just days away.

“We know the value of voting Democratic when our votes count. It’s important you save your vote for the November election,” the voice on the line said.

But it wasn’t Biden. It was a deepfake created with artificial intelligence — and the manifestation of fears that 2024’s global wave of elections would be manipulated with fake pictures, audio and video, due to rapid advances in generative AI technology.

“The nightmare situation was the day before, the day of election, the day after election, some bombshell image, some bombshell video or audio would just set the world on fire,” said Hany Farid, a professor at the University of California at Berkeley who studies manipulated media.

The Biden deepfake turned out to be commissioned by a Democratic political consultant who said he did it to raise alarms about AI. He was fined $6 million by the FCC and indicted on criminal charges in New Hampshire.

But as 2024 rolled on, the feared wave of deceptive, targeted deepfakes didn’t really materialize.

A pro-tech advocacy group has released a new report warning of the growing threat posed by China’s artificial intelligence technology, arguing that its open-source approach could undermine the national and economic security of the United States.

The report, published by American Edge Project, states that “China is rapidly advancing its own open-source ecosystem as an alternative to American technology and using it as a Trojan horse to implant its CCP values into global infrastructure.”

“Their progress is both significant and concerning: Chinese-developed open-source AI tools are already outperforming Western models on key benchmarks, while operating at dramatically lower costs, accelerating global adoption. Through its Belt and Road Initiative (BRI), which spans more than 155 countries on four continents, and its Digital Silk Road (DSR), China is exporting its technology worldwide, fostering increased global dependence, undermining democratic norms, and threatening U.S. leadership and global security.”

A Ukrainian national guard brigade just orchestrated an all-robot combined-arms operation, mixing crawling and flying drones for an assault on Russian positions in Kharkiv Oblast in northeastern Ukraine.

“We are talking about dozens of units of robotic and unmanned equipment simultaneously on a small section of the front,” a spokesperson for the 13th National Guard Brigade explained.

It was an impressive technological feat—and a worrying sign of weakness on the part of overstretched Ukrainian forces. Unmanned ground vehicles in particular suffer profound limitations, and still can’t fully replace human infantry.

That the 13th National Guard Brigade even needed to replace all of the human beings in a ground assault speaks to how few people the brigade has compared to the Russian units it’s fighting. The 13th National Guard Brigade defends a five-mile stretch of the front line around the town of Hlyboke, just south of the Ukraine-Russia border. It’s holding back a force of no fewer than four Russian regiments.


The AI Power Play: How ChatGPT, Gemini, Claude, and Others Are Shaping the Future of Artificial Intelligence


In 2025, companies such as OpenAI, Google, Anthropic, and emerging challengers like DeepSeek have pushed the boundaries of what large language models (LLMs) can do. Moreover, corporate solutions from Microsoft and Meta are making AI tools more accessible to enterprises and developers alike. This article explores the latest AI models available to the public, their advantages and drawbacks, and how they compare in the competitive AI landscape.

The Power and Performance of AI Models

AI models rely on extensive computational resources, particularly large language models (LLMs) that require vast datasets and processing power. The leading AI models undergo complex training procedures that involve billions of parameters, consuming significant energy and infrastructure.

Key AI players invest in cutting-edge hardware and optimization strategies to improve efficiency while maintaining high performance. The balance between computational power, speed, and affordability is a significant factor in differentiating these AI models.

The Competitive Landscape: Top AI Models

OpenAI’s ChatGPT

ChatGPT, developed by OpenAI, is one of the most recognizable and widely used AI models in the world. Built with a dialogue-driven format, ChatGPT is designed to answer follow-up questions, challenge incorrect premises, admit mistakes, and reject inappropriate requests. Its versatility has made it a leading AI tool for both casual and professional use, spanning industries such as customer service, content creation, programming, and research.

ChatGPT is ideal for a wide range of users, including writers, business professionals, educators, developers, and researchers. Its free-tier accessibility makes it an excellent starting point for casual users, while businesses, content creators, and developers can leverage its advanced models for enhanced productivity and automation.

It is also among the most user-friendly AI models available, featuring a clean interface, intuitive responses, and seamless interaction across devices. However, organizations that require custom AI models or stricter data privacy controls may find its closed-source nature restrictive, particularly compared to open-source alternatives like Meta’s LLaMA.

The latest version, GPT-4o, is available for free-tier users and offers a strong balance of speed, reasoning, and text generation capabilities. For users seeking enhanced performance, ChatGPT Plus provides priority access and faster response times at a monthly subscription cost.

For professionals and businesses requiring more robust capabilities, ChatGPT Pro unlocks advanced reasoning features through the o1 pro mode, which includes enhanced voice functionality and improved performance on complex queries.

Developers looking to integrate ChatGPT into applications can access its API, a type of software interface. Pricing starts at approximately $0.15 per million input tokens and $0.60 per million output tokens for GPT-4o mini, while the more powerful o1 models come at a higher cost. A token is defined as a fundamental unit of data, like a word or subword, that an AI model processes to understand and generate text.
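To make the per-token pricing concrete, here is a minimal sketch of how a developer might estimate the cost of a single API request, using the GPT-4o mini rates quoted above (roughly $0.15 per million input tokens and $0.60 per million output tokens). The prices are illustrative and subject to change, and the token counts in the example are hypothetical.

```python
# Rough cost estimate for one API request, using the GPT-4o mini
# prices quoted in the text. Actual prices may change; check the
# provider's pricing page before relying on these numbers.

INPUT_PRICE_PER_MILLION = 0.15   # USD per 1M input tokens
OUTPUT_PRICE_PER_MILLION = 0.60  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    input_cost = (input_tokens / 1_000_000) * INPUT_PRICE_PER_MILLION
    output_cost = (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MILLION
    return input_cost + output_cost

# Example: a 2,000-token prompt that produces a 500-token reply.
cost = estimate_cost(2_000, 500)
print(f"${cost:.6f}")  # → $0.000600
```

At these rates, even thousands of such requests cost only a few dollars, which is why per-token pricing matters mainly at scale.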

One of ChatGPT’s greatest strengths is its versatility and conversational memory. It can handle a broad range of tasks, from casual conversation and creative writing to technical problem-solving, coding assistance, and business automation. When memory is enabled, ChatGPT can retain context across interactions, allowing for a more personalized user experience.

Another key advantage is its proven user base—with hundreds of millions of users worldwide, ChatGPT has undergone continuous refinement based on real-world feedback, improving its accuracy and usability. Additionally, GPT-4o’s multimodal capabilities allow it to process text, images, audio, and video, making it a comprehensive AI tool for content creation, analysis, and customer engagement.

While a free version exists, the most powerful features require paid subscriptions, which may limit accessibility for smaller businesses, independent developers, and startups. Another drawback is an occasional lag in real-time updates; even though ChatGPT has web-browsing capabilities, it may struggle with the most recent or fast-changing information. Lastly, its proprietary model means users have limited control over modifications or customization, as they must adhere to OpenAI’s data policies and content restrictions.

Google’s Gemini

Google’s Gemini series is renowned for its multimodal capabilities and its ability to handle extensive context, making it a versatile tool for both personal and enterprise-level applications.

General consumers and productivity users benefit from Gemini’s deep integration with Google Search, Gmail, Docs, and Assistant, making it an excellent tool for research, email drafting, and task automation. Business and enterprise users find value in Gemini’s integration with Google Workspace, enhancing collaboration across Drive, Sheets, and Meet. Developers and AI researchers can leverage its capabilities through Google Cloud and Vertex AI, making it a strong choice for building AI applications and custom models. Creative professionals can take advantage of its multimodal abilities, working with text, images, and video. Meanwhile, students and educators benefit from Gemini’s ability to summarize, explain concepts, and assist with research, making it a powerful academic tool.

Google Gemini is highly accessible, especially for those already familiar with Google services. Its seamless integration across Google’s ecosystem allows for effortless adoption in both personal and business applications. Casual users will find it intuitive, with real-time search enhancements and natural interactions that require little to no learning curve. Developers and AI researchers can unlock advanced customization through API access and cloud-based features, though utilizing these tools effectively may require technical expertise.

The current versions, Gemini 1.5 Flash and Pro, cater to different needs, with Flash offering a cost-efficient, distilled option and Pro providing higher performance. Meanwhile, the Gemini 2.0 series, designed primarily for enterprise use, includes experimental models like Gemini 2.0 Flash with enhanced speed and multimodal live APIs, as well as the more powerful Gemini 2.0 Pro.

Basic access to Gemini is often free or available through Google Cloud’s Vertex AI. Advanced usage, especially when integrated into enterprise solutions, is priced at roughly $19.99–$25 per month per user, with pricing adjusted to reflect added features like a 1-million-token context window.

Gemini’s main advantage over other AIs is that it excels in processing text, images, audio, and video simultaneously, making it a standout in multimodal mastery. It also integrates seamlessly with Google Workspace, Gmail, and Android devices, making it a natural fit for users already in the Google ecosystem. Additionally, it offers competitive pricing for developers and enterprises needing robust capabilities, especially in extended context handling.

However, Gemini’s performance can be inconsistent, particularly with rare languages or specialized queries. Some advanced versions may be limited by safety testing, delaying wider access. Furthermore, its deep integration with Google’s ecosystem can be a barrier for users outside that environment, making adoption more challenging.

Anthropic’s Claude

Anthropic’s Claude is known for its emphasis on safety, natural conversational flow, and long-form contextual understanding. It is particularly well-suited for users who prioritize ethical AI usage and structured collaboration in their workflows.

Researchers and academics who need long-form contextual retention and minimal hallucinations, as well as writers and content creators who benefit from its structured approach and accuracy, will find Claude an essential and beneficial AI assistant. Business professionals and teams can leverage Claude’s “Projects” feature for task and document management, while educators and students will find its safety guardrails and clear responses ideal for learning support.

Claude is highly accessible for those seeking a structured, ethical AI with strong contextual understanding. It is only moderately suitable for creative users, who may find its restrictive filters limiting, and less ideal for those needing unrestricted, fast brainstorming tools or AI-generated content with minimal moderation.

Claude 3.5 Sonnet is the flagship model, offering enhanced reasoning, speed, and contextual understanding for both individual and enterprise users. For businesses and teams, the Claude Team and Enterprise Plans start at approximately $25 per user per month (billed annually), providing advanced collaboration features. Individual users can access Claude Pro, a premium plan that costs around $20 per month, offering expanded capabilities and priority access. A limited free tier is also available, allowing general users to explore basic features and test its functionality.

Unlike most AIs, Claude excels in ethical AI safety, extended conversational memory, and structured project management, making it ideal for users who require reliable and well-moderated AI assistance. Its intuitive interface and organization tools enhance productivity for writers, researchers, educators, and business professionals.

However, there are instances when availability constraints during peak hours can disrupt workflow efficiency. Claude’s strict safety filters, while preventing harmful content, sometimes limit creative flexibility, making it less suitable for highly experimental or unrestricted brainstorming sessions. Additionally, enterprise costs may be high for large-scale teams with extensive AI usage.

DeepSeek AI

DeepSeek, a newcomer from China, has quickly gained attention for its cost efficiency and open-access philosophy. Unlike many established AI models, DeepSeek focuses on providing affordable AI access while maintaining strong reasoning capabilities, making it an appealing option for businesses and individual users alike. “DeepSeek R1 is one of the most amazing and impressive breakthroughs I’ve ever seen—and as open source, a profound gift to the world,” said Marc Andreessen, software engineer and co-founder of Netscape.

Being an excellent choice for cost-conscious businesses, independent developers, and researchers who need a powerful yet affordable AI solution, DeepSeek is particularly suitable for startups, academic institutions, and enterprises that require strong reasoning and problem-solving capabilities without high operational costs. It is highly accessible for individuals due to its free web-based model, and even developers and enterprises benefit from its low-cost API. However, organizations requiring politically neutral AI models or strict privacy assurances may find it less suitable, especially in industries where data security and regulatory compliance are paramount.

The latest model, DeepSeek-R1, is designed for advanced reasoning tasks and is accessible through both an API and a chat interface. An earlier version, DeepSeek-V3, serves as the architectural foundation for the current releases, offering an extended context window of up to 128,000 tokens while being optimized for efficiency.
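A 128,000-token window sounds abstract, so here is a minimal sketch of how a developer might estimate whether a document fits before sending it. It relies on the common rule of thumb of roughly four characters per token for English text; that heuristic and the reply budget are assumptions, and exact counts require the model's own tokenizer.

```python
# Rough check of whether text fits a 128,000-token context window,
# like the one DeepSeek-V3 advertises. Uses the ~4-characters-per-token
# heuristic for English text; exact counts need the model's tokenizer.

CONTEXT_WINDOW = 128_000
CHARS_PER_TOKEN = 4  # heuristic, not exact

def fits_in_context(text: str, reserved_for_output: int = 4_000) -> bool:
    """Estimate whether `text` plus a reply budget fits in the window."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens + reserved_for_output <= CONTEXT_WINDOW

# A short document easily fits; a very long one does not.
print(fits_in_context("hello " * 10_000))   # ~60k chars ≈ 15k tokens
print(fits_in_context("hello " * 100_000))  # ~600k chars ≈ 150k tokens
```

A check like this is only a first pass: production code would count tokens with the provider's tokenizer and often chunk or summarize documents that exceed the window.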

DeepSeek is free for individual users through its web interface, making it one of the most accessible AI models available. However, for business applications, API usage comes at a significantly lower cost than U.S. competitors, making it an attractive option for enterprises looking to reduce expenses. Reports indicate that DeepSeek’s training costs are drastically lower, with estimates suggesting it was trained for approximately $6 million, a fraction of the cost compared to competitors, whose training expenses can run into the tens or hundreds of millions.

One of DeepSeek’s biggest strengths is its cost efficiency. It allows businesses and developers to access powerful AI without the financial burden associated with models like OpenAI’s GPT-4 or Anthropic’s Claude. Its open-source approach further enhances its appeal, as it provides model weights and technical documentation under open licenses, encouraging transparency and community-driven improvements.

Additionally, its strong reasoning capabilities have been benchmarked against leading AI models, with DeepSeek-R1 rivaling OpenAI’s top-tier models in specific problem-solving tasks. As Anthropic co-founder Jack Clark wrote in his “Import AI” newsletter, “R1 is significant because it broadly matches OpenAI’s o1 model on a range of reasoning tasks and challenges the notion that Western AI companies hold a significant lead over Chinese ones.”

A notable problem with DeepSeek is its response latency, especially during periods of high demand, which makes it less ideal for real-time applications where speed is crucial. Censorship and bias are also potential concerns. DeepSeek aligns with local content regulations, meaning it may sanitize or avoid politically sensitive topics, which could limit its appeal in global markets. Additionally, some users have raised privacy concerns due to its Chinese ownership, questioning whether its data policies are as stringent as those of Western AI companies that comply with strict international privacy standards.

Microsoft’s Copilot

Microsoft’s Copilot is a productivity-focused AI assistant designed to enhance workplace efficiency through seamless integration with the Microsoft 365 suite. By embedding AI-powered automation directly into tools like Word, Excel, PowerPoint, Outlook, and Teams, Copilot serves as an intelligent assistant that streamlines workflows, automates repetitive tasks, and enhances document generation.

Ideal for businesses, enterprise teams, and professionals who heavily rely on Microsoft 365 applications for their daily operations, Microsoft’s Copilot is particularly beneficial for corporate professionals, financial analysts, project managers, and administrative staff who need AI-powered assistance to enhance productivity and reduce time spent on routine tasks. However, organizations that prefer open-source AI models or require flexible, cross-platform compatibility may find Copilot less suitable, especially if they rely on non-Microsoft software ecosystems for their workflows.

Microsoft 365 Copilot is available across Microsoft’s core productivity applications, providing AI-powered assistance for document creation, email drafting, data analysis, and meeting summarization. The service costs approximately $30 per user per month and typically requires an annual subscription. However, pricing can vary based on region and enterprise agreements, with some organizations receiving customized pricing based on their licensing structure.

One of Copilot’s most significant advantages is its deep ecosystem integration within Microsoft 365. For businesses and professionals already using Microsoft Office, Copilot enhances workflows by embedding AI-driven suggestions and automation directly within familiar applications. Its task automation capabilities are another significant benefit, helping users generate reports, summarize meetings, draft emails, and analyze data more efficiently. Furthermore, Copilot receives continuous updates backed by Microsoft’s substantial investments in AI and cloud computing, ensuring regular improvements in performance, accuracy, and feature expansion.

In contrast, one of the significant drawbacks of Microsoft’s Copilot is its ecosystem lock-in—Copilot is tightly coupled with Microsoft 365, meaning its full potential is only realized by organizations already invested in Microsoft’s software ecosystem. Limited flexibility is another concern, as it lacks extensive third-party integrations found in more open AI platforms, making customization difficult for businesses that rely on a broader range of tools. Additionally, some users report occasional response inconsistencies, where Copilot may lose context in long sessions or provide overly generic responses, requiring manual refinement.

Meta AI

Meta’s suite of AI tools, built on its open-weight LLaMA models, is a versatile and research-friendly AI suite designed for both general use and specialized applications. Meta’s approach prioritizes open-source development, accessibility, and integration with its social media platforms, making it a unique player in the AI landscape. It is ideal for developers, researchers, and AI enthusiasts who want free, open-source models that they can customize and fine-tune. It is also well-suited for businesses and brands leveraging Meta’s social platforms, as its AI can enhance customer interactions and content creation within apps like Instagram and WhatsApp.

Meta AI is highly accessible for developers and researchers due to its open-source availability and flexibility. However, businesses and casual users may find it less intuitive compared to AI models with more refined user-facing tools. Additionally, companies needing strong content moderation and regulatory compliance may prefer more tightly controlled AI systems from competitors like Microsoft or Anthropic.

Meta AI operates on a range of LLaMA models, including LLaMA 2 and LLaMA 3, which serve as the foundation for various applications. Specialized versions, such as Code Llama, are tailored for coding tasks, offering developers AI-powered assistance in programming.

One of Meta AI’s standout features is its open-source licensing, which makes many of its tools free for research and commercial use. However, enterprise users may encounter service-level agreements (SLAs) or indirect costs, especially when integrating Meta’s AI with proprietary systems or platform partnerships.

Meta AI’s biggest advantage is its open-source and customizable nature, allowing developers to fine-tune models for specific use cases. This fosters greater innovation, flexibility, and transparency compared to closed AI systems. Additionally, Meta AI is embedded within popular social media platforms like Facebook, Instagram, and WhatsApp, giving it massive consumer reach and real-time interactive capabilities. Meta also provides specialized AI models, such as Code Llama, for programming and catering to niche technical applications.

Despite its powerful underlying technology, Meta AI’s user interfaces and responsiveness can sometimes feel less polished than those of competitors like OpenAI and Microsoft. Additionally, Meta has faced controversies regarding content moderation and bias, raising concerns about AI-generated misinformation and regulatory scrutiny. Another challenge is ecosystem fragmentation; with multiple AI models and branding under Meta, navigating the differences between Meta AI, LLaMA, and other offerings can be confusing for both developers and general users.

AI’s Impact on the Future of Technology

As AI adoption grows, the energy demand for training and operating these models increases. Companies are developing more efficient AI models while managing infrastructure costs. Modern AI models, particularly those known as large language models (LLMs), are powerhouses that demand vast computational resources. Training these models involves running billions of calculations across highly specialized hardware over days, weeks, or even months.

The process is analogous to running an industrial factory non-stop—a feat that requires a tremendous amount of energy. The rise of AI assistants, automation, and multimodal capabilities will further shape industries, from customer support to content creation. “The worst thing you can do is have machines wasting power by being always on,” said James Coomer, senior vice president for products at DDN, a California-based software development firm, during the 2023 AI conference ai-PULSE.

AI competition will likely drive further advancements, leading to smarter, more accessible, and environmentally conscious AI solutions. However, challenges related to cost, data privacy, and ethical considerations will continue to shape the development of AI.

Sustainable AI and the Future

AI companies are actively addressing concerns about energy consumption and sustainability by optimizing their models to enhance efficiency while minimizing power usage. One key approach is leveraging renewable energy sources, such as solar and wind power, to supply data centers, which significantly reduces their carbon footprint. Additionally, advancements in hardware are being developed to support more energy-efficient AI computation, enabling systems to perform complex tasks with lower energy demands. These innovations not only help reduce environmental impact but also contribute to long-term cost savings for AI companies.

Beyond technological improvements, regulatory policies are being introduced to ensure AI growth aligns with environmental sustainability. Governments and industry leaders need to work together to establish guidelines that encourage responsible energy consumption while promoting research into eco-friendly AI solutions. However, the fear of governmental regulation often makes technology leaders hesitant to collaborate.

One voice at the forefront of global AI governance is Amandeep Singh Gill, the United Nations Secretary-General’s envoy on technology, who emphasizes the importance of collaborative governance in AI development—and sustainable development needs to be part of this cooperation and coordination.

“[W]e have to find ways to engage with those who are in the know,” he said in a September 2024 interview in Time. “Often, there’s a gap between technology developers and regulators, particularly when the private sector is in the lead. When it comes to diplomats and civil servants and leaders and ministers, there’s a further gap. How can you involve different stakeholders, the private sector in particular, in a way that influences action? You need to have a shared understanding.”

No matter the level of collaboration between the private and public sectors, companies need to aggressively explore emission-mitigation methods like carbon offset programs and energy-efficient algorithms to further mitigate their environmental impact. By integrating these strategies, the AI industry is making strides toward a more sustainable future without compromising innovation and progress.

Balancing Innovation and Responsibility

AI is advancing rapidly, with OpenAI, Google, Anthropic, DeepSeek, Microsoft Copilot, and Meta AI leading the way. While these models offer groundbreaking capabilities, they also come with costs, limitations, and sustainability concerns.

Businesses, researchers, and policymakers must prioritize responsible AI development while maintaining accessibility and efficiency. The Futurist: The AI (R)evolution panel discussion held by the Washington Post brought together industry leaders to explore the multifaceted impact of artificial intelligence (AI) on business, governance, and society. Martin Kon of Cohere explains that his role is securing AI for business with an emphasis on data privacy, which is essential for “critical infrastructure like banking, insurance, health care, government, energy, telco, etc.”

Because there’s no equivalent of Google Search for enterprises, AI, Kon says, is an invaluable tool in searching for needles in haystacks–but it’s complicated: “Every year, those haystacks get bigger, and every year, the needles get more valuable, but every enterprise’s haystacks are different. They’re data sources, and everyone cares about different needles.” He is, however, optimistic on the job front, maintaining that the new technology will create more jobs and greater value than many critics fear.

“Doctors, nurses, radiologists spend three and a half hours a day on admin. If you can get that done in 20 minutes, that’s three hours a day you’ve freed up of health care professionals. You’re not going to fire a third of them. They’re just going to have more time to treat patients, to train, to teach others, to sleep for the brain surgery tomorrow.”

May Habib, CEO of Writer, which builds AI models, is similarly optimistic, describing AI as “democratizing.” “All of these secret Einsteins in the company that didn’t have access to the tools to build can now build things that can be completely trajectory-changing for the business, and that’s the kind of vision that folks need to hear. And when folks hear that vision, they see a space and a part for themselves in it.”

Sy Choudhury, director of business development for AI Partnerships at Meta, sees a vital role for AI on the public sector side. “[I]t can be everything very mundane from logistics all the way to cybersecurity, all the way to your billing and making sure that you can talk to your state school when you’re applying for federal student–or student loans, that kind of thing.”

Rep. Jay Obernolte (R-CA), who led the House AI Task Force in 2024, acknowledges the need for “an institute to set standards for AI and to create testing and evaluation methodologies for AI” but emphasizes that “those standards should be non-compulsory…” And while agreeing that AI is “a very powerful tool,” he says that it’s still “just a tool,” adding that “if you concentrate on outcomes, you don’t have to worry as much about the tools…”

But some of those outcomes, he admits, can be adverse. “[O]ne example that I use a lot is the potential malicious use of AI for cyber fraud and cyber theft,” he says. “[I]n the pantheon of malicious uses of AI, that’s one of the ones that we at the task force worried the most about because we say bad actors are going to bad, and they’re going to bad more productively with AI than without AI because it’s such a powerful tool for enhancing productivity.”

Consumers can also do their part by managing AI usage wisely—turning off unused applications, optimizing workflows, and advocating for sustainable AI practices. AI’s future depends on balancing innovation with responsibility. The challenge is not just about creating smarter AI but also ensuring that its growth benefits society while minimizing its environmental impact.


8 Free Ghibli-Style AI Image Editors You Can Use Online Right Now | Tech News


It has been a week full of OpenAI-edited images, ever since the AI powerhouse introduced its most advanced image generator in GPT-4o. After the launch, the internet abounded with images generated or modified using the new tool, which the company described as “not just beautiful, but useful.” The most striking use case has been its ability to create images, or modify existing ones, in the style of the celebrated Studio Ghibli films of Japanese filmmaker Hayao Miyazaki. With the flurry of Ghibli-style reimagined images online, questions and concerns have been raised about copyright and artistic integrity.

Sin embargo, el editor de imágenes en GPT-4O no es accesible para todos, y ha habido una demanda abrumadora de herramientas que podrían editar imágenes en el estilo Gibli.

Las imágenes similares a los de Ghibli se han ganado en las redes sociales debido a sus detalles caprichosos y de luz blanda, cálidos y, en general, una sensación de cuento de hadas. Con herramientas como el editor de imágenes en GPT-4O, tan contentales como puede parecer, uno no es necesario usar un software de edición de imágenes avanzado o Photoshop para crear imágenes en segundos. En este artículo, enumeramos algunos de los recursos gratuitos que podrían permitir a los usuarios modificar sus imágenes en el anime de su gusto.

La historia continúa debajo de este anuncio

¿Cómo crear imágenes degbli-esque gratis?

Deep Dream Generator: This free platform uses AI to transform ordinary photos into striking images. It uses neural networks to turn pictures into dreamy, surreal visuals — imagine adding a tint of misty forests, lucid skies, and the feel of an idyllic painting. To use the site, go to the homepage, click ‘Free AI Image Generator’, upload your photo, and select a style. The site also lets users adjust the depth of the effect to strike the right balance. It can be a great tool for creating fantasy landscapes.

Prisma: This platform is available as a mobile app on iOS and Android. It is one of the most popular apps offering artistic filters, including filters inspired by renowned artists. The app can recreate photos as hand-painted images with natural textures and spontaneous strokes, much like Ghibli imagery. It is free to use, though users can subscribe to unlock a host of premium features. Many users have reported that the tool works best for portraits and scenic shots.

Grok: Grok, owned by xAI, comes integrated into X (formerly Twitter). Besides being a capable AI tool for looking up knowledge about anything under the sun, Grok can be handy for image generation. You can generate an image from scratch, or upload your favorite photos and ask the chatbot to reimagine them in your preferred styles. Besides transforming pictures into dreamy renditions, the chatbot can also generate hyperrealistic images of various objects from scratch. Grok is free to use; all you need is an X account.

LunaPic: This site may look old-school, but it packs a punch. The free site offers a wide range of image-editing capabilities: you can upload your images and transform them with hundreds of artistic effects and styles, with no sign-up required. Uploaded images can be edited to optimize contrast and saturation, and you can even add animations. It can be a great tool for anyone aiming for a hand-drawn anime look in their photographs.

PhotoFunia: This is a fun online tool that lets you play with your images. You can see your picture in a newspaper layout as breaking news, or even on billboards and magazine covers. The platform allows plenty of room for editing. While it may not specifically offer Ghibli-like images, it comes with filters that mimic vintage charm and fairy-tale themes to make pictures look straight out of a storybook. No registration is required, it offers hundreds of templates, and it works best for portraits and travel shots.

BeFunky: Another online editor offering a wealth of filters, including an artsy section with painting, cartoon, and watercolor effects. It has a clean interface with one-click effects and strikes a good balance between control and simplicity. The free tier includes plenty of features that could bring Ghibli-like depth and color to your images.

Fotor: This is essentially an easy-to-use image editor that combines AI effects with traditional photo editing. There are filters that give images a soft glow or a painterly feel, making it a great tool for bringing nostalgic elements to your pictures. It is free, with an optional premium tier, and includes an AI art generator and cartoon effects. Try uploading a photo under the AI Art tab to turn it into a fully reimagined Ghibli-inspired image reminiscent of ‘Spirited Away’ or ‘The Wind Rises’.

Flux: This app lets you transform images into Ghibli-style creations almost instantly, modifying an image in about 30 seconds. It also lets users edit and enhance images, and even convert them into videos. Flux calls its online tool Studio Ghibli AI Style, and it is essentially an AI image-generation tool. While it offers a range of editing options, users must sign up to try it.

How to get the best results

To get the most appealing results, upload high-resolution images: the better the original, the more detailed the output. Images with natural scenery such as trees, skies, and soft lighting are most likely to yield the perfect Ghibli look. You can also experiment with combinations of different filters. We recommend not overdoing it, as some of these filters can make images look overly artificial, robbing them of their old-world charm.

A word of caution

While uploading images to online AI tools and image editors may seem harmless, user discretion is key. Your safety depends largely on how a tool handles your data, and not all platforms are as conscientious as they appear. For instance, OpenAI’s advanced image editor in ChatGPT does not allow users to upload images of minors to Ghiblify or edit in any other way. And while tools like ChatGPT explicitly claim not to store user data of any kind, not every AI tool follows this premise.

Some AI companies store data indefinitely and may even train their models on it or share it with third parties. As a precaution, users should check the privacy policy of these tools or websites to see what they do with uploaded images. Look for clear statements on data retention and use, and on whether they sell or share your data; missing privacy statements or guidelines can be the biggest red flag. To stay safe from hacking, also check for HTTPS — reputable sites use encryption, while smaller sites may not, leaving your photos vulnerable. When in doubt, a quick Google search can reveal whether a site or tool has had complaints or violations reported against it. We recommend refraining from uploading deeply personal images, and especially those involving children, to online AI editing tools.

What Does the Sun Sextile Jupiter Mean for Your Zodiac Sign?

Get ready to ride the wave of luck and expansion!

On April 6, while transiting the bold, fiery sign of Aries, the Sun will meet Jupiter in Gemini in a lucky sextile, creating an exciting opportunity for growth, prosperity, and bold new beginnings.

If you’ve been craving something new and exciting, this transit could be the green light you’ve been waiting for. It’s the perfect time to take a risk, expand your horizons, and celebrate milestones.

The Sun in Aries is all about action, courage, and leadership. As the first sign of the zodiac, Aries embodies the spark of initiation, much like its planetary ruler, Mars. So with the Sun traveling through this fearless fire sign, it’s time to step forward and embrace challenges with confidence and courage.

Jupiter, on the other hand, is the planet of abundance, optimism, and expansion. In the cerebral sign of Gemini (curiosity, communication, and adaptability), Jupiter opens up a world of possibilities. It amplifies the urge for intellectual exploration, new ideas, and social connections.

This transit invites us to broaden our perspectives, think outside the box, and dive into new adventures that expand our personal and professional lives.

The Sun’s sextile to Jupiter on April 6 is a powerful blend of fiery passion and intellectual expansion. It’s a time when opportunities for growth and exploration feel abundant, and the cosmos rewards those willing to take risks and embrace change.

Whether you’re starting a new project, making a big decision, or feeling inspired to try something new, this transit offers the potential for prosperity and abundance.

Read on for what this means for your zodiac sign.

Aries (March 20 to April 19)

You’re the star of the show, Aries! Not only is it your solar-return season, but with the Sun lighting up your first house and joining forces with lucky Jupiter… well, you’re unstoppable! This is the perfect time to launch a personal project or refresh your image. Consider updating your brand or social media presence to match your energy. This transit is all about you, so own it and shine.

Taurus (April 19 to May 20)

This is a time for reflection and spiritual growth, Taurus. Aries rules your introspective 12th house of subconscious patterns, and Jupiter is illuminating your second house of finances and values, giving you the clarity to break free from old mental patterns that no longer serve you. Perhaps it’s time to let go of limiting belief systems around money or your self-worth. Trust that new, prosperous opportunities await once you do.

Gemini (May 20 to June 20)

It’s your lucky day, Gemini! As the Sun energizes your 11th house of community affairs, Jupiter brings expansion and opportunity to your sign (right at your doorstep!), making it an excellent time to connect with influential people and those who share similar goals and dreams. A networking opportunity could come your way, or you might unexpectedly join a group that aligns perfectly with your values.

Cancer (June 20 to July 22)

Reach for the stars: your career is in the spotlight, Cancer. As the Sun energizes and revitalizes your 10th house of public authority, lucky Jupiter makes you more receptive and attuned to your personal and professional growth. This not only offers spiritual insights but also pushes you to make bold moves in your professional life. Success is on the horizon.

Leo (July 22 to August 22)

Adventure awaits, Leo! You are ruled by the Sun, and as it lights up your ninth house of philosophical expansion, it joins forces with lucky Jupiter in your 11th house of partnerships, community affairs, and future visions. Whether it’s a last-minute trip, a class you’re enrolling in, or a new hobby you’re exploring, your mind and heart are open to new experiences. Carpe diem.

Virgo (August 22 to September 22)

This is a big deal: trust that the transformation you’re undergoing is for your highest good, Virgo. With the Sun shaking up your eighth house of intimacy and shared resources, it will meet lucky Jupiter in your 10th house of career and public reputation. An unexpected financial windfall or a profound emotional breakthrough could be on the way, helping you step into your personal power. Stay open to the unexpected.

Libra (September 22 to October 22)

Your partnerships and contractual agreements are in this transit’s spotlight, Libra. As the Sun energizes your relationship sector, it will meet audacious Jupiter in a harmonious sextile. Whether in love, business, or friendship, a new connection could feel destined. It’s a good time to work with others on joint ventures or collaborations that contribute to your personal and professional growth.

Scorpio (October 22 to November 21)

Your health habits can improve dramatically under this empowering synergy, Scorpio. With work and health matters at the forefront, the Sun’s sextile to Jupiter — activating your sixth house of wellness and eighth house of joint ventures — could inspire you with the confidence and energy you need to take on new challenges. Perhaps it’s time to shake up your routine or take on a new health goal. A new work opportunity or recognition could come your way, making your efforts feel all the more rewarding.

Sagittarius (November 22 to December 21)

Your creative juices are flowing, and that’s an understatement, Sagittarius. After all, it’s not every day that your lucky planetary ruler, Jupiter, joins forces with the Sun in your fifth house of love, passion, and self-expression. Crushing on someone special? Whether you’re working on a passion project, catching romantic feelings, or stepping into the spotlight with a creative endeavor, your aura is radiant and you’re ready to shine. Love could be spontaneous and exciting, too.

Capricorn (December 21 to January 19)

Home is where your heart is, so why not give it the love it deserves, Capricorn? With the Sun moving through your fourth house of home and family, in harmonious energy flow with Jupiter in your sixth house of improvement, logistics, and responsibility, you may feel called to upgrade your living space or create more room for your belongings. Perhaps it’s time to redecorate, move to a new space, or even strengthen family ties.

Aquarius (January 19 to February 18)

Use your words wisely: they’re more powerful than you realize, Aquarius. With the Sun energizing your curious third house of communication, your thoughts and exchanges will be key during this time. Still, as the Sun harmonizes with Jupiter in your fifth house of fertility and creative expression, you’re equally inspiring and confident in your approach. You might have an enlightening conversation that takes a relationship to the next level or sparks a new idea for a project.

Pisces (February 18 to March 20)

Money is flowing and abundance is calling, Pisces! As the Sun energizes your stability-seeking second house of comfort, finances, and values, it will meet Jupiter in your fourth house of home, family, and emotional ties. Ready to make that investment in your living space? Others may feel called to spend a little extra cash on a family outing. Trust your intuition: prosperity could come in surprising ways.
