Last week, we talked about how agentic AI is finally getting to work.
AI agents are now starting to plan, reason and carry out digital tasks without constant prompting.
Coders are using them to look for bugs and rewrite broken code. Sellers on Amazon are using them to help manage their inventories. And agentic AI is even being used to take on more complex issues.
For example, last month researchers published a paper on HealthFlow, a self-evolving research agent built to tackle medical research challenges.
Instead of waiting for a human prompt at every step, HealthFlow plans its own approach to research. It tests different strategies, learns from the results and improves its methods over time.
It’s like a junior researcher who gets smarter with every experiment. And in benchmark tests, HealthFlow beat top AI systems on some of the hardest health data challenges.
Yet as exciting as that is, these AI agents are still software. They’re trapped inside the digital world.
Or are they?
Robots Are Getting an Upgrade
On September 25, Google’s DeepMind introduced Gemini Robotics 1.5.
And with this release, agentic AI has become part of the physical world.
Gemini Robotics 1.5 is actually two models that work in tandem. Gemini Robotics ER 1.5 is a reasoning model. It can use tools like Google Search to break big goals into smaller steps and decide what needs to happen next.
Gemini Robotics 1.5 is a vision-language-action (VLA) model. It takes the subgoals from ER 1.5 and translates them into concrete movements like grasping, pointing and manipulating objects.
The combination of the two models is something new in robotics…
A system that thinks before it moves.
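To make that division of labor concrete, here’s a purely illustrative sketch of a “plan first, act second” loop, written in Python. The function names plan_subgoals and execute_subgoal are hypothetical stand-ins, not DeepMind’s actual API; they simply show how a reasoning model’s subgoals could feed an action model one step at a time.

```python
# Purely illustrative sketch (not DeepMind's API): a "think-then-act" loop where a
# reasoning model decomposes a goal into subgoals and a separate vision-language-action
# model turns each subgoal into concrete movements.

def plan_subgoals(goal: str) -> list[str]:
    # Hypothetical stand-in for a reasoning model (one that could also consult tools
    # like web search). Here it just returns a fixed plan for demonstration.
    return ["look up the local recycling rules", "sort each item into a bin", "confirm the result"]

def execute_subgoal(subgoal: str) -> bool:
    # Hypothetical stand-in for a vision-language-action model that maps a subgoal
    # plus camera input to motor commands. Here it just reports success.
    print(f"Executing: {subgoal}")
    return True

def run(goal: str) -> None:
    # The planner thinks first; the action model moves second, one subgoal at a time.
    for subgoal in plan_subgoals(goal):
        if not execute_subgoal(subgoal):
            print(f"Failed at: {subgoal}")
            break

run("Recycle these items according to local rules")
```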
DeepMind says these models are designed for multi-step, everyday tasks like sorting laundry, packing for the weather or recycling items based on local rules.
This kind of adaptability has been the missing piece in robotics for decades.
Factories are full of rigid machines that perform a single action, over and over again. But the moment the product changes, the robot has to be reprogrammed from scratch.
What DeepMind is developing is a robot that can generalize and make changes on the fly.
Just as important, they’ve introduced motion transfer: the ability to teach a skill once and share it across different robot bodies.
In one video, they showed a robot arm in the lab learning how to perform specific tasks. Gemini Robotics 1.5 then enabled Apptronik’s humanoid Apollo robot to reuse that knowledge without starting from scratch.

Image: DeepMind on YouTube
This will allow robots to rapidly expand the range of jobs they can do in the real world.
And it’s why DeepMind isn’t alone in these ambitions.
Nvidia has been racing down the same path. At its GTC conference in March, CEO Jensen Huang showed off something called GR00T, which acts like a “brain” for humanoid robots.
It’s a foundation model trained to help them see, understand and move more like people.
A few months later, Nvidia added the “muscle” when it introduced Jetson Thor, a powerful computer that sits inside the robot itself. Instead of sending every decision back to the cloud, it lets robots think and act on the spot, in real time.
Together, GR00T and Jetson Thor give robots both the intelligence and the reflexes they’ve been missing.
Amazon has also been moving in this direction. In 2023, the company began testing Digit, a humanoid robot from Agility Robotics, inside its warehouses.
Image: Agility Robotics
The trials were limited, but Amazon’s goal is obvious. A fleet of humanoid robots would not only never tire, but also never unionize.
Then there’s Covariant, a startup that launched its own robotics foundation model, RFM-1, in 2024.
Covariant’s robots can follow natural language instructions, learn new tasks on the fly and even ask for clarification when they’re not sure what to do. In other words, RFM-1 gives robots human-like reasoning capabilities.
That’s a huge leap from the mindless machines we’ve been used to.
Sanctuary AI is building robots equipped with tactile sensors. Their goal is to make machines that can feel what they’re touching.
It’s an ability humans take for granted, but it’s one that robots have always struggled with. Combine touch with reasoning and you can see how robots could soon handle the kind of unpredictable, delicate tasks that fill our daily lives.
But what do all these advances in robotics add up to?
Nothing less than what I’ve been pounding the table about for years.
The line between software and hardware is blurring as the digital intelligence of AI agents is being fused with the physical capabilities of robots.
Once that line disappears, the opportunities are endless…
And the market potential is staggering.
Goldman Sachs projects the humanoid market alone could reach $38 billion by 2035.
Meanwhile, the global robotics industry is projected to hit $375 billion within a decade, more than 5X its size today.
Here’s My Take
As always, there are reasons to temper optimism with caution.
After all, real-world environments aren’t the same as digital environments. Lighting changes, objects overlap and things break.
Dexterity and agility are still issues for robots, and safety is non-negotiable. A clumsy robot could injure someone.
What’s more, the costs of building and maintaining these systems remain high.
But if history tells us anything, it’s that breakthroughs rarely arrive fully polished.
I’m sure you remember the slow, unreliable dial-up internet of the 1990s. But those rough beginnings didn’t stop the internet from becoming the backbone of the global economy.
I believe that’s where we are with the convergence of agentic AI and robotics today…
But I expect things will move a lot faster from here.
Going forward, we’re going to start dealing with machines that can think and act in the same world we live in.
And the disruption that follows has the potential to dwarf anything we’ve seen so far.
Regards,
Ian King
Chief Strategist, Banyan Hill Publishing
Editor’s Note: We’d love to hear from you!
If you want to share your thoughts or suggestions about the Daily Disruptor, or if there are any specific topics you’d like us to cover, just send an email to [email protected].
Don’t worry, we won’t reveal your full name in the event we publish a response. So feel free to comment away!