Just one year ago, I wrote that Jensen Huang might have to eat his words.
At the time, Nvidia’s CEO had poured cold water on quantum computing by saying “very useful” quantum computers were probably about 20 years away.
More specifically, Huang said 15 years would be early, 30 would be late and 20 years was a timeline “a whole bunch of us would believe.”
Image: Wikimedia Commons
That comment hit quantum stocks hard and became a kind of shorthand for the industry’s biggest problem.
You see, the idea of quantum computing is incredibly powerful. It has the potential to break modern encryption, transform drug discovery and solve optimization problems that today’s computers can’t even begin to approach.
But quantum computers today are too fragile and error-prone to deliver real-world results at scale.
That’s what makes Nvidia’s latest move so surprising.
Because instead of sitting back and waiting for that 20-year timeline to play out, Jensen Huang is now actively trying to bring the quantum future a lot closer.
The Quantum Bottleneck
Last week, Nvidia unveiled “Ising,” a new family of open AI models built for two of quantum computing’s hardest problems: calibration and error correction.
Image: Nvidia
As I’ve written before, quantum computers rely on qubits. A traditional computer uses bits, which can only be a 0 or a 1. But qubits are different because they can exist in multiple states at once.
This is what gives quantum machines their potential to solve problems that classical systems can’t.
But it also makes them extremely fragile.
That’s because qubits are exquisitely sensitive to their environment. Heat, vibration and electromagnetic noise can all disrupt them.
Even reading a qubit can introduce errors.
Nvidia says the best quantum systems today still fail roughly once every thousand operations. And that becomes a huge problem when useful computations might require millions or even billions of steps.
So the challenge isn’t just building more qubits. It’s keeping them accurate long enough to complete a calculation.
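A quick back-of-the-envelope calculation shows why that failure rate is so punishing. If each operation succeeds 999 times out of 1,000, the odds of an entire computation finishing cleanly collapse as the step count grows. The numbers below are purely illustrative, using the 1-in-1,000 figure cited above:

```python
# Illustrative only: the probability that a computation with n steps
# completes without a single error, assuming each operation fails
# about 1 time in 1,000 (the rate cited above).
p_fail = 1 / 1000

for n in [1_000, 100_000, 1_000_000]:
    p_success = (1 - p_fail) ** n
    print(f"{n:>9,} steps -> {p_success:.6f} chance of a clean run")
```

At a thousand steps you have roughly a 37% chance of a clean run. At a million steps, the probability is effectively zero, which is why error rates, not qubit counts alone, are the bottleneck.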
That breaks down into two problems.
The first is calibration.
Quantum processors have to be constantly tuned so qubits behave the way engineers expect them to. That usually involves running repeated test circuits, measuring the results and adjusting control signals by hand or with basic optimization software.
Nvidia’s Ising models change that.
They’re trained on quantum system data and can learn how a processor behaves under different conditions.
Instead of trial-and-error tuning, the AI can predict the adjustments needed and apply them automatically. That reduces the time it takes to stabilize a system and keeps it operating closer to optimal performance.
That addresses the first problem, but not the second: error correction.
Even with perfect calibration, errors don’t disappear. They accumulate. To deal with that, quantum systems encode information across multiple physical qubits and use classical computers to detect and fix mistakes as they happen.
Today, that process is extremely inefficient. In many cases, it can require hundreds or even thousands of physical qubits just to produce a single reliable “logical” qubit.
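Using the article's own figures, the overhead compounds quickly. The machine size below is an illustrative assumption, but the per-logical-qubit cost comes straight from the range cited above:

```python
# Back-of-the-envelope using the overhead cited above:
# hundreds to thousands of physical qubits per logical qubit.
logical_needed = 100           # illustrative size for a useful machine
physical_per_logical = 1_000   # upper end of the cited range

total_physical = logical_needed * physical_per_logical
print(total_physical)  # 100,000 physical qubits for just 100 logical ones
```

That's why even modest logical-qubit counts translate into enormous hardware, and why anything that shrinks the overhead matters so much.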
That process generates massive amounts of data that has to be analyzed in real time. In fact, those signals often need to be processed in microseconds. If corrections come too late, the computation is already lost.
Nvidia’s Ising models are designed to handle that workload too.
They can decode error signals faster and more efficiently, which is essential if quantum systems are going to scale beyond small experiments.
Early results suggest Ising can improve accuracy on key quantum tasks by up to 3X, which is meaningful when quantum systems operate right on the edge of failure.
That doesn’t mean Huang has done a complete 180, or that Nvidia is now building its own quantum computer.
But the company is building the intelligence layer that helps quantum hardware function. In that sense, it’s following the same playbook it used in AI, positioning itself as the platform that everything else runs on.
And the market certainly noticed.
After Nvidia announced Ising during its Quantum Day event, shares of IonQ and Rigetti jumped sharply.
IonQ (NYSE: IONQ) was up 17%, while Rigetti (Nasdaq: RGTI) shot up 11%.
Quantum computing companies D-Wave (NYSE: QBTS) and Quantum Computing (Nasdaq: QUBT) also rallied as investors interpreted Nvidia’s move as a meaningful vote of confidence in the sector.
Those kinds of moves are rare for a single announcement in a sector this early, which tells you how closely investors are watching for any signal that the timeline is shortening.
But their reaction makes sense to me.
A year ago, Huang’s words helped knock the air out of quantum stocks.
But today Nvidia is effectively telling the market that the path to useful quantum computing could run through AI.
Here’s My Take
This is one of the clearest examples yet of what George Gilder and I call Convergence X.
The next great technology wave won’t come from one breakthrough in isolation. It will come from multiple frontier technologies advancing simultaneously and then reinforcing one another.
Huang’s prediction might still be right that large-scale quantum systems will take years to mature. But I believe Nvidia will help compress the quantum timeline because of the feedback loop the company is helping to create with its Ising models.
Better AI will improve quantum systems, and better quantum systems should eventually unlock new kinds of computing power.
That, in turn, will feed back into improving AI.
That’s how separate breakthroughs converge into a technological revolution.
Regards,
Ian King
Chief Strategist, Banyan Hill Publishing
Editor’s Note: We’d love to hear from you!
If you want to share your thoughts or suggestions about the Daily Disruptor, or if there are any specific topics you’d like us to cover, just send an email to [email protected].
Don’t worry, we won’t reveal your full name in the event we publish a response. So feel free to comment away!