I’m the father of two young girls, and I’m fascinated by how they learn and how their behaviors change as they get older.
Last week, we talked about how artificial intelligence is starting to behave differently too. It can now stay with a task long enough to act, adapt and make decisions without a human guiding every step.
We saw how this is playing out in tools like Claude Code and in public experiments like Moltbook, where AI agents are allowed to interact, remember context and build on prior behavior without constant human supervision.
If you follow Dario Amodei, Anthropic’s CEO, this shouldn’t come as much of a surprise. He’s been talking about this trajectory for years.
Recently, he pulled his thoughts together into a long essay that helps explain why this moment feels different.
And why the next phase of AI will test more than just the technology itself.
AI’s Awkward Adolescent Phase
Dario’s essay is called The Adolescence of Technology. And he uses the metaphor of adolescence deliberately.
Adolescence is about uneven growth: you gain the ability to act before you fully understand when, and whether, you should. That’s the situation we’re in with AI right now.
In the essay, Dario frames this moment as: “entering a rite of passage, both turbulent and inevitable, which will test who we are as a species.”
As he puts it: “Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political and technological systems possess the maturity to wield it.”
That framing lines up closely with what we’ve been talking about recently in the Daily Disruptor.
Persistence is giving AI systems the ability to work longer, remember more and operate across more tools and environments than they could even a year ago. That’s making the potential of AI much easier for us to see.
In Dario’s view: “it cannot possibly be more than a few years before AI is better than humans at essentially everything.”
And the effects are already visible inside the labs building these systems. He notes: “AI is now writing much of the code at Anthropic, it is already substantially accelerating the rate of our progress in building the next generation of AI systems.”
I’ve seen that echoed across the industry. Developers are openly describing workflows where AI handles large portions of coding, testing and iteration.
One recent example comes from Andrej Karpathy, a founding member of OpenAI, who has described this shift in his own work.
It represents the awkward side of technological adolescence.
As AI takes on more of the hands-on work, people will find themselves spending less time doing it themselves. That means humans will increasingly rely on AI to plan, execute and iterate on complex tasks that once required direct expertise.
At the same time, this persistence is already creating a powerful feedback loop in development.
AI helps build better software. Better software produces better AI. And better AI speeds up the next round of progress.
Each improvement shortens the distance to the next one.
Today, what once took years now unfolds in months. And we’re getting close to the point when AI systems begin contributing meaningfully to the creation of their own successors.
How impactful will this next phase of AI be?
Dario asks readers to imagine something like a “country of geniuses in a data center” as a way to think about scale. If you can spin up millions of capable AI workers at low cost, the impact will be felt well outside the tech sector.
It will have a huge impact on markets and labor. It will affect geopolitics. And ultimately, it will influence who holds power in the first place.
That’s why the risks surrounding AI deserve attention now, not later.
Dario lays out several broad categories, including misuse, concentration of power, economic disruption and the challenge of maintaining control over systems that operate autonomously at scale.
But he’s not only worried about bad actors. He’s concerned about momentum.
Once a technology proves useful, incentives take over. Companies deploy it to stay competitive, and governments deploy it to stay relevant.
But regulation rarely keeps pace with adoption.
AI works, so it spreads. As it spreads, pausing its progress becomes a lot harder. And by the time society decides to debate the implications, the infrastructure is already embedded.
That dynamic, more than any individual risk category, is what makes this phase of technological adolescence worth watching.
After all, we saw this same thing play out with the internet, and I’ve written about those consequences before. But AI’s economic impact promises to be more far-reaching.
Analysts estimate that AI could add nearly $20 trillion to the global economy by 2030, more than the GDP of any single country today. McKinsey outlines the scale of that potential impact in the chart below.

In the U.S., AI-related investment already contributed over 1% to GDP growth in early 2025, rivaling the impact of the dotcom boom at its peak.
But economic scale doesn’t guarantee technical maturity.
And the increased persistence of newer AI models might not be the straightforward path to better outcomes that you’d expect.
In recent studies, Anthropic found that giving AI models more time to reason doesn’t always improve performance. In some cases, it actually makes answers worse.
As tasks stretch out, models can fixate on irrelevant details, overcomplicate simple problems or reinforce flawed assumptions that compound over time.
In other words, longer thinking isn’t the same thing as better thinking.

Source: Anthropic
We’re now learning how to manage systems that can work longer without letting them drift. Persistence raises the ceiling on what AI can do. But without careful context management, it also raises the cost of mistakes.
That’s why Dario spends so much time on institutional readiness in his essay.
His argument isn’t that AI progress should stop. It’s that society needs to grow up alongside it.
He’s explicit that the upside of AI is massive, writing that it could drive: “enormous advances in biology, neuroscience, economic development, global peace, and work and meaning.”
But getting there requires execution. Society must build rules, incentives and safeguards that keep pace with AI capability.
Because the adolescence he describes isn’t a distant phase ahead of us.
We’re already in it.
Here’s My Take
Dario Amodei isn’t predicting an AI apocalypse.
He’s pointing out that its capabilities are arriving faster than the structures meant to guide it, and that ignoring that gap would be a huge mistake.
Adolescence is a fragile stage. It can lead to growth, or it can lead to problems that take years to undo.
The last few weeks have given us a glimpse of where AI is headed.
We need to make sure this technology grows up the right way, with the right guardrails, incentives and expectations guiding it.
Regards,
Ian King
Chief Strategist, Banyan Hill Publishing
Editor’s Note: We’d love to hear from you!
If you want to share your thoughts or suggestions about the Daily Disruptor, or if there are any specific topics you’d like us to cover, just send an email to [email protected].
Don’t worry, we won’t reveal your full name in the event we publish a response. So feel free to comment away!