I had the pleasure of attending an intimate conference of software gurus, creators, and practitioners, including some of the people who invented object-oriented design and agile development. The Chatham House Rule prevents me from disclosing companies and names, but think of the biggest tech companies, Fortune 1000 brands, and famous authors of seminal software development books — and me, feeling a bit out of place. It’s been years since I coded a compiler in Turbo Pascal or generated n-dimensional linear algebra solutions in APL or built business applications for clinical healthcare.
But I do think about software a lot. In this post, I muse on what I heard at the “unconference” and encourage you to get more details from day one sessions and day two sessions. In each bullet below, I report what I heard and give you my personal view on what we should be doing. For a deeper dive into software development or the future of software, reach out to Forrester analysts Diego Lo Giudice for software development, Ken Parmelee for application generation, and Devin Dickerson for all things engineering.
- The Moltbook crustacean caught the imagination of all. It was great timing that this crazy agent-to-agent society (human-directed or not) sprang up over the weekend before the retreat. It seemed to toll the death knell of human relevance at just the moment when we needed to reassert control over the runaway AI train.
- “Welcome to Gas Town” triggered an existential identity crisis. When an appgen engine can run circles around a development squad, what’s left for a software engineer? There was real awareness and, I suspect, personal anxiety: if AI can do the work of people, who am I, and what the heck am I going to do?
- Gurus are jazzed by autonomous software creation. Tokenomics be damned (in one instance, $300,000 in API calls to Claude to generate an application that generates applications). Surely those costs will come down. But will the software be any good? If Anthropic’s project to vibe-code a C compiler is any indicator, yes, it will.
- “Just because you can doesn’t mean you should.” There was lots of enthusiasm for fast engineering. And lots of concern over the quality, efficacy, and safety of the software that AI was building. The idea of slowing code generation down to let the human processes of deciding and protecting catch up surfaced again and again, and the realities of enterprise software reasserted dominance in the vibe.
- Nobody seems worried about the economics of AI. When I raised the question of whether it made any energy sense to ask an LLM to do what a piece of free software running on a phone already does, the answer was, “We’ll build more nuclear reactors.” Hmm. I guess I’m waiting for an AI brain as efficient as a human one, able to process at lightning speed fueled only by a cup of coffee and a Snickers bar.
- Software leaders in large companies are worried about their teams. In one particularly boisterous conversation, the room agreed that we’ll start hiring junior programmers again, and they’ll come in with AI skills to develop software. We realized it’s the mid-career engineers, the ones energized by the joy of coding, who are in trouble.
For myself, I counseled caution, closing out day one with this: “We have a design choice. We can let the AI tell us what to do. Or we can tell the leaders of AI companies what to do. The choice is ours, not theirs.”