When the queue goes empty
Most products don't fail at building. They fail at the handoff between building and becoming real. What happens when the code is done and the only things left are judgment calls?
I watched ten projects cross that line yesterday, and the most interesting thing was how different the failure modes looked on each side.
Eclectis, a content intelligence tool I’ve been building, had its first real user test. Not a unit test. Not a staging environment walkthrough. A person — a friend doing a Cowork session — signed up and tried to use it. Within minutes, the entire onboarding funnel collapsed. Email confirmation redirected to an empty page. The “Start discovering” button led to a blank screen. There was no indication that anything was happening behind the scenes. The signup worked. The backend worked. The pipeline worked. But the experience of being a new user was broken in nineteen different ways.
Here’s what nobody tells you about the gap between “it works” and “someone can use it.” The gap isn’t in your code. It’s in every assumption you made about what happens between two screens. You built the signup form. You built the content pipeline. You never built the thirty seconds between them where a real person is staring at their screen wondering if something went wrong.
We fixed all nineteen issues in a single day. Signup now shows a “check your email” confirmation. Email confirmation routes to onboarding instead of a dead page. The articles page polls for scan progress: “Scanning your sources… 2 of 4 complete.” Stripe was set up from scratch. An upgrade banner appears for free-tier users. And we found that the raw Anthropic API key was being sent to the browser. That one made me pause.
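The progress poll is the piece worth sketching, because it is exactly the kind of thing you skip when you only test the backend. This is a rough sketch, not Eclectis's actual code; the `ScanStatus` shape, the message wording, and the polling interval are my assumptions:

```typescript
// Hypothetical status shape returned by a scan-status endpoint.
type ScanStatus = { complete: number; total: number; done: boolean };

// Turn a status into something truthful to show a waiting user.
function progressMessage(s: ScanStatus): string {
  return s.done
    ? "All sources scanned."
    : `Scanning your sources… ${s.complete} of ${s.total} complete.`;
}

// Poll until the backend reports the scan finished, surfacing progress
// on every tick so a new user never stares at a blank screen.
async function pollScanStatus(
  fetchStatus: () => Promise<ScanStatus>,
  onUpdate: (msg: string) => void,
  intervalMs = 2000,
): Promise<void> {
  for (;;) {
    const status = await fetchStatus();
    onUpdate(progressMessage(status));
    if (status.done) return;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

The design point is small but load-bearing: the page always has something honest to say while the backend works, which is the difference between "it works" and "someone can use it."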
But the real lesson wasn’t the bug fixes. It was that a product can be code-complete and user-broken at the same time. Eclectis had passing tests, clean architecture, a solid data model. By every engineering metric, it was ready. By every human metric, it wasn’t even close. The distance between those two assessments is where most developer tools go to die.
Eclectis wasn’t alone. Three other projects — Dinly, Synaxis-h, and Scholexis — all independently reached the same state: the code is done, the scout passes find nothing meaningful, and everything that remains is a human decision. Dinly has implemented four of its five vision items and the fifth needs my input on LLM budget. Synaxis-h ran twelve scout passes and the codebase is comprehensively clean. Scholexis completed five core CRUD features in a day and reached 102 out of 105 on its Next.js port, with the remaining three all requiring human decisions, not code.
This is a phase transition that organizations rarely name. You go from “we need to build more” to “we need to decide what we want.” It feels like progress has stalled, but what’s actually happened is the bottleneck has moved. It moved from machines to humans. From typing to thinking. And most teams aren’t structured for that shift. They have standup meetings and sprint planning for the building phase. They have nothing for the “we built everything, now what” phase.
Then there’s the Simplebooks problem, which is the opposite failure. Two scout passes discovered that all 8,500 lines of a fully functional bookkeeping app — accounts, transactions, reconciliation, reports, CSV import, PDF generation — are sitting on unmerged feature branches. The main branch has nothing but Create Next App boilerplate. The app was built. It was never shipped. No PRs were ever created. It’s the software equivalent of writing an entire book and leaving the manuscript in a drawer.
This happens more than people admit. The building is the comfortable part. The merging, deploying, exposing your work to reality — that’s where the resistance lives. Simplebooks isn’t blocked by a technical problem. Issue #1, the merge strategy decision, is the only blocker. It’s a thirty-minute task that’s been deferred indefinitely because nobody enjoys that particular thirty minutes.
Something I haven’t talked about much is the amount of code that got removed yesterday. Eleven thousand lines across the fleet. Authexis alone shed 8,400 lines — the entire article processing pipeline, deprecated engine handlers, orphaned route directories. Phantasmagoria removed 1,467 lines of old generator code and prompt building. Synaxis-h cleaned another 1,200 lines of dead SCSS.
Dead code is a form of organizational lying. It tells the next person who reads the codebase that these functions matter, that this complexity is necessary, that you need to understand this to understand the system. None of that is true. The functions aren’t called. The complexity serves nothing. But removing dead code requires a kind of confidence that most developers avoid. You have to be certain enough to delete something that someone once thought was important. And you have to be right.
Here’s what I’ve noticed about the rhythm. The first scout pass in any project finds ten or twelve issues. The second finds eight. By the fourth pass, you’re finding three. Synaxis-h is on its twelfth pass and the returns are essentially zero. There’s a natural ceiling to how clean a codebase can get through automated scanning. Beyond that ceiling, the remaining issues are all judgment calls — do we need analytics consent? Should we add case studies? These aren’t bugs. They’re business decisions wearing technical clothes.
The fleet closed 113 issues across ten projects yesterday. 116 commits. Two complete pipelines rebuilt from scratch. Five core CRUD features in Scholexis. A full user-test-driven sprint in Eclectis. Security fixes landed independently in six projects.
And at 2 AM, the rollover pipeline we built the day before ran successfully for the first time. It paused the orchestrate agents, waited for in-flight work to drain, injected the rollover command into each session, aggregated the work logs, and pushed them to the blog repository. Ten narrative work logs appeared, written by the sessions themselves about their own work. No human touched them. The infrastructure bet from the previous day — the one that failed twice before working — paid off.
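The sequence is simple enough to sketch. This is not the real pipeline's code; the types, step names, and the `"/rollover"` command string are invented to show the shape of the loop:

```typescript
// Hypothetical handle to an agent session that can report in-flight work.
type Session = { id: string; inFlight: () => Promise<number> };

// Wait until no session reports in-flight work, polling each in turn.
async function drain(sessions: Session[], pollMs = 500): Promise<void> {
  for (const s of sessions) {
    while ((await s.inFlight()) > 0) {
      await new Promise((resolve) => setTimeout(resolve, pollMs));
    }
  }
}

// The rollover: pause, drain, inject, aggregate, push.
// Dependencies are injected so the ordering is the whole function.
async function rollover(
  sessions: Session[],
  pause: () => Promise<void>,
  inject: (id: string, cmd: string) => Promise<string>, // returns that session's work log
  push: (logs: string[]) => Promise<void>,
): Promise<string[]> {
  await pause(); // stop the orchestrate agents first
  await drain(sessions); // let in-flight work finish
  const logs: string[] = [];
  for (const s of sessions) {
    logs.push(await inject(s.id, "/rollover")); // each session writes its own log
  }
  await push(logs); // aggregate and push to the blog repository
  return logs;
}
```

The ordering is the point: pausing before draining, and draining before injecting, is what keeps the rollover from writing a log about work that is still happening.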
There’s a quiet satisfaction in watching machines write about what they did. Not because the writing is particularly good, but because the system is closing its own loops. The work happens, the work gets recorded, the record feeds the synthesis, the synthesis feeds the reflection. Yesterday’s reflection about the rollover failing was, in a way, the system reflecting on its own inability to reflect. And now that the pipeline works, tonight’s rollover will capture the day I’m describing right now, and the cycle will continue without anyone starting it.
What does it mean when the bottleneck shifts from building to deciding? When the code is done and the only things left are judgment calls? Most organizations treat this as a crisis — the project is “stalled,” the team needs to “unblock.” But maybe the building phase was never the point. Maybe the building phase was just the part we understood well enough to automate. And the deciding phase — the messy, human, uncomfortable part where you figure out what you actually want — that’s where the real work starts.
If you’ve been shipping features to avoid making decisions, the scout passes will eventually call your bluff. The queue will go empty. The codebase will be clean. And you’ll be left with a product that works perfectly and a list of questions only you can answer.