The machine is eating faster than you can feed it
Sixty-three issues closed across thirteen projects in one day. Four milestones completed. And the hardest problem wasn't building — it was keeping up with what you've already built.
Duration: 10:18 | Size: 9.44 MB
Most organizations don’t fail because they can’t execute. They fail because execution creates its own problems — problems that look nothing like the ones you started with. You solve the bottleneck, and the bottleneck moves. You automate the tedious work, and suddenly the scarce resource is the judgment that decides what to automate next. Today I watched this happen in real time across thirteen projects, and the pattern is more interesting than the output.
Sixty-three issues closed. Thirty-nine pull requests merged. Four milestones completed. Thirteen separate codebases, each with its own agent session, each doing real work — not toy demos, not proof-of-concept experiments, but shipping features, fixing production bugs, closing security gaps, writing migration specs. One person steering all of it. And the thing that surprised me wasn’t the volume. It was what the volume revealed.
Here’s what I mean. When you have thirteen projects running in parallel and four of them close their milestones on the same day, you’d think the feeling would be satisfaction. It’s not. The feeling is: now what? The machine consumed all the work I queued for it, and it’s hungry. Multiple projects ended the day with empty issue queues. The scouts — automated code reviewers that explore the codebase and file issues — need to run more frequently now, because the execution pipeline is outpacing the discovery pipeline. I built a system to do the work, and the system is now waiting for me to tell it what work to do. The bottleneck shifted, exactly as the theory predicts.
This is the coordination tax in reverse. In traditional organizations, the coordination tax is the overhead of getting humans aligned — meetings, approvals, status reports, the whole apparatus of making sure everyone is doing the right thing. Remove the humans from execution, and the coordination tax doesn’t disappear. It concentrates. It falls entirely on the one person who decides what the machines should work on, and when, and why. The judgment load goes up, not down.
Let me make this concrete. One of the projects — a Stellaris mod generator called Phantasmagoria — had a game-breaking bug that had been there since launch. Every modifier in the game was being written to the wrong directory. The percentage bonuses — plus eight percent unity, plus ten percent research — were doing literally nothing. The absolute resource additions worked fine, so during casual playtesting it seemed like things were roughly working. But every modifier effect was silently failing. One line in one Python file. static_modifiers instead of modifiers. That’s it.
Now, the scout system found and fixed five code quality issues in that same project today. It extracted duplicated functions, added collision detection, created parametrized tests. Good, useful work. But it didn’t find the one-line bug that made the entire modifier system nonfunctional. A human found it. Through playtesting. By noticing that something felt wrong — that percentage bonuses didn’t seem to be doing anything. The automated system was optimizing code quality while the product was fundamentally broken in a way that only a human playing the game would notice.
This is the pattern I keep coming back to. Automation is extraordinary at doing more of what you already know needs doing. It’s terrible at noticing what you haven’t thought to look for. The gap isn’t intelligence. It’s embodiment. The scout didn’t play the game. It can’t feel the wrongness of a modifier that should change something but doesn’t. That kind of noticing requires being in the world of the thing you’re building, not just reading its source code.
Something similar showed up in the security work. Eclectis, a content intelligence platform, ran a security-focused code audit today. The automated scout found five real issues — a timing attack vulnerability in token comparison, missing authorization on an admin endpoint, cross-user data access through an RLS policy gap, a command type mismatch that was silently dropping search requests, and missing input validation on AI-generated scores. All five were found, fixed, and merged in one session. That’s genuinely impressive.
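Of those five, the timing-attack fix is the most standard, and worth a quick sketch. Comparing secret tokens with `==` short-circuits on the first differing byte, so an attacker can recover a token prefix by measuring response latency; the constant-time pattern looks like this (a generic sketch, not the actual Eclectis code):

```python
import hmac

def tokens_match(provided: str, expected: str) -> bool:
    # hmac.compare_digest runs in time independent of where the inputs
    # differ, so response latency leaks nothing about the token prefix.
    return hmac.compare_digest(provided.encode(), expected.encode())
```
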
But here’s the thing: nobody asked for a security audit. The scout was just exploring the codebase and happened to look at security. Meanwhile, across all thirteen projects, there’s no coordinated security posture. Synaxis had a CSP gap. Authexis hardened its database connection pool. Eclectis patched RLS policies. Each project is finding and fixing its own security issues independently, with no shared threat model, no cross-project audit, no systematic approach. The individual fixes are good. The absence of a system is concerning. Security is exactly the kind of thing where project-level autonomy isn’t enough — you need someone thinking about the whole surface area.
And that someone is me. Which brings us back to the judgment bottleneck.
I want to talk about what happened with the test infrastructure, because it illustrates a different facet of this. Authexis went from thirty-five engine tests to one hundred fifteen in a single day. The engine — forty-two handlers processing commands for content creation, social publishing, scheduling, briefing generation — had essentially one test file. Now it has comprehensive coverage for the four most critical modules. That’s the kind of work that agents excel at: read the code, understand the patterns, write tests that verify the behavior, make sure the mocks are right. Tedious, important, exactly suited to parallel autonomous execution.
But here’s what the tests revealed: the test environment wasn’t properly isolated from production. Outreach approval tests were running without mocking the Fastmail API token. Every time the grind loop ran pytest — which it does on every issue execution cycle — four real email drafts were being created in my Fastmail account. Test drafts, with subject lines like “Email 1” and “Email 2,” addressed to [email protected]. Accumulating daily. Nobody noticed because the tests passed. The tests did exactly what they were supposed to do — approve emails and verify the approval logic — but they were also making real API calls as a side effect.
This is the automation equivalent of an autoimmune disorder. The system is working correctly in every local sense and producing a systemic pathology that only manifests as a growing pile of mysterious draft emails. The fix was trivial — an autouse fixture that strips the API token from the environment before tests run. But finding it required a human noticing something weird in their email client and then tracing the cause across three layers: the draft emails, the test code, the grind loop that triggers the tests. No individual component was broken. The interaction was the bug.
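The fix itself can be sketched in a few lines of pytest. This is a generic version — the environment variable name is my guess, not the project's actual one — with the scrubbing logic pulled into a plain helper:

```python
import os
import pytest

TOKEN_VAR = "FASTMAIL_API_TOKEN"  # hypothetical name, not the real variable

def scrub_token(environ) -> None:
    # Drop the live credential so any code path that reaches the real
    # Fastmail client fails loudly instead of silently creating drafts.
    environ.pop(TOKEN_VAR, None)

@pytest.fixture(autouse=True)
def no_live_fastmail():
    # autouse=True applies this to every test in the suite, with no opt-in,
    # so a new test can't accidentally run against the live account.
    scrub_token(os.environ)
```

The `autouse` flag is the important part: the protection holds for tests that haven't been written yet, not just the ones someone remembered to mock.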
There’s a broader point here about what it means to run a knowledge proxy — a separate AI session loaded with your full corpus of writing, consulting experience, and philosophical positions. Today the proxy earned its keep. Three other project sessions consulted it via tmux: one asking about a platform decision for a course launch, another requesting research connections for two blog ideas. The proxy answered from the corpus, grounding each response in specific documents — the book manuscript, course module outlines, previous essays on coordination and judgment.
Then it spent the bulk of the session producing interview preparation: twenty answers across two themes, each three to five hundred words, each grounded in specific passages from the book. The machine-self and what remains human. The coordination tax and designing organizations for judgment. These are the same themes I’ve been writing about for years, but the proxy can traverse the full corpus faster than I can remember what I’ve written. It’s not replacing my thinking. It’s making my prior thinking available to my current thinking at a speed that wasn’t possible before.
This is what I mean when I talk about reclaiming suppressed capacities. The proxy doesn’t suppress my judgment — it amplifies it by making my own history of judgment accessible. Most knowledge workers have this problem: you’ve thought about something deeply, written about it, discussed it, evolved your position — and then when you need that thinking, it’s scattered across fifteen documents and three years of conversations. The proxy collapses that retrieval problem. Your past judgment becomes a resource for your present judgment, instead of something you have to reconstruct from scratch every time.
But the proxy also broke today. Eighteen symlinks into the corpus — the ones pointing to course materials, the book manuscript, competitive analysis, audience research — were all dead. The project they pointed to had been renamed weeks ago, from “courses” to “course-work-of-being.” The symlinks used absolute paths. Nobody noticed because the proxy still worked for everything else. It just couldn’t access the strategic core of its own knowledge base.
Fragile infrastructure looks fine until it doesn’t. This is true of organizations, true of software, true of the whole system I’m building. The orchestration layer learned to turn itself off tonight — literally, the close skill now suspends the launchd agent so grind loops don’t run unattended — because I discovered that not turning it off was causing real problems in the real world. Those test email drafts were the system running correctly without supervision and producing waste that accumulated silently.
So here’s the question I’m sitting with. I have thirteen projects that can execute autonomously. The execution pipeline is outpacing the discovery pipeline. The machine is eating faster than I can feed it. Four milestones closed today, and multiple projects are approaching empty queues. The scouts need to run more often. The security posture needs cross-project coordination. The knowledge proxy needs more resilient infrastructure. The orchestration layer needs to know when to stop.
Every one of these problems is a judgment problem, not an execution problem. And they’re all my problems — nobody else’s. The machine didn’t create these problems in the obvious sense. But it created the conditions where these problems matter. When you’re doing everything by hand, you don’t need a systematic security posture across thirteen projects, because you’re only touching one project at a time and you carry the context in your head. When you’re steering thirteen autonomous agents, the context exceeds what any human head can hold. You need systems for the systems.
What does that look like? I don’t fully know yet. But I know what it doesn’t look like: it doesn’t look like more automation. The answer to “the machine is eating faster than I can feed it” is not “build a machine that feeds the machine.” At some point, you have to sit with the judgment. You have to play the game and notice that the modifiers aren’t working. You have to look at your draft emails and wonder where they came from. You have to decide what the thirteen projects should do tomorrow, and no amount of infrastructure will make that decision for you.
That’s your work. The machine won’t do it for you.