Dev reflection - February 24, 2026
Duration: 9:19 | Size: 10.7 MB
Hey, it’s Paul. Monday, February 24, 2026.
I want to talk about what happens when the thing that runs the factory needs more maintenance than the factory itself.
There’s a concept in manufacturing that most knowledge workers have never heard of but live inside every day. It’s called the maintenance-to-production ratio. How much of your total effort goes toward keeping the machines running versus actually making the thing you sell? In a well-run plant, that ratio is low. The machines hum. You focus on output. In a poorly run plant, you spend all day fixing the machines, and the product barely moves.
Here’s what’s interesting. When you automate knowledge work — when you build systems that write code, generate content, ship features — nobody talks about this ratio. We talk about throughput. We talk about velocity. We celebrate the output. Thirty commits in a day! Features shipping across four projects simultaneously! Look at all that production.
But if most of those commits are fixes to the automation system itself — if the pipeline that’s supposed to reduce friction is generating its own friction at a rate that demands constant attention — you haven’t eliminated work. You’ve moved it. You’ve traded one kind of labor for another, and the new kind is harder to see because it looks like progress. Every commit looks like a commit. Every PR looks like a PR. The git log doesn’t distinguish between “built the thing customers want” and “fixed the tool that builds the thing that builds the thing customers want.”
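To make the ratio concrete: here's a minimal sketch of measuring it against a git log. The keyword heuristics and function names are illustrative assumptions, not a real taxonomy; the point is only that the classification has to come from somewhere outside the log itself.

```python
# Rough sketch: estimate the maintenance-to-production ratio from commit
# messages. MAINTENANCE_HINTS is an assumed, illustrative keyword list --
# in practice the classification is exactly the judgment the git log lacks.

MAINTENANCE_HINTS = ("fix ci", "fix pipeline", "retry", "patch automation")

def maintenance_ratio(commit_messages):
    """Fraction of commits that maintain the automation vs. build the product."""
    if not commit_messages:
        return 0.0
    maintenance = sum(
        1 for msg in commit_messages
        if any(hint in msg.lower() for hint in MAINTENANCE_HINTS)
    )
    return maintenance / len(commit_messages)

log = [
    "Add export button to dashboard",
    "Fix CI runner out-of-disk error",
    "Retry flaky deploy step",
    "Ship weekly digest email",
]
print(f"maintenance-to-production ratio: {maintenance_ratio(log):.2f}")
```

A well-run plant, in this framing, is one where that number stays low without anyone having to watch it.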
This isn’t a complaint about automation. It’s an observation about a trap that’s easy to fall into. And I think it applies far beyond software.
Here’s the second thing I’ve been sitting with. Retries.
Imagine you hand a task to someone on your team. They come back with a draft. It’s not right. You give feedback. They try again. Not right. Feedback. Try again. Twelve times. Twelve attempts to get one thing done.
At some point, a reasonable manager asks: is this person the right fit for this task? Or more precisely — is the feedback loop working? Because twelve iterations isn’t persistence. It’s a signal that something in the communication is broken. Either the spec is ambiguous, or the reviewer’s standards are miscalibrated, or the person doing the work is missing context they don’t know they’re missing.
When a human does this, we recognize it immediately. We intervene. We say, “Let’s get in a room and talk through what’s actually needed here.”
When a machine does it, we let it run. Because compute is cheap and patience is infinite and eventually the thing lands. But “eventually lands” hides real cost. Not just in compute — in accumulated complexity. Each retry leaves sediment. Patches on patches. A solution that technically works but arrived through a path nobody would have chosen deliberately. The git history becomes an archaeological record of misunderstanding.
The deeper question this raises — and this is the one I think matters for anyone managing teams, not just automated pipelines — is: what’s your retry budget? At what point do you stop cycling and escalate? Most organizations have no explicit answer. People just keep iterating until something sticks or someone burns out. Making that threshold explicit — three attempts, then we stop and reassess — is one of those small structural decisions that changes everything downstream. It forces you to invest in clarity upfront instead of paying for ambiguity in rework.
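The threshold is simple enough to write down. Here's a hedged sketch of a retry budget that escalates instead of cycling; the names (`run_with_budget`, `RetryBudgetExceeded`) are hypothetical, and the interesting part is the exception, which forces the stop-and-reassess conversation rather than letting attempt thirteen happen silently.

```python
# Sketch of an explicit retry budget: iterate a bounded number of times,
# then escalate to a human instead of grinding on. All names here are
# illustrative, not from any particular library.

class RetryBudgetExceeded(Exception):
    """Raised when iteration stops and the spec itself gets reassessed."""

def run_with_budget(attempt, accept, budget=3):
    """Call attempt(n) up to `budget` times; return the first accepted draft."""
    for n in range(1, budget + 1):
        draft = attempt(n)
        if accept(draft):
            return draft
    raise RetryBudgetExceeded(
        f"{budget} attempts without acceptance: stop and clarify the spec"
    )
```

The design choice worth noting: the budget lives in the caller, not the worker. The thing doing the work never decides how long it's allowed to flail.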
Third thing. And this one’s the big one.
There’s a moment in the life of any platform — any tool, any product, any organization — where the bottleneck shifts. You spend months or years building capability. Can we do this? Can we ship that? Can we handle this scale? And then one day you wake up and the answer to all of those questions is yes. You can build anything. The machinery works.
And the bottleneck moves to: what should we build? What should we say? What content goes on this platform? What’s actually worth a user’s time?
This is the shift from engineering to editorial. From capability to judgment. And it’s a shift that most technical cultures are catastrophically unprepared for.
I’m watching it happen across multiple projects right now. The platforms work. The schemas are in place. The delivery mechanisms — email, podcast feeds, app views — they’re built. The question is no longer “can we get this to users?” The question is “what do we put in front of them that’s actually worth their attention?”
And here’s the thing that should make every builder uncomfortable: the tools that got you to capability are almost useless for editorial judgment. You can automate code generation. You can automate testing. You can automate deployment. You cannot automate the decision about whether this piece of content is good enough, whether it serves the person reading it, whether it respects their time. That’s judgment. That’s taste. That’s the part of you that’s irreducibly human.
The bottleneck always moves. And it always moves toward the thing machines can’t do. First it was computation. Then it was connectivity. Then it was code. Now it’s judgment about what’s worth making in the first place.
If you’re building anything right now — a product, a team, a career — the question isn’t whether you can keep up with the machines. You can’t. The question is whether you’re developing the capacities that matter once the machines handle everything else. Discernment. Editorial instinct. The willingness to say “we could build this, but we shouldn’t.”
That’s not a technical skill. That’s a human one. And the window for developing it is right now, while the bottleneck is still in transit.
So — where’s your bottleneck today? And are you sure it’s still where you think it is?