The difference between persistence and stubbornness
Duration: 9:19 | Size: 10.7 MB
Hey, it’s Paul. Monday, February 24th, 2026.
I want to talk about persistence. Specifically, the difference between persistence and stubbornness — and why that difference might be the most important design problem in any system that operates on your behalf.
Here’s the setup. Imagine you’ve delegated a task. Not to a person — to a process. A system. And the system tries, fails, tries again, fails again, tries a third time. By the fifth attempt, you might check in. By the eighth, you’re probably intervening. By the twelfth attempt at the same task, something has gone wrong. Not with any single attempt — each one might be perfectly reasonable in isolation — but with the pattern. Twelve attempts means the system doesn’t know the difference between “almost there” and “fundamentally stuck.”
This happened in my work today. An automated pipeline took twelve passes at a single feature before getting it right. Each pass was technically sound — the system caught errors, requested changes, tried again. It was doing exactly what it was designed to do. And that’s the problem.
Because persistence without reflection is just expensive repetition.
Now, if you’ve ever worked inside a large organization, this should sound familiar. Think about the initiative that keeps getting relaunched. The quarterly strategy refresh that produces a new slide deck but the same outcomes. The team that reruns the same retrospective format every two weeks and wonders why nothing changes. The system is persistent. It follows the loop. But it never steps outside the loop to ask whether the loop itself is the issue.
This is different from what we usually diagnose as organizational failure. We’re comfortable talking about inertia — systems that won’t move. We understand resistance to change. But this is the opposite problem: a system that moves constantly and still doesn’t get anywhere. It’s motion without learning. And it’s harder to spot because it looks like work. It generates artifacts. It produces PRs, reports, deliverables. The dashboard says things are happening. But the thing that needed to happen isn’t happening.
The fix, at least in my system, is something I’m calling a circuit breaker. After a set number of failed attempts, stop. Don’t retry. Escalate. Flag it for a human. Say: I’ve tried this several times and I can’t tell if I’m close or lost.
That sounds simple. It’s not. Because building a circuit breaker means the system has to admit it might be stuck. And that requires a kind of self-awareness that most systems — technical or organizational — are not designed to have.
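For the mechanically minded: the circuit breaker described above can be sketched in a few lines. This is a minimal illustration, not the actual pipeline code — `attempt` and `escalate` are hypothetical stand-ins for whatever does the work and whatever notifies a human.

```python
# Minimal circuit-breaker sketch: retry a bounded number of times,
# then stop and escalate instead of looping forever.
# All names here are illustrative, not from any real pipeline.

def run_with_circuit_breaker(attempt, max_attempts=3, escalate=print):
    """Call `attempt` until it succeeds or the breaker trips.

    `attempt` returns a result on success and raises on failure.
    `escalate` is whatever flags a human (here: just print).
    """
    failures = []
    for n in range(1, max_attempts + 1):
        try:
            return attempt()
        except Exception as exc:
            # Record each failure instead of retrying blindly.
            failures.append(f"attempt {n}: {exc}")
    # Breaker trips: admit being stuck rather than trying a twelfth time.
    escalate(
        f"Tried {max_attempts} times and can't tell if I'm close or lost:\n"
        + "\n".join(failures)
    )
    return None
```

The interesting design choice is that the breaker's output is a confession, not a result: the escalation message carries the failure history so the human starts with context instead of a blank "it broke."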
Here’s the second thing I’ve been thinking about. Some work flows through automated systems beautifully, and some doesn’t. Today I watched two very different patterns play out. One project had a clean sequence of small, well-defined tasks — set up this database table, then this one, then wire them together. Each task built on the last. The pipeline moved through them like water. Another project had a single complex feature with ambiguous boundaries, and that’s where the twelve-attempt saga happened.
The difference wasn’t the difficulty of the work. It was the decomposition of the work.
This maps onto something I’ve seen in every consulting engagement I’ve ever done. The teams that struggle with execution rarely have an execution problem. They have a scoping problem. The work isn’t broken into pieces that match the capabilities of the system doing the work. And “the system” here could be an automated pipeline, or it could be a cross-functional team of humans. The principle is the same.
When you hand someone — or something — a task that’s too large, too ambiguous, or has unclear completion criteria, you get churn. You get twelve attempts. You get the weekly status meeting where everyone reports “in progress” on the same item for a month. The system isn’t failing. The input to the system is wrong.
This is a design skill that doesn’t get enough respect. The ability to take a fuzzy goal and decompose it into units that a given system can actually execute — that’s not project management busywork. That’s the highest-leverage intervention available. And it’s becoming more important, not less, as the systems doing the execution get more capable. A more powerful engine doesn’t help if you’re feeding it the wrong-shaped fuel.
Third idea, and this one’s been nagging at me. I shipped a newsletter today. One commit. One piece of content. Meanwhile, across my other projects, the pipeline generated dozens of commits, PRs, database migrations, infrastructure changes. The newsletter was the smallest output by every quantitative measure. And it might have been the most important thing that happened.
Because the newsletter is where the reflection lives. It’s the layer that looks at everything else and asks: what does this mean? What pattern is emerging? What should I pay attention to next?
Without that layer, you have a system that produces. With it, you have a system that learns.
This is the part that’s hard to automate and dangerous to skip. I’ve watched organizations optimize their execution machinery — better tools, faster cycles, more throughput — while starving the reflection function. No one has time to write the post-mortem. The retrospective gets cut because there’s too much work to do. The strategy offsite gets shortened to a half-day because Q3 is busy.
And then they wonder why they’re efficient at doing the wrong things.
The reflection layer is not overhead. It’s the steering mechanism. You wouldn’t build a faster and faster car and then remove the steering wheel to save weight. But that’s what happens when organizations — or individuals — treat synthesis and reflection as luxuries.
Here’s what I actually do: I build the reflection into the system itself. The pipeline that runs my projects also generates the synthesis that feeds the newsletter that forces me to articulate what I’m learning. It’s not separate from the work. It’s part of the work. And the fact that I can automate around the reflection — handle the logistics, the formatting, the publishing — means I spend more time on the thinking, not less.
Last thing. Today I made my pipeline run four projects simultaneously instead of one at a time. Faster, obviously. But here’s what happened immediately: every failure mode got worse. Bugs that were invisible when one thing ran at a time suddenly surfaced under concurrency. A logging error that was harmless in sequence became a data corruption risk in parallel. The kill switch — the ability to stop everything with a keystroke — broke in certain timing windows.
Parallelization is not just “do more things.” It’s “discover new ways things can go wrong.”
You see this in organizations that scale. The startup that worked beautifully with eight people hits twenty and suddenly nothing flows. The processes that were invisible — because one person just handled it — now need to be explicit, documented, defended. The failure modes aren’t bigger versions of the old failure modes. They’re new failure modes that didn’t exist at the previous scale.
And the most dangerous ones are the ones that compromise your ability to intervene. My kill switch breaking under concurrency is the equivalent of a management team that can’t course-correct because the organization is moving in too many directions simultaneously. You need the override to work every time. Not most of the time. Every time.
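The standard fix for an override that races is a single shared signal that every worker checks, rather than per-worker stop logic with its own timing windows. Here's a minimal sketch using Python's `threading.Event` as the kill switch — illustrative names, not the real pipeline:

```python
# Concurrency-safe kill switch: one shared Event, polled by every
# worker. Setting it stops everything regardless of timing.
import threading
import time

kill_switch = threading.Event()

def worker():
    # Check the switch on every iteration: a stop request set at any
    # moment is observed on the next pass, with no missed window.
    while not kill_switch.is_set():
        time.sleep(0.01)  # stand-in for one unit of pipeline work

def run_parallel(n_workers=4, run_for=0.05):
    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    time.sleep(run_for)   # let the workers run briefly
    kill_switch.set()     # the keystroke: stop everything, everywhere
    for t in threads:
        t.join(timeout=1)
    # The override must work every time: no worker may survive it.
    return all(not t.is_alive() for t in threads)
```

The point of the pattern is that stopping is the workers' responsibility, not the controller's: the controller flips one flag, and every worker is obligated to notice it. Overrides that try to reach into each worker and halt it are the ones that break under concurrency.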
So the question I’m sitting with tonight isn’t “how do I go faster.” It’s: at what point does the monitoring burden of a parallel system exceed the throughput gain? When does the cost of keeping your hands on the wheel outweigh the benefit of having more wheels spinning?
I don’t have the answer. But I know the answer isn’t “just add more wheels.”
That’s it for today. Talk to you tomorrow.