Dev reflection - February 23, 2026
Duration: 8:52 | Size: 10.2 MB
Hey, it’s Paul. Monday, February 23, 2026.
I want to talk about pacing. Not productivity, not velocity — pacing. Because I think we’re about to discover that a lot of what we called “workflow” was actually a rhythm our brains depended on, and we didn’t know it until it was gone.
Here’s something I noticed this weekend. I had two very different projects running simultaneously. One is a SaaS product — payment systems, security hardening, admin tooling. The other is a newsletter — editorial work, formatting, scheduling across platforms. Completely different domains. But both hit the same wall on the same day, and the wall wasn’t what I expected.
The wall wasn’t implementation. Implementation was fast. Scary fast. Six features merged in a single day on the software side. A full essay drafted, edited, reformatted for multiple channels, and scheduled on the editorial side. The AI handled execution beautifully.
The wall was me.
Specifically: my ability to make decisions fast enough to keep up with the thing that was building what I’d decided.
And that’s a different problem than any of us were trained for.
Let me make this concrete. On the software side, I’m running a two-tab workflow. One tab is grinding — an AI agent writing code from detailed specifications. The other tab is me, reviewing what it produced and writing the next batch of specs. Tab one implements. Tab two thinks. Simple enough.
But here’s what actually happened. Six pull requests land in quick succession. I’m reviewing them. And the review process catches three bugs that would have shipped broken. A missing cron job that would have let trial accounts run forever — just… never expire. Checkout sessions that didn’t verify whether a user actually belonged to the workspace they were subscribing to. Hardcoded subscription statuses that completely ignored what Stripe was actually reporting.
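To make the second of those bugs concrete, here's a minimal sketch of the guard that was missing. Everything in it is hypothetical — the function name, the workspace shape, the exception — it's an illustration of the check, not code from the actual project:

```python
class AuthorizationError(Exception):
    """Raised when a user acts on a workspace they don't belong to."""
    pass

def create_checkout_session(user_id: str, workspace: dict, price_id: str) -> dict:
    # The missing guard: confirm the requesting user actually belongs
    # to the workspace they're subscribing on behalf of. Without it,
    # any authenticated user could start a checkout for any workspace.
    if user_id not in workspace["member_ids"]:
        raise AuthorizationError("user is not a member of this workspace")
    # ...hand off to the payment provider here...
    return {"workspace_id": workspace["id"], "price_id": price_id}
```

One line of code. The hard part isn't writing it — it's noticing, in a diff, that it isn't there.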
None of these are exotic bugs. They’re the kind of thing you’d catch naturally if you were writing the code yourself, because you’d think through the edge cases as you typed. The act of implementation was the review. Your fingers on the keyboard, your eyes on the logic — that slowness was doing double duty. It was building the thing and stress-testing it at the same time.
Remove the slowness, and you remove the built-in quality check.
So now review isn’t a secondary activity. Review is the primary activity. And it requires a completely different kind of attention than implementation ever did. You’re not building. You’re inspecting. You’re reading diffs instead of writing code. You’re trying to spot what’s missing from something someone — something — else wrote. That’s cognitively harder than writing it yourself, in some ways. Absence is always harder to detect than presence.
The editorial side showed the same pattern in miniature. The AI drafted a perfectly good essay. Clean argument, well-structured, solid build to a strong conclusion. One problem: it was structured for a reader who was already committed. Slow build, payoff at the end.
But this was going out as a newsletter. Email. LinkedIn. Channels where you have about three seconds before someone scrolls past. The punch needed to be at the top, not the bottom.
The AI didn’t know that. Why would it? It wrote a good essay. I had to recognize that the distribution context required a different structure. That’s not a writing skill. That’s a medium-awareness skill. It’s editorial judgment about where and how something will be read.
And I caught it during review. Not during drafting. The drafting was already done. The AI had moved on. I was the one who had to pull it back and restructure.
So here’s the question this raises, and it’s not a small one: what happens to organizations when the review layer becomes the bottleneck?
Think about how most companies are structured. You have people who do work and people who check work. Managers, QA teams, editors, compliance officers. The ratio has always been weighted toward the doers, because doing was slow. You needed a lot of people implementing to keep the pipeline full, and a smaller number reviewing to keep quality up.
Flip that. Make implementation nearly free. Now you need more reviewers than doers. Or more precisely, you need the same people to shift from doing to reviewing, and reviewing is a fundamentally different skill. It’s not just slower doing. It’s a different cognitive mode.
I’ve spent twenty years in consulting watching organizations struggle with exactly this kind of role shift. When you tell someone their job has changed, they hear “your job is going away.” And sometimes they’re right. But more often, the job hasn’t disappeared — it’s moved up one level of abstraction. You’re not writing code anymore. You’re writing specifications detailed enough that something else can write code from them. You’re not drafting anymore. You’re editing and making structural decisions about medium and audience.
That sounds like a promotion. It feels like a loss.
Because here’s what nobody talks about: implementation is satisfying. There’s a rhythm to writing code, to drafting prose, to building a thing with your hands — even if your hands are on a keyboard. You think, you type, you see the result, you adjust. It’s a feedback loop that operates at human speed. It gives your brain time to process. Time to notice things. Time to wander into adjacent ideas that turn out to be important.
Take that away, and you’re left with pure decision-making. All day. Every decision. No breaks disguised as implementation. No natural pacing.
I don’t think we know yet what that does to people.
There’s another piece of this that’s been nagging at me, and it has to do with trust.
When I write code myself, I know where the boundaries are because I drew them. I know what’s validated and what’s assumed safe because I made those decisions as I went. When I’m reviewing AI-generated code, I’m trying to reconstruct those decisions from the output. It’s like reading someone else’s proof — you can follow the logic, but you don’t have the intuition that produced it.
This weekend I caught a security issue where the system was trusting admin-submitted HTML without escaping it. The assumption was that sanitization happened upstream, in the admin UI. Fine — unless someone ever hits that API directly. Then it’s an XSS vector. I also found missing SSRF protection in a feed discovery feature, and a failure to reject javascript: URIs in email links.
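The SSRF gap is worth a sketch. A feed-discovery feature fetches URLs the user supplies, so it has to refuse anything that resolves to internal address space. This is a minimal, assumption-laden version using only the standard library — the real feature's code isn't in front of you, and mine isn't it:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_feed_url(url: str) -> bool:
    # Basic SSRF guard for a URL fetcher: http(s) only, and the
    # hostname must not resolve to private, loopback, link-local,
    # or reserved address space.
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```

Note that this checks every address the name resolves to, not just the first — a hostname that resolves to both a public and a private address should still be rejected.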
These aren’t obscure vulnerabilities. They’re standard security hygiene. But they require you to think about trust boundaries — where does trusted input end and untrusted input begin? When you’re writing the code, you think about that naturally. When you’re reviewing diffs, you have to actively look for it. You have to ask: what assumptions is this code making, and are those assumptions safe?
That’s a skill. A specific, trainable, important skill. And almost nobody is teaching it, because until recently, it wasn’t a separate skill. It was embedded in the act of writing code.
So where does this leave us?
I think we’re going to need something I’d call artificial pacing. Deliberate slowdowns built into fast processes. Not because we can’t go faster, but because human cognition needs rhythm. Needs time to notice what’s missing. Needs the space that implementation used to provide for free.
The old bottleneck — slow implementation — was annoying, but it was also protective. It gave you time to think. It forced review to happen incrementally, naturally, as part of the work. Remove that bottleneck and you don’t just get speed. You get a new failure mode: decisions made faster than they can be verified.
And the debt compounds. Every manual step you skip, every verification you defer, every trust boundary you don’t document — it piles up. Not linearly. Combinatorially. Because each new feature interacts with every previous feature, and if you didn’t verify the previous ones thoroughly, the interactions are unknown.
You’re not just moving fast. You’re accumulating uncertainty.
Here’s what I keep coming back to. The shift isn’t from manual work to automated work. The shift is from building to specifying and inspecting. From doing to deciding. And deciding is exhausting in a way that doing never was, because doing gave you natural rest periods disguised as work.
If you’re leading a team right now, or managing your own workflow, pay attention to this. The constraint isn’t your tools. The constraint is your capacity for sustained, high-quality judgment — without the pacing that manual work used to provide.
Nobody’s going to solve that for you. You have to build the rhythm yourself.