Paul Welty, PhD
AI, WORK, AND STAYING HUMAN

Dev reflection - February 08, 2026

Duration: 5:55 | Size: 5.42 MB


Hey, it’s Paul. Saturday, February 8th, 2026.

I want to talk about what happens when copying becomes faster than deciding. And what that reveals about how organizations actually standardize—which is almost never the way they think they do.

Here’s the setup. I’ve got six different projects, all at different stages, all doing different things. And yesterday, they all added the same analytics infrastructure. Same tool, same implementation pattern, same instrumentation approach. Six projects, one day, identical solution.

Now, the obvious explanation is coordination. Someone decided “we’re using PostHog,” wrote a memo, and everyone complied. But that’s not what happened. What happened is simpler and more interesting: one project integrated it, it worked, and then copying that implementation became faster than evaluating alternatives. No mandate. No architecture review. Just velocity optimization that happened to produce uniformity.

This is how standardization actually works in most organizations. Not through governance, but through gravity. The path of least resistance becomes the standard because resistance is expensive and people are busy. The interesting question isn’t whether this is good or bad—it’s what it reveals about decision-making under time pressure.

When copying is faster than deciding, you’re not really making a choice. You’re inheriting one. And inherited choices carry inherited assumptions. The analytics implementation works, but does it capture the right events? Does it answer the questions that matter for each project? Nobody asked, because asking would have been slower than copying.

This is the trade every organization makes when it optimizes for speed. You’re not eliminating decisions—you’re concentrating them. One person decides once, and that decision echoes through everything that follows. Which is fine when the decision is right. But when it’s wrong, you’ve locked in the mistake at scale.


Second thing I’ve been noticing: the distance between prototype and product has collapsed into something you can execute in a day. Not because the work got easier, but because the work got predictable.

One project shipped eight commits yesterday closing ten issues: billing integration, marketing site, demo signup, transactional email, content publishing. Another matched that—eleven commits, fourteen issues, from payment processing through data import to a full test suite. These aren’t rough prototypes. They’re architecturally complete applications with multi-tenant billing, error tracking, and marketing funnels.

The work logs don’t read like discovery. They read like execution of a known template.

And here’s what that means for how we think about expertise. Traditional expertise is knowing what to do when you don’t know what to do. It’s judgment under uncertainty. But when the path from idea to product becomes a checklist, expertise shifts. It becomes knowing which checklist applies and when to deviate from it.

This is the future of a lot of knowledge work. Not the elimination of skill, but the compression of it. The hard part isn’t building the billing system—it’s recognizing that this situation calls for the standard billing pattern versus something custom. The hard part isn’t writing the marketing copy—it’s knowing when the template fits and when it doesn’t.

The risk is obvious: when you’re executing a template, you stop questioning whether the template is right. One project is running production data in development—not because someone made a bad call, but because that’s how the template was set up, and templates don’t get questioned, they get applied. Speed creates consistency debt. You’re moving fast, but you’re also accumulating assumptions you haven’t examined.


Third observation: the tooling is learning from how the work actually happens, and then rebuilding itself to match.

Yesterday, a migration script moved thirty-three issues and twenty-two milestones from one project management system to another. Then the configuration got consolidated—sixty-one per-project files collapsed into one global config. The tools are watching the workflows and adapting.
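The consolidation step can be sketched in a few lines. This is not the actual script, just a hedged illustration of the shape of the work: fold many per-project config dicts into one global config, keeping only the keys where projects genuinely diverge as per-project overrides.

```python
def consolidate(per_project: dict[str, dict]) -> dict:
    """Collapse per-project configs into {'global': shared, 'overrides': diffs}.

    A key is "shared" only when every project defines it with the same value;
    everything else stays as a per-project override.
    """
    all_keys = set().union(*(cfg.keys() for cfg in per_project.values()))
    shared = {
        k: next(iter(per_project.values()))[k]
        for k in all_keys
        # repr(object()) is unique per call, so a missing key never matches
        if len({repr(cfg.get(k, object())) for cfg in per_project.values()}) == 1
    }
    overrides = {
        name: {k: v for k, v in cfg.items() if k not in shared}
        for name, cfg in per_project.items()
    }
    return {"global": shared, "overrides": {n: o for n, o in overrides.items() if o}}
```

Note what this encodes: the consolidation itself is an opinion. Whatever the majority of projects happened to do becomes "global," and divergence becomes an exception to be justified.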

This sounds like automation, but it’s something subtler. It’s the infrastructure developing opinions about how work should flow. The tooling isn’t neutral—it encodes assumptions about what matters, what comes first, what connects to what.

Every organization has this, whether they recognize it or not. The way your tools are configured shapes the work that’s easy to do and the work that’s hard. When the tooling learns from observed behavior and optimizes for it, you get a feedback loop: the way you work shapes the tools, which shapes the way you work, which shapes the tools.

The question is who’s steering. If nobody’s consciously deciding which workflows are worth codifying versus which are one-off adaptations, you end up with infrastructure that optimizes for how things happened to go, not how they should go. The tools become a fossil record of past decisions, and that fossil record constrains future possibilities.


Last thing. Feature work and infrastructure work are happening simultaneously now—in the same session, sometimes in the same commit. Rails upgrades alongside new AI integrations. Analytics instrumentation alongside content publishing. This used to be separated: refactoring sprints, then feature sprints. Now they’re interleaved.

That only works if your tests catch problems immediately. And mostly they do—test suites are green, QA checks pass, deployments succeed. But one project notes that “full system test suite flakiness may mask real regressions.” Which is the tax on speed: when tests run fast and infrastructure changes frequently, flaky tests become the unknown risk you’re trading for velocity.
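The masking mechanism is simple enough to sketch. A common retry policy, shown here as a hedged illustration rather than any particular test runner's behavior, reruns a failing test and reports "flaky" if any retry passes:

```python
def run_with_retries(test, attempts: int = 3) -> str:
    """Rerun a failing test; report 'flaky' if any retry passes.

    The masking risk: a test that fails intermittently because of a real
    regression is indistinguishable, under this policy, from one that fails
    because of timing or test-order dependence. Both get labeled 'flaky'.
    """
    for i in range(attempts):
        if test():
            return "pass" if i == 0 else "flaky"
    return "fail"
```

Every "flaky" verdict is a coin with two faces: an unreliable test, or an unreliable system. The retry policy can't tell you which, and that ambiguity is exactly the tax the work log is naming.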

This is the bargain every fast-moving team makes. You can interleave infrastructure and feature work if your safety nets are reliable. But reliability is expensive to maintain, and the faster you move, the less time you have to maintain it. Eventually the nets develop holes. And you won’t know where the holes are until something falls through.


So here’s the question I’m sitting with: when infrastructure becomes a template and templates become the default, how do you preserve the capacity to question whether the template fits? Copying is faster than deciding. But deciding is how you learn whether you’re solving the right problem.

The velocity is real. The risk is that you’re moving fast in a direction nobody consciously chose.


Featured writing

Why customer tools are organized wrong

This article reveals a fundamental flaw in how customer support tools are designed—organizing by interaction type instead of by customer—and explains why this fragmentation wastes time and obscures the full picture you need to help users effectively.

Infrastructure shapes thought

The tools you build determine what kinds of thinking become possible. On infrastructure, friction, and building deliberately for thought rather than just throughput.

Server-Side Dashboard Architecture: Why Moving Data Fetching Off the Browser Changes Everything

How choosing server-side rendering solved security, CORS, and credential management problems I didn't know I had.

Books

The Work of Being (in progress)

A book on AI, judgment, and staying human at work.

The Practice of Work (in progress)

Practical essays on how work actually gets done.

Recent writing

Dev reflection - February 07, 2026

I've been thinking about friction. Not the dramatic kind—not the system crash, not the project that fails spectacularly. I mean the quiet kind. The accumulation of small things that don't quite wor...

Dev reflection - February 06, 2026

I want to talk about the difference between a system that works and a system that's ready. These aren't the same thing. The gap between them is where most projects stall out—not from failure, but f...

Dev reflection - February 05, 2026

I want to talk about something I keep running into: the moment when you realize the outside of something no longer matches the inside. And what that actually costs.
