Dev reflection - February 11, 2026


Hey, it’s Paul. Wednesday, February 11, 2026.

So here’s something I’ve been sitting with today. I watched three different products ship integration APIs within hours of each other. Same basic problem—let external systems send data in. Three completely different implementations. And the interesting part isn’t that teams made different choices. It’s that they didn’t really make choices at all. The infrastructure they’d picked months ago made the choices for them.

One product built a REST endpoint with SHA-256 hashing and bearer tokens and scoped permissions—security-first design that looks like best practices on paper. Another product just added a Rails controller action because the monolith was already there handling auth and database access in the same process. The third product went through three iterations in a single session—webhooks, then API v1 with bearer tokens, then just API keys after realizing they didn’t need half of what they’d built.
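None of that code appears here, but to make the contrast concrete, here's a minimal sketch of the two extremes: hashed, scoped bearer tokens on one side, a single flat API key on the other. Everything in it is illustrative—the token values, the scope names, the functions—none of it is the actual code from those three products.

```python
import hashlib
import hmac

# Approach 1: scoped bearer tokens, stored only as SHA-256 hashes.
TOKEN_HASHES = {  # hash -> granted scopes
    hashlib.sha256(b"secret-token").hexdigest(): {"ingest:write"},
}

def verify_bearer_token(auth_header: str, required_scope: str) -> bool:
    """Check a 'Bearer <token>' header against hashed tokens and their scopes."""
    if not auth_header.startswith("Bearer "):
        return False
    token = auth_header.removeprefix("Bearer ")
    digest = hashlib.sha256(token.encode()).hexdigest()
    return required_scope in TOKEN_HASHES.get(digest, set())

# Approach 3, final iteration: one flat API key. Far less machinery,
# and for a single integration, often all that's needed.
API_KEY = "k-12345"

def verify_api_key(provided: str) -> bool:
    # Constant-time comparison; one secret, no scopes, no token store.
    return hmac.compare_digest(provided, API_KEY)

if __name__ == "__main__":
    print(verify_bearer_token("Bearer secret-token", "ingest:write"))  # True
    print(verify_api_key("k-12345"))                                   # True
```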

Here’s what this reveals: infrastructure lock-in only becomes visible at integration boundaries. You pick a deployment architecture for reasons that seem good at the time—performance, familiarity, whatever. Then months later you need to connect that system to something else, and suddenly you discover all these downstream effects on authentication patterns, on testing workflows, on deployment complexity. Effects that were invisible until you tried to cross the boundary.

This isn’t a technical problem. This is how organizations work. The decisions that constrain you most aren’t the ones you agonize over. They’re the ones you made quickly, early, when you didn’t know enough to see the implications. And by the time you see them, you’re not deciding anymore. You’re discovering what you already committed to.


Second thing I noticed. There’s this pattern with automation where simple cases work beautifully and complex cases break in ways that teach you something.

Take email signatures. A git hook fires on every commit, finds the latest published essay, generates HTML signature files. Works perfectly. Same input always produces same output. Read-only, deterministic, stateless.
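If you've never written one, a post-commit hook is just a script git runs after each commit. Here's a rough sketch of the stateless version, with made-up paths and a naive way of finding the latest essay; the real script differs, but the shape is the point: nothing read from memory, nothing written to it.

```python
from pathlib import Path

ESSAYS = Path("content/essays")          # assumed layout: YYYY-MM-DD-slug.md
SIGNATURE = Path("build/signature.html")

def main() -> None:
    # Lexicographic max works because filenames start with the date.
    latest = max(ESSAYS.glob("*.md"), default=None)
    if latest is None:
        return
    title = latest.stem.split("-", 3)[-1].replace("-", " ").title()
    SIGNATURE.parent.mkdir(parents=True, exist_ok=True)
    # Same input, same output, every time. No state consulted, none kept.
    SIGNATURE.write_text(
        f'<p>Latest essay: <a href="/{latest.stem}/">{title}</a></p>\n'
    )

if __name__ == "__main__":
    main()
```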

Now take social media posts. Same approach—git hook fires on commit, generates content variants. Except now you need to know: have I already queued this post? And git hooks are stateless by design. They fire and forget. They have no memory of what they’ve done before.

You could bolt on state management—store the last-queued commit SHA in a file somewhere. But now you’re building memory on top of a system designed to be memoryless. You’re fighting the tool instead of using it.
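Here's what that bolted-on memory looks like in sketch form, again with illustrative names and a stubbed-out queueing step. Notice how little of it is about the post and how much is about bookkeeping, with two new failure modes hiding in the comments.

```python
import subprocess
from pathlib import Path

STATE_FILE = Path(".git/last-queued-sha")  # the bolted-on memory

def current_sha() -> str:
    result = subprocess.run(
        ["git", "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

def queue_social_posts(sha: str) -> None:
    # Stand-in for the real queueing step.
    print(f"queueing post variants for {sha}")

def main() -> None:
    sha = current_sha()
    last = STATE_FILE.read_text().strip() if STATE_FILE.exists() else None
    if sha == last:
        return  # already queued... unless the state file is stale or hand-edited
    queue_social_posts(sha)
    STATE_FILE.write_text(sha + "\n")  # if this write fails, the next run double-posts

if __name__ == "__main__":
    main()
```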

The question this raises: what’s the half-life of automation? At what complexity threshold does the automated process become more fragile than the manual one it replaced?

I think the answer has to do with whether the automation needs to remember anything. If it’s purely reactive—same trigger, same response, every time—automation works. The moment it needs to track what it’s done, or check whether something already exists, or handle irregular inputs… you’re not automating anymore. You’re building a system. And systems need maintenance, debugging, monitoring. All the overhead you were trying to avoid.

This applies way beyond code. Think about any workflow you’ve automated in your organization. The ones that keep working are the ones that don’t need memory. Expense reports that route to the same approver every time. Calendar invites that always include the same Zoom link. The ones that break are the ones that need context. “Send a reminder if they haven’t responded”—but what counts as a response? “Escalate if it’s been too long”—but too long compared to what?

When the automation breaks at that point, it's still succeeding at something: showing you where stateless logic reaches its boundary. That's not failure. That's the automation doing its job of revealing what kind of problem you actually have.


Third thing. This one’s been bugging me.

Admin tooling—user tables, workspace management, role selectors—converged across four products in about two weeks. Everyone ended up with basically the same patterns because they’re visible. You touch admin pages every session. You see what works and what doesn’t. Shared solutions emerge fast.

Observability—error tracking, analytics, event logging—is still copy-paste-adapt for every single product. Same setup steps. Same configuration gotchas. Same “oh right, you have to configure the CORS proxy” discoveries. But it never standardizes.

Why? It’s not that observability is less important. It’s not that it’s harder. It’s that it happens once during deployment and then disappears from view. You set it up, it works, you forget about it. Until the next product, when you rediscover the same steps from scratch.
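For a sense of what keeps getting retyped, here's a sketch of the kind of setup block involved. Every name in it is invented and the vendor calls are stubbed as prints; the point is that it's a handful of lines plus one gotcha, copied into each product instead of extracted once.

```python
import os

# Invented names throughout; real vendor SDK calls would go here.
PRODUCT = os.environ.get("PRODUCT_NAME", "product-four")
# The gotcha everyone rediscovers: browser events have to go through
# the CORS proxy, not straight to the vendor endpoint.
EVENTS_URL = os.environ.get("CORS_PROXY_URL", "https://proxy.example.com/events")

def init_error_tracking() -> None:
    print(f"[{PRODUCT}] error tracking -> {EVENTS_URL}/errors")

def init_analytics() -> None:
    print(f"[{PRODUCT}] analytics -> {EVENTS_URL}/analytics")

if __name__ == "__main__":
    init_error_tracking()
    init_analytics()
```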

What gets reused isn’t determined by importance or effort invested. It’s determined by visibility and feedback loops.

This explains so much about organizational knowledge. The stuff that spreads is the stuff people see each other doing. Meeting formats. Slide templates. The way people structure emails. You watch, you copy, patterns emerge.

The stuff that stays siloed is the stuff that happens once and disappears. Onboarding checklists. Deployment procedures. The weird workaround someone figured out for that one vendor’s API. Important knowledge, hard-won knowledge, but invisible to everyone who wasn’t there when it happened.

If you want knowledge to spread, you have to make it visible. Not documented—visible. Documentation is where knowledge goes to die. Visibility means people encounter it in the normal course of their work, repeatedly, until the pattern becomes obvious.


So here’s the question I’m left with. How much of what we call “technical debt” or “organizational dysfunction” is actually just… constraints we committed to before we understood them, automation that outgrew its stateless origins, and knowledge that never spread because no one could see it?

And if that’s true, what would it look like to design for discovery instead of decision? To assume you’ll learn the real requirements through implementation, not before it?

I don’t have a clean answer. But I think it starts with paying attention to integration boundaries, automation breakpoints, and invisible knowledge. That’s where the interesting stuff hides.

