Paul Welty, PhD
AI, WORK, AND STAYING HUMAN

· essays

When your work moves faster than your rules can keep up, governance quietly becomes theater

Duration: 7:28 | Size: 6.8 MB


Hey, it’s Paul. Friday, February 27, 2026.

I want to talk about something that happened this week that looks like a technical problem but is actually a management problem. And I think it maps onto something most organizations are going to face in the next couple of years, whether they use AI or not.

Here’s the setup. I run a portfolio of software projects. Some of them share infrastructure, some don’t. Two of them — completely independently, with no coordination between them — hit the exact same failure this week. Both projects had their database migration tracking fall out of sync. Both teams routed around the problem the same way: they applied changes directly, bypassing the system that’s supposed to track those changes. Both added safety guards after the fact. Neither told the other.

Now, if you’re not a developer, database migrations are basically the record of every structural change you’ve made to your data. They’re supposed to be sequential, tracked, reproducible. When they fall out of sync, you’re flying with an outdated map. You might be fine. You might also overwrite something critical the next time you try to update.

But here’s what’s actually interesting. The reason both projects fell out of sync is the same: the work is moving faster than the tracking system was designed to handle. When you’re shipping nineteen changes overnight through automated pipelines, or grinding through six parallel workstreams in a single session, the migration file becomes a bottleneck. And people — and AI agents — route around bottlenecks. Always. That’s not a bug in human nature. That’s a feature.

The governance system assumed a certain tempo of work. The tempo changed. The governance didn’t.

If you’ve ever worked inside a large organization during a transformation — digital, operational, cultural, whatever — you’ve seen this exact pattern. The compliance framework was built for quarterly releases. Now the team ships weekly. So people start doing things off-book. Not because they’re reckless, but because the official process literally cannot absorb the pace. They add their own safety checks. They document in Slack instead of Jira. They build shadow systems.

And for a while, it works. Until it doesn’t.

The question I’m sitting with isn’t “how do I fix my migration tracking.” That’s solvable. The question is: when the velocity of your work changes by an order of magnitude, which governance structures survive and which ones quietly become theater? Because most organizations are about to find out.


There’s a related pattern I’ve been watching, and it has to do with what happens when your production system runs dry.

One of my projects had a great day this week. Shipped a key deliverable, closed out a milestone. And then hit a wall — not because anything broke, but because there was nothing left to do. Zero open issues. Empty backlog. No queued work for the next session. The entire plan for the following day was “figure out what to work on.”

Meanwhile, two other projects were humming. They had well-specified issues ready to go, generated through a scouting process that identifies what needs attention next. Those projects ground through their backlogs efficiently. The constraint was never execution speed. It was having the right work defined and ready.
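The scouting idea reduces to a simple readiness check you could run before any session starts: is there enough well-specified work queued to cover the time you have? Everything here is a hypothetical sketch — the issue fields, the threshold, and the four-hour session are assumptions, not my actual pipeline.

```python
# Hypothetical sketch of a backlog "fuel gauge": before a work session,
# verify that the ready, well-specified work covers the session length.
# Field names and the default threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Issue:
    title: str
    has_spec: bool        # acceptance criteria actually written down
    estimate_minutes: int

def session_is_fueled(backlog, session_minutes=240):
    """True if well-specified work covers the planned session time."""
    ready = [issue for issue in backlog if issue.has_spec]
    return sum(issue.estimate_minutes for issue in ready) >= session_minutes
```

The point of a check like this is when it runs: the day before, not the morning of. A failing gauge tonight is a planning task; a failing gauge at 9am is a wasted morning.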

This is one of those things that’s obvious once you see it and invisible until you do. We spend enormous energy optimizing how fast we execute. Faster pipelines, better tools, more parallel capacity. But the actual bottleneck in most knowledge work isn’t execution. It’s knowing what to do next — and having that answer ready before you need it.

Think about your own work for a second. How often have you started a Monday morning, or a new sprint, or a planning session, and spent the first meaningful chunk of time just figuring out what the priorities are? Not doing the work. Not even deciding between options. Just generating the options in the first place.

In manufacturing, this is a solved problem. You don’t wait until the assembly line is idle to order parts. You have inventory management, demand forecasting, supply chain logistics — all designed to ensure materials arrive before they’re needed. But in knowledge work, we treat the “what should we build next” question as something that can be answered on the fly. Ad hoc. In the meeting. On the whiteboard.

That works when execution is slow. When it takes two weeks to ship a feature, you have two weeks to figure out what comes after it. But when execution compresses to hours — when a well-specified issue goes from “open” to “merged” in forty-five minutes — the planning cadence has to compress too. Otherwise you have an incredibly powerful engine with no fuel in the tank.

The project that stalled this week didn’t have a productivity problem. It had a pipeline problem. And I’d bet most teams adopting AI-accelerated workflows are going to discover the same thing: the tools are fast, but the work definition process is still running at human speed. That mismatch is where the waste lives.


One more thing, and this one’s more personal.

I’ve been building a content pipeline — the system that takes my daily work, turns it into reflections, generates audio, and publishes it. As of this week, it touches three separate codebases, calls an external API for voice synthesis, authenticates against a different project’s database to push content ideas, and runs a pre-commit hook that will block me from saving my work if the audio service is down.

Read that last part again. An external service I don’t control can prevent me from committing my own work log at 11pm.
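Here is a sketch of the decision that hook makes, in hypothetical form — the URL, timeout, and `strict` flag are my illustrative assumptions, not the actual hook. The interesting part is how small the difference is between "blocks your commit at 11pm" and "warns and lets you through."

```python
# Hypothetical sketch of a pre-commit hook's core decision: should an
# unreachable external service block the commit? As written with
# strict=True, it does -- exactly the failure mode described above.
import urllib.error
import urllib.request

def audio_service_up(url, timeout=5):
    """Best-effort health check against an external service."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False

def hook_exit_code(service_up, strict=True):
    """0 lets the commit through; 1 blocks it (git hook convention)."""
    if service_up:
        return 0
    if strict:
        print("pre-commit: audio service down; blocking commit")
        return 1
    print("pre-commit: audio service down; committing anyway, audio will retry")
    return 0
```

One boolean is the difference between a convenience and a hard dependency. The hook I actually built chose the strict branch, which is how an external API earned veto power over my commits.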

I built this. I’m proud of it. And I’m also aware that I’ve crossed a line. This isn’t a script anymore. It’s a system. And systems have failure modes that scripts don’t. Scripts fail and you shrug and run them again. Systems fail and other systems notice and cascade.

There’s a pattern here that I think applies well beyond software. When you’re building something new — a process, a workflow, a way of operating — there’s a moment where it crosses from “clever hack” to “infrastructure.” And that crossing point is almost never announced. You don’t get a notification that says “congratulations, this thing you built for convenience is now a critical dependency.” You find out when it breaks at the worst possible time.

I see this in organizations constantly. Someone builds a spreadsheet to track a process. It works great. Other people start depending on it. Someone builds a macro on top of it. Now it’s load-bearing. Now it’s infrastructure. But nobody treats it that way — no backup, no documentation, no fallback plan — because in everyone’s mind it’s still “just a spreadsheet.”

The discipline isn’t in building the thing. The discipline is in recognizing when the thing has changed categories. When your “quick automation” is now blocking commits. When your “temporary workaround” has been in production for six months. When your “experiment” has users.

You have to look at your own systems — not just the software ones, the human ones too — and ask honestly: what started as a convenience and is now a dependency? What breaks if it goes down? And does anyone besides you know how it works?


So here’s where I’ll leave it. Three patterns from one week: governance systems that can’t keep pace with the work they’re supposed to govern. Execution capacity that outruns the ability to define what to execute. And tools that quietly cross from optional to critical without anyone deciding they should.

None of these are technology problems. They’re management problems, organizational problems, human problems. The technology just makes them visible faster.

The window for getting ahead of this is smaller than you think. Not because AI is moving fast — though it is — but because these patterns compound. An out-of-sync governance system doesn’t just cause one failure. It trains everyone to ignore governance entirely. An empty backlog doesn’t just waste one day. It breaks the rhythm that makes the next ten days productive. A brittle dependency doesn’t just fail once. It fails at the moment you can least afford it.

You’re building these systems right now, whether you know it or not. The question is whether you’re going to notice before they break.

Why customer tools are organized wrong

This article reveals a fundamental flaw in how customer support tools are designed—organizing by interaction type instead of by customer—and explains why this fragmentation wastes time and obscures the full picture you need to help users effectively.

Infrastructure shapes thought

The tools you build determine what kinds of thinking become possible. On infrastructure, friction, and building deliberately for thought rather than just throughput.

Server-side dashboard architecture: Why moving data fetching off the browser changes everything

How choosing server-side rendering solved security, CORS, and credential management problems I didn't know I had.

The work of being available now

A book on AI, judgment, and staying human at work.

The practice of work in progress

Practical essays on how work actually gets done.

The silence that ships

Three projects independently discovered the same bug pattern today — code that reports success when something important didn't happen. The most dangerous failures don't look like failures at all.

Junior engineers didn't become profitable overnight. The work did.

We've been celebrating that AI made junior engineers profitable. That's not what happened. AI made it economically viable to give them access to work that actually builds judgment, work we always knew

Three projects, three opposite methods, all monster output days: what that taught me about when process helps and when it's just comfort

I've been running a portfolio of software projects using a mix of autonomous AI pipelines and human-led parallel agent sessions. Yesterday, three different projects had monster output days — and th...

What happens when the pipeline doesn't need you

So here's something I noticed today that I want to sit with. I run several projects that use autonomous pipelines — AI systems that pick up tasks, write code, open pull requests, ship changes. One ...