Your process was built for a different speed
When work changes velocity, governance systems don't just fall behind. They become theater. And theater is worse than nothing—it gives you the feeling of control without any of the substance.
Two of my projects — completely independent, different codebases, different purposes — hit the exact same failure this week. Both had their database change tracking fall out of sync. Both teams routed around the problem the same way: apply the changes directly, skip the system that’s supposed to track them, add guardrails after the fact. Neither knew the other had done it.
The reason was identical in both cases. The work was moving faster than the tracking system was designed to handle. When you’re shipping nineteen changes overnight through automated pipelines, the migration file becomes a bottleneck. And people route around bottlenecks. That’s not a character flaw. That’s physics.
Governance at a tempo it wasn’t designed for
If you’ve been inside a large organization during any kind of transformation — digital, operational, cultural — you’ve seen this pattern. The compliance framework was built for quarterly releases. The team now ships weekly. So people start doing things off-book. Not because they’re reckless. Because the official process literally cannot absorb the pace.
They document in Slack instead of Jira. They build their own tracking spreadsheets. They add peer reviews that aren’t in the official checklist but actually work better. Shadow systems. Functional, fast, invisible to anyone looking at the org chart.
And for a while, this works fine. The shadow process is often better than the official one — it was built by the people doing the work, for the work they’re actually doing, at the speed they’re actually moving. The official process was built by someone three reorg cycles ago for a pace that no longer exists.
The problem is what happens next. When the shadow process fails — and it will, because nobody designed it to be durable — there’s no fallback. The official process everyone was supposed to be following? Nobody followed it. Nobody remembers how. The governance didn’t degrade gradually. It evaporated. And you don’t find out until something breaks.
The question nobody’s asking
Most organizations responding to AI adoption are focused on tools. Which models, which platforms, which workflows. Reasonable questions. But the question that matters more is one almost nobody’s asking: which of your governance structures survive a 10x change in work velocity?
Not “do we have governance.” You have governance. You have change management processes and approval workflows and compliance checklists and architecture review boards. The question is whether any of that was designed for the pace you’re about to operate at.
Some governance structures are tempo-independent. Code review, for instance. It takes about as long to review a change whether you generated it in five minutes or five days. The ratio of review effort to generation effort shifts, but the review step itself still does its job at any pace.
Others are deeply tempo-dependent. Weekly status meetings made sense when a week’s worth of changes fit in someone’s head. When the pipeline ships forty items between standups, the meeting becomes a performance. Nobody can actually report what happened. They summarize, which means they editorialize, which means the governance function — “does everyone understand what’s changing?” — quietly disappears. The meeting still happens. The governance doesn’t.
That’s the distinction. Not whether your process exists, but whether it still functions at the speed you’re now working.
When tools cross a line nobody drew
There’s a related pattern that’s more personal, and I think it applies broadly.
I’ve been building a content pipeline — the system that takes my daily work, turns it into reflections, generates audio, and publishes it. As of this week, it touches three codebases, calls an external API for voice synthesis, and runs a pre-commit hook that blocks me from saving my work if the audio service is down.
Read that again. An external service I don’t control can prevent me from committing my own work at 11pm.
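The hook itself is simple, which is part of how it slipped into being load-bearing. Here is a minimal sketch of the pattern, not my actual hook: a pre-commit script that pings a health endpoint and refuses the commit when the service doesn't answer. The URL and timeout are hypothetical.

```python
import sys
import urllib.request
import urllib.error

# Hypothetical health endpoint for the voice-synthesis API.
HEALTH_URL = "https://api.example-voice.com/health"

def service_reachable(url: str, timeout: float = 3.0) -> bool:
    """True if the service answers with an HTTP 2xx within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError, ValueError):
        return False

def main() -> int:
    """Entry point for .git/hooks/pre-commit.

    Git aborts the commit on any non-zero exit status, so an outage
    at the remote service becomes a block on local work.
    """
    if not service_reachable(HEALTH_URL):
        print("audio service unreachable; refusing commit", file=sys.stderr)
        return 1
    return 0

# The hook script would end with: sys.exit(main())
```

Ten lines of logic, and an outage on someone else's server becomes a veto over my repository. Nothing in the code announces that it's infrastructure.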
I built this. It works well. And I’ve also crossed a line that nobody drew. This isn’t a script anymore. It’s infrastructure. And I didn’t decide it should be infrastructure. It just got there, one useful addition at a time.
You know this pattern. Someone builds a spreadsheet to track a process. It works great. Other people start depending on it. Someone adds a macro. Now it’s load-bearing. Now it’s infrastructure. But nobody treats it that way — no backup, no documentation, no fallback plan — because in everyone’s mind it’s still “just a spreadsheet.”
The discipline isn’t in building the thing. It’s in recognizing when the thing has changed categories. When your “quick automation” is now blocking commits. When your “temporary workaround” has been in production for six months. When your “experiment” has users.
This happens faster with AI-accelerated workflows because the tools get powerful faster. A script that took two weeks to write, you’d probably notice when it became critical. A script that took twenty minutes to generate? That one sneaks into your infrastructure before you’ve thought about what happens when it fails.
What breaks isn’t what you expect
The instinct is to think about catastrophic failures — the system goes down, data gets corrupted, something visibly breaks. But that’s not what usually happens when governance erodes. What happens is quieter and worse.
People stop trusting the process. Not dramatically — they don’t announce it. They just start adding their own checks. Their own workarounds. Their own shadow systems. The official process becomes something you perform for compliance, not something you rely on for quality. The governance becomes theater.
Theater is worse than having no process at all. With no process, at least you know you're exposed. Theater gives you the feeling of control without any of the substance. You look at your dashboard and see green. You look at your checklist and see checkmarks. Everything looks governed. Nothing actually is.
The organizations that will navigate this well aren’t the ones with the best AI tools or the most aggressive adoption timelines. They’re the ones willing to look at their own processes and ask honestly: is this still governing anything, or is it just a habit we perform? What was this built for? Does that still exist?
Those aren’t comfortable questions. They’re the right ones.