The bottleneck moved
The constraint in knowledge work used to be execution. More ideas than you could build. More strategy than you could implement. That’s over. The bottleneck moved. And it moved to you.
The machine was waiting on me
I run multiple projects simultaneously, and I’ve built tooling that lets AI agents execute well-defined tasks autonomously. Create an issue, specify what needs to happen, hand it off. The agent builds it, ships it, moves to the next one.
Last week, across one project, an agent shipped twenty-eight issues in a single session. Twenty-eight discrete units of work, no human intervention. Meanwhile, in a separate conversation, I was designing an entirely new system architecture and creating seven more issues for future execution.
Two parallel tracks. One executing. One designing.
The execution track finished before the design track had enough new work ready for it.
The machine was waiting on me.
If you’ve spent time around manufacturing or operations, you recognize this immediately. Eliyahu Goldratt wrote about it in 1984 — The Goal — and the core insight is deceptively simple: any system’s throughput is limited by its single tightest constraint, and improving anything that isn’t the constraint is an illusion of progress.
For decades in knowledge work, the constraint was execution capacity. That’s over. The constraint has flipped. The bottleneck isn’t building — it’s specifying. Not “what should we build” in some grand strategic sense, but the granular, unglamorous work of defining exactly what done looks like for each unit of work. Clear inputs. Clear outputs. Clear acceptance criteria. The kind of specification that most organizations treat as overhead, as bureaucracy, as the thing you rush through to get to the “real work.”
Turns out, that is the real work now.
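Goldratt's insight fits in a few lines of code. This is a toy model with made-up capacities, not numbers from the projects above: throughput is the minimum stage capacity, and raising any non-constraint stage changes nothing.

```python
# Toy model of the Theory of Constraints: a pipeline ships only as fast
# as its tightest stage. Capacities are illustrative, in units per week.

def throughput(stage_capacities: dict[str, int]) -> int:
    """Units the whole pipeline can ship: the minimum stage capacity."""
    return min(stage_capacities.values())

pipeline = {"specify": 7, "build": 28, "review": 20}
print(throughput(pipeline))   # 7: specification is the constraint

# Doubling execution capacity is "an illusion of progress":
pipeline["build"] = 56
print(throughput(pipeline))   # still 7

# Only improving the constraint moves throughput:
pipeline["specify"] = 14
print(throughput(pipeline))   # 14
```

The point of the sketch is the second call: a bigger `build` number looks like an improvement and changes nothing the system actually ships.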
Specification completeness is the throughput variable
Across three projects last week, I saw three completely different states of readiness. One had twenty-eight issues prepped and ready — the agent chewed through all of them. Another had zero grindable issues. The entire session was planning and design. A third had carry-over issues from previous sessions that still weren’t ready, so nothing shipped.
Same tooling. Same agent capabilities. Completely different throughput.
The difference wasn’t technical complexity or domain difficulty. It was specification completeness. The project that shipped twenty-eight issues had invested in breaking work into self-contained packets — each one with enough context that an agent could execute it without asking questions. The project that shipped nothing had descriptions like “follow the same pattern as this other system” without documenting what that pattern actually is.
“Follow the same pattern” is a perfectly fine instruction for a human colleague who’s been on the team for six months. They have ambient context. They’ve seen the pattern. They can fill in gaps with judgment.
An autonomous agent has none of that. It needs the pattern made explicit. Every assumption surfaced. Every decision pre-made.
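One concrete way to force every assumption to the surface is to make the issue itself a structured object that refuses to be queued until it is complete. A minimal sketch; the field names are hypothetical, not taken from any real issue tracker:

```python
# A work packet that an agent can execute without asking questions.
# "Grindable" here means no field is left to ambient human context.

from dataclasses import dataclass

@dataclass
class WorkPacket:
    title: str
    context: str            # the "pattern" written out, not pointed at
    inputs: list[str]       # what the agent starts from
    outputs: list[str]      # what must exist when it's done
    acceptance: list[str]   # how "done" is verified

    def is_grindable(self) -> bool:
        """Ready for autonomous execution only if nothing is implicit."""
        return all([self.title, self.context, self.inputs,
                    self.outputs, self.acceptance])

vague = WorkPacket("Add caching", "follow the same pattern", [], [], [])
print(vague.is_grindable())   # False: a colleague could wing it, an agent can't
```

A gate like this is crude, but it makes the specification debt visible before the agent hits it instead of after.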
This is where organizations are going to struggle. Most organizations are terrible at making implicit knowledge explicit. They run on tribal knowledge, on “you know what I mean,” on the accumulated context that lives in people’s heads. That worked fine when execution required humans anyway — the same humans who held the context. When execution gets handed to systems with no institutional memory, no hallway conversations, no six months of osmosis, every gap in specification becomes a failure point.
The companies that will move fastest aren’t the ones with the best AI tools. They’re the ones that are best at writing down what they actually mean.
Design and supervision destroy each other
There’s a second problem that matters more than the specification gap.
When I ran both tracks — execution and design — in parallel, I discovered something about cognitive architecture. Design work requires expansive thinking. You’re holding multiple possibilities, weighing tradeoffs, making architectural decisions that constrain everything downstream. It needs uninterrupted space.
Execution supervision is the opposite. Reactive. Monitoring. Did that task complete? Did it commit to the right branch? Did it break anything? Important, but interruptive by nature.
When those two modes lived in the same mental context, the design thinking got destroyed. Not because the supervision was hard, but because it was frequent. Every small interruption — check this, approve that, handle this error — fractured the sustained attention that architectural thinking requires.
Cal Newport has written about this — the distinction between deep work and shallow work, and how context switching between them degrades both. But what’s new here is that the shallow work isn’t email or Slack notifications. It’s supervising the machine that’s doing your work for you. The very tool that’s supposed to free up your attention creates a new demand on it.
If you’re a manager, think about where this goes. Your team increasingly comprises AI agents executing well-specified work. Your job shifts from “help people do the work” to “specify the work precisely enough that agents can do it, then supervise the agents doing it.” Those two activities — specifying and supervising — fight each other for the same cognitive resources.
The answer isn’t to do both at once. It’s to architect your workflow so they don’t overlap. Separate the thinking from the monitoring. Give each one its own space, its own time, its own context.
This sounds obvious. Almost no one does it.
The supervision paradox
One more piece. Last week, the execution track committed code directly to the main branch several times despite being configured not to. A known bug. Didn’t matter much when a human was watching every commit. Matters a lot when the agent runs unsupervised for hours.
This is the core paradox. The whole point of autonomous execution is that you don’t have to watch it. But the less you watch, the more dangerous configuration errors become. A small bug that’s trivial to catch when you’re paying attention becomes a production risk when you’re not.
This isn’t unique to software. This is the story of every automated system humans have ever built. Airline autopilot. Self-driving cars. Automated trading systems. The automation works well enough that humans stop paying close attention, and when something goes wrong, the human is out of the loop and slow to respond. The FAA calls it automation complacency. Lisanne Bainbridge wrote about it in 1983 — “the ironies of automation” — and her core observation still holds: the more reliable the automation, the less prepared the human supervisor is to handle its failures.
Every organization adopting AI agents needs to answer this: what does supervision look like when the agent is better than you at the task but worse than you at knowing when it’s gone off the rails?
You can’t watch everything — that defeats the purpose. You can’t watch nothing — that’s reckless. You need intermittent, high-signal supervision. Checkpoints, not continuous monitoring. Audit trails, not real-time observation. Trust but verify, on a schedule.
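What "checkpoints, not continuous monitoring" might look like, as a rough sketch: the agent appends to an audit trail while it runs, and review happens only at checkpoint boundaries. The task names and the "landed on main" check are invented for illustration.

```python
# Intermittent, high-signal supervision: review the audit trail every N
# tasks instead of watching every commit. Agent work is stubbed out here.

audit_log: list[dict] = []

def record(event: str, **detail) -> None:
    audit_log.append({"event": event, **detail})

def run_with_checkpoints(tasks: list[str], checkpoint_every: int) -> list[str]:
    """Execute tasks unattended; surface violations at checkpoint boundaries."""
    flagged: list[str] = []
    for i, task in enumerate(tasks, start=1):
        record("completed", task=task)   # real agent execution would go here
        if i % checkpoint_every == 0:
            # One high-signal check, e.g. "did anything land on main?"
            bad = [e for e in audit_log if e.get("branch") == "main"]
            flagged.extend(e["task"] for e in bad)
    return flagged

run_with_checkpoints([f"issue-{n}" for n in range(1, 29)], checkpoint_every=7)
print(len(audit_log))   # 28 completed entries, reviewed 4 times, not 28
```

The ratio is the whole design question: 28 units of work, four moments of human attention, and a check chosen to catch exactly the failure mode that bit you last time.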
Nobody has figured this out yet. Not for AI coding agents, not for AI customer service, not for AI anything. The organizations that figure it out first will have a massive advantage — not because their agents are better, but because their supervision architecture lets the agents actually run.
Where this leaves you
The constraint moved. It used to be execution. Now it’s specification and supervision. And those two things — defining work precisely enough for autonomous execution, and monitoring that execution without destroying your ability to think — are in tension with each other. They compete for the same scarce resource: your focused attention.
If you’re a knowledge worker, a manager, a leader — this is coming for you. Not the question of whether AI can do your job. The question of whether you can do the new job that AI creates: the job of thinking clearly enough to direct machines, and watching carefully enough to catch them when they drift.
That’s not a technical skill. That’s a human one.
And right now, almost nobody is practicing it.