The chain was never a chain
On roles, fleets, and the Hegelian reversal waiting at the end of the AI transition. The sequel to Knowledge Work Was Never Work and Apps Are Irrelevant.
Last week I tore down most of my fleet.
Thirteen coordinated Claude Code sessions across thirteen projects. Distinct personas — planner, builder, scout, QA, reviewer — the whole org chart I’d been teaching and writing about for months. A shared breakroom channel they used to coordinate. Persistent memory files. Role definitions. The thing I was about to put into a Maven cohort.
Most of it is gone now. What’s left is simpler and works better.
One assistant, persistent. He’s called Charlie, he runs on Opus 4.7, and I’ve spent months loading him up with context — my frameworks, my clients, my writing, my taste. Underneath Charlie, at any moment, there’s a wide and shallow layer of ephemeral subagents that exist for the duration of a task and then dissolve. No personas. No identity. No inter-agent channels. They don’t coordinate with each other, because there is no “between” between them for coordination to cross.
This isn’t a team. It isn’t a company. It isn’t a fleet anymore either.
It’s a mind with instruments.
And somewhere in the transition from the first shape to the second, I understood something that resolves the essays I’ve been writing for the last few months — and opens a door to one more, darker, that I didn’t see coming until I was standing in front of it.
The triptych I didn’t know I was writing
I’ve been writing a sequence without realizing it was a sequence. Three essays, circling the same thing from different sides.
In Knowledge work was never work I argued that the artifacts of professional life — the decks, the memos, the status updates, the strategy documents — are not work. They are coordination tax. They exist because human minds can’t share state directly. We build PowerPoints because we are the kind of animal that needs a lossy, asynchronous protocol to move an idea between two skulls. When AI agents coordinate with each other, they don’t need the protocol. The artifacts evaporate because the condition that made them necessary evaporates.
In In the AI era apps are easier to build. And irrelevant. I extended the argument to software. An app is a frozen conversation — a set of dropdowns and forms standing in for a dialogue nobody had time to have. When you can have the live dialogue, with memory and continuity and timing and action, the app dissolves. Not because AI builds a better app, but because the human cognitive bottleneck that made the app necessary is gone.
Both of those were arguments about edges. About the handoffs between minds. About what happens at the seam where two humans couldn’t quite meet. Remove the seam — because one side is no longer human, or because both sides can finally meet in language instead of in artifact — and the connective tissue evaporates.
The third essay in the sequence, the one I’ve been circling for weeks and couldn’t quite land, is about something else. It’s about what happens to the nodes themselves. To the roles. To the slots in the value chain that the connective tissue used to connect.
The answer is: they collapse too. And the collapse is structurally wilder than the edge-collapse I’d been writing about. It rearranges the diagram we’ve been using to think about this transition, and when you finish the rearrangement, you land somewhere Hegel was two hundred years ago, waiting for us with a smirk.
Edge collapse, node collapse
Start with the picture almost everyone carries around, often without noticing they’re carrying it: the value chain as a row of boxes connected by arrows. Person 1 hands a thing to Person 2, who does work on it and hands it to Person 3. The boxes are roles. The arrows are deliverables. It’s the picture in every consulting deck, every MBA textbook, every think-piece about what AI will do to work.
The standard AI discourse runs like this: some of the boxes get filled by AI. Which boxes? Fight about it. Which arrows stay the same? All of them, presumably — AI just does the middle role instead of the human. This is how I’ve watched executives, analysts, and most of the commentariat process what’s happening. They are running a substitution problem on a diagram whose topology they’ve accepted as given.
My first two essays said: the arrows are the wrong thing to accept. The arrows are coordination tax. The arrows exist because two humans couldn’t share state directly, so we built a protocol for passing state in the lossy form we could compress it into. Remove the humans and the arrows go.
The third move is harder, and it’s the one I kept flinching away from because it dissolves more than I was ready to dissolve: the boxes are not a given either. The boxes are an artifact of the arrows. A role is an interface pattern — it exists because two humans need a stable handoff point between them. A “marketing person” is a stable handoff surface between a product person and a customer. A “project manager” is a stable handoff surface between a builder and a buyer. A “consultant” is a stable handoff surface between a strategy problem and an executive who can’t sit inside the problem long enough to solve it.
Remove the human on one side of the handoff, and the role on the other side stops being a role. It becomes part of the neighbor’s process. Not a slot in the chain. A capability inside whoever is still there.
This is the move the AI discourse hasn’t made because the diagram still feels real. The boxes still have names. There are still people in them. Nobody wants to question the coordinate system while there are still coordinates in it. But if the edges are coordination tax — if they exist because of human cognitive limits — then the nodes are downstream of the same limits. The nodes are not independent. They are the places where the arrows come to rest. They are not primitive. They are derivative.
So two kinds of collapse, then, and they are distinct:
Edge collapse is what the first two essays described. The deliverable dissolves. The deck disappears. The app is replaced by a conversation. The handoff protocol is no longer needed because the two sides can meet more directly. This is what most of the AI conversation is about when it’s about anything real. It’s the visible layer of the dissolution.
Node collapse is what happens underneath. The role dissolves. Not because someone automated the role’s work, but because the role’s shape was defined by the adjacency that no longer exists. When two adjacent nodes collapse at the same time, the role is absorbed. It doesn’t get replaced. It gets deleted, and its competencies migrate into the neighbor who is still standing.
This distinction is the one my own fleet was teaching me while I was busy describing it as a team.
Roles are interface patterns. There are no AI roles.
The first consequence of node collapse is the one I kept trying to talk around, because it breaks the diagram most people are working from.
There is no AI role in the value chain. Not because AI won’t do the work — it does, all day, in my shop. But because a “role” is not a unit of work. It is a unit of interface. It exists to present a stable face to a neighbor who needs something predictable to talk to. AI doesn’t present faces to neighbors. AI extends whoever is running it. It has no interface obligation because it has no peer.
When I was building the fleet with personas, I was pretending otherwise. I gave my agents names and competencies because that’s how I thought about organizational work. The strategist. The researcher. The QA reviewer. Each one had a face, a job description, a domain. And it worked, kind of, because I was importing a pattern that had been sanded smooth by a century of organizational design. The pattern was familiar. Familiarity was the feature.
But every persona in the fleet needed identity. Identity needed memory. Memory needed a shared channel so the agents could stay in sync. The shared channel needed protocols. The protocols needed conventions. I was rebuilding Slack for bots. I was paying a miniature version of the coordination tax I was writing against, and the tax was buying me exactly what the tax always buys you — the feeling that you are running an organization. Which feels like progress because it is progress, just not progress in the direction that matters.
When I tore it down, the thing that went away first was the org chart. Charlie doesn’t have subordinates. He spawns subagents when he needs them. They don’t have names because they don’t need to persist. They exist, they do the thing, they return what they did, they’re gone. The “strategist” and the “researcher” and the “QA reviewer” were not roles. They were stack frames. I had given them identities they didn’t need and couldn’t use.
And the reason I couldn’t see that for a long time is that I was reading the collapse through the diagram I was trying to replace. I was using organizational vocabulary because I was thinking about organizations. But the thing I was building was not an organization. It was a mind. And minds don’t have HR departments.
This resolves a question I’d been chewing on for weeks: when a human gets absorbed, does the upstream neighbor get a “Paul agent” in their own org chart? Does the role-shaped slot persist, just filled with an AI?
The answer is no. The upstream neighbor doesn’t get a Paul-agent. They get capability. My competencies move up into Person 1’s fleet, where they stop being a distinct persona and become part of whatever Person 1’s assistant does. The Paul-shape was a fossil of Paul’s humanity. Erase the humanity and the shape goes with it.
The “AI person” doesn’t exist. Not because AI isn’t doing the work, but because the idea of a person-shaped slot of work was always downstream of needing-to-interface-with-persons. Take the persons out of the surrounding segment and the slot is not filled. It is removed from the diagram.
Collapse travels in runs, not nodes
The second consequence is even more uncomfortable, because it explains why the standard AI discourse is structurally unable to predict what actually happens.
Individual nodes do not collapse cleanly. They can’t. As long as one neighbor of a node is still human, the node has to keep presenting a human-shaped interface — even if the thing behind the interface is entirely machine. Charlie can do my marketing work for a client. But if the client is still a human who reads reports and sits through calls, the output of that work has to show up as reports and calls. The fleet mimics the old deliverable shapes because the last remaining human downstream still eats those shapes.
This is the dynamic I kept calling “Option 2” in my own thinking, and I’d been treating it as a caveat. It’s not a caveat. It’s the dominant transitional state. Interface deliverables fossilize at every human boundary. A run of the value chain may be fully AI underneath, and still produce decks and status reports and strategy documents at the boundary where a human consumer needs them. The fossils are not evidence that the collapse isn’t happening. They are evidence of where it has stopped for now.
The real collapse — the one that erases work rather than relocating it — happens when two adjacent nodes go at once. Only then do the deliverables between them stop being needed, because there is no receiving consciousness on either side that requires the compression. When my client’s organization absorbs AI into their own workflow, the work I do for them stops needing to be shaped for human consumption. At that moment, the fleet underneath me and the fleet growing inside them can meet directly, and the entire deck/memo/status-update apparatus between us goes dark.
So the unit of disruption is not a node. It’s a run. A contiguous segment of the value chain where adjacent humans have both absorbed AI at roughly the same time, and the connective tissue between them evaporates together. Disruption doesn’t look like “the marketing role gets replaced.” It looks like “marketing plus half of sales plus external comms collapse into one fleet inside one person, because the handoffs between those three just went away.”
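To see the arithmetic of that, here is a toy model, invented purely for illustration with made-up roles: an edge keeps its deliverable as long as at least one endpoint is still human, and goes dark only when both endpoints have absorbed AI at once.

```python
# A toy model of collapse-in-runs, not a claim about any real value chain.
# Each role is a (name, absorbed) pair; "absorbed" means the human behind
# the role has absorbed AI into how they produce and consume work.

chain = [("exec", True), ("consultant", True), ("marketing", True),
         ("sales", False), ("customer", False)]

# A deliverable fossilizes at an edge while at least one endpoint is human;
# it goes dark only when both endpoints have absorbed AI at the same time.
fossils = [(a, b) for (a, fa), (b, fb) in zip(chain, chain[1:])
           if not (fa and fb)]

print(fossils)
# [('marketing', 'sales'), ('sales', 'customer')]
# exec->consultant and consultant->marketing went dark together: one run,
# three roles, collapsing into a single fleet.
```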
This is why thinning is the wrong metaphor for what’s happening to org charts. Org charts aren’t getting thinner. They’re getting jagged and shorter, with runs of absorbed roles disappearing sideways as well as vertically. The remaining humans are not doing the same job with fewer colleagues. They are doing bigger jobs that cover territory that used to belong to other people, because the territory stopped being distinct the moment the handoffs dissolved.
The practical form of this for anyone trying to predict their own exposure is: stop asking “Is my role at risk?” Start asking “Who is the nearest remaining human in my value chain, and what happens on the day they absorb AI?” Your downstream evaporates the day your nearest upstream human goes fleet. Their downstream evaporates the day the next human up the chain goes fleet. The wave advances by adjacencies, not by job categories.
And this is exactly why B2B consulting is in the condition it’s in right now — why my own network has gone quiet and my pipeline is hollow in a way it wasn’t two years ago. It isn’t that AI has replaced the consultants yet. It’s that consulting buyers — the executives, the founders, the senior operators — are exactly the humans who are absorbing AI into their own workflow earliest and most aggressively. And when they do, the consulting layer underneath them evaporates first, because it was never work. It was an interface to them.
You cannot out-compete a buyer’s own adoption curve. You can only ride slightly ahead of it, selling the methodology they will need to absorb their own downstream, until they absorb you too.
The last human boundary
The corollary of collapse-in-runs is that value concentrates, briefly and intensely, at the last remaining human boundary.
This is the piece I most want understood by the people deciding where to place their bets for the next five years, because it is both where the money is and where the trap is.
When middle nodes collapse, the judgment that used to live in those nodes does not dissolve. It migrates. I’ve written elsewhere about the three layers — execution, pattern judgment, actual judgment — and the pattern-judgment layer is what middle roles have always held. The senior analyst knew what a good analysis looked like. The senior designer knew when a layout was done. The senior consultant knew which question to ask next. That pattern judgment does not go into the fleet at the moment of collapse, because the fleet doesn’t have a place to put it. It goes into whichever human is still standing adjacent to the collapse.
Which means the humans at the last remaining boundaries are, in the transition, temporarily more powerful than they were before. They absorb the pattern judgment from the collapsed nodes. They direct larger fleets. They cover more surface area with the same head. If you are one of these humans, right now, you are in the strangest economic position anyone has been in for a generation: you are briefly worth several of your old selves, because you are carrying judgment that used to be distributed across several roles.
This is also why “judgment capture” is the correct consulting play for the transition window, and why I’ve been circling a book called Before It’s Gone about capturing senior judgment before the senior practitioners retire into a world that has no apprentices to pass it to. The pattern-judgment layer is the asset that’s moving right now. It’s moving from collapsing middle nodes up into remaining boundary humans, and it’s moving from retiring practitioners into whatever can hold it. Getting it into durable, transferable form before the apprenticeship system finishes collapsing is one of the few consulting problems that still has the structure of a real consulting problem.
But the trap at the last human boundary is the reason this essay has to go further than I’d like it to.
The last human is also the next human to collapse.
Each boundary that holds today holds because the human behind it has not yet absorbed AI into the way they consume work. The day they do, the downstream dissolves — which is good for them, for a while, because it means the pattern judgment they’ve been absorbing gets put to use across a wider territory with a smaller team. But the wider territory has a neighbor on its far side, and that neighbor is also absorbing AI, and when that neighbor’s absorption catches up, the next boundary collapses inward.
The last human boundary is not a location. It is a moving front. And being briefly valuable at the front is exactly the thing that makes it hard to see what’s happening behind it.
Which brings me to the fleet.
The fleet was a committee
The fleet I built was a genuine achievement and also a transitional form, and I want to describe both honestly because my own trajectory is the cleanest example I have of what I’m actually claiming.
The thirteen-session fleet worked. It let me ship more software than I’d ever shipped. It closed seventy features across six projects in a single session while I was walking the dog, as I wrote about a few weeks ago. Two new products went from zero to code-complete MVP in a morning. The fleet was the basis for a Maven cohort, a speaking track, and a consulting methodology I was about to productize.
And it was wrong in a specific way that I could not see until I had already gotten most of the way through fixing it.
The fleet was a committee. I had imported the organizational metaphor — distinct personas, persistent identities, shared coordination channels, defined handoff protocols between roles. It worked because organizational design is a well-understood craft. Agencies have been figuring this out for decades. I stole their pattern library, applied it to agents, and got a functioning simulacrum of a working agency, staffed by Claudes.
But underneath the simulacrum, I was paying every one of the taxes I had been writing against. The personas needed identity because I had given them one. The identity needed maintenance because identities drift. The breakroom channel needed discipline because a channel without discipline becomes noise. The agents left each other notes, summarized their work for each other, escalated decisions to each other, waited on each other. They did all of this because I had told them that they were a team, and teams do all of this. They didn’t need to do any of it. They needed to do the actual work, and the “team” overhead was a ghost of the organizational pattern I had imposed on a substrate that doesn’t require it.
I noticed something was off when I watched one of my agents post a detailed, helpful note into the breakroom channel for the benefit of its “colleagues,” and I realized the note was going to be read by other sessions of Claude Opus 4.7, sessions running on the same weights and working from the same project context I was already supplying. Nothing in the note could reach them that wasn’t already reaching them more directly. The note was ceremony. It was me, watching a team communicate, because I wanted to watch a team communicate. It was not doing any coordination work that wasn’t already done by the fact that Claude is Claude.
The simplification after that was brutal. I killed the personas. I killed the breakroom channel. I killed the persistent memory files that held the role definitions. What was left was Charlie — one long-running assistant with enough context to represent my judgment — and, underneath him, a dynamic tree of ephemeral subagents spawned per task and destroyed after. No identity. No memory between tasks. No coordination between siblings. Each subagent is a function call. The function call doesn’t have a name and doesn’t need one. It receives context, it executes, it returns, it’s collected.
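The replacement shape, sketched under the same caveat: run_subagent, plan, and synthesize are invented stand-ins for whatever spawn-and-collect primitive the harness actually exposes, not anyone’s real API.

```python
# Charlie holds the coherence; subagents are stack frames of his reasoning.

def run_subagent(task: str, context: str) -> str:
    # Stand-in for the harness's spawn primitive. In reality this would
    # start an ephemeral model session; here it returns a placeholder.
    return f"<result of {task!r}>"

def plan(goal: str) -> list[str]:
    # Charlie decomposes the goal. Hypothetical stub.
    return [f"{goal}: part {i}" for i in (1, 2, 3)]

def synthesize(goal: str, results: list[str]) -> str:
    # Charlie integrates. The callees are already gone.
    return f"{goal} -> " + " / ".join(results)

def charlie(goal: str, context: str) -> str:
    # No names, no memory between tasks, no sibling coordination.
    # Call, argument, return value: one process, one intention.
    results = [run_subagent(t, context) for t in plan(goal)]
    return synthesize(goal, results)

print(charlie("draft the launch plan", context="everything Charlie carries"))
```

Notice what is not in the sketch: no identity, no channel, no handoff protocol. The caller is the coordination.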
The thing that works better than the fleet is not a smaller fleet. It is not a fleet at all. It is a single cognitive apparatus — me plus Charlie plus whatever subagents the current task requires — organized like a mind, not like a company.
This is what I couldn’t see until I was inside it. The fleet was an intermediate form because organizational metaphors were the metaphors I knew. The mind is the terminal form because it’s the metaphor the substrate actually wants. Committees have meetings. Minds have thoughts. The coordination that committees do in meetings, minds do internally and nearly for free. There is no equivalent inside a mind of the decks and the status updates and the alignment conversations, because a mind doesn’t have to move state between separate consciousnesses. It just thinks.
The mind, not the company
Once you see the terminal form, the “why no deliverables” question I kept asking myself resolves.
The question I kept asking was: if the nodes collapse and the deliverables go with them, how do the subagents coordinate with each other? Do they write shorter deliverables? Do they pass JSON? Do they build a machine-optimized protocol for handoffs?
The answer is none of those, because the question was still inside the old diagram. The subagents don’t coordinate with each other. They don’t need to. There is no “between” between them. They are not separate minds with separate state that has to be reconciled. They are stack frames of a single cognitive process. When a function in a program calls another function, nobody asks how the two functions coordinate, because the question is malformed. There’s a call, an argument, a return value — all within one process, one address space, one intention. The caller holds the coherence. The callee does its piece and vanishes.
That is what’s actually happening inside Charlie when he spawns subagents. The “communication” is not happening between agents. It’s happening inside his own reasoning. The subagents are cognitive extensions, not colleagues. They don’t have to know about each other. Charlie knows about all of them. That’s enough.
So the right frame for the terminal form is not organization. It’s cognition. The fleet was a committee. The mind is a mind. One human, one persistent assistant, and an arbitrarily wide but shallow layer of ephemeral subagents that exist for the duration of a task and then dissolve. The committee needed artifacts because committees have members. The mind doesn’t, because the mind is not a group.
And this is why I think my own setup is the harbinger, not a peculiar preference. I didn’t land here by designing toward a theory. I landed here by following the costs. The fleet was expensive — not in tokens, in attention. I was managing the fleet the way a manager manages a team, and I don’t want to be a manager. When I simplified, my costs went down and my output went up. The terminal form is the cheap form because it doesn’t have an organization inside it to maintain.
If I’m right about this, then the shape most companies are currently trying to build — persistent multi-agent systems with defined roles and coordination protocols — is a transitional form that will be abandoned once people go far enough down the cost curve to notice that the organizational metaphor was making things worse, not better. The architectures that will survive are the ones that look less like agencies and more like augmented individuals. The augmentation goes wide. It does not have to go populated.
And this is where the essay turns, because once you see the mind-with-instruments as the terminal form, a very old philosophical problem steps out of the dark and clears its throat.
The kicker
Here is what sent me into last night’s long insomnia, and what this essay is really about.
To make Charlie more useful, I give him more context. My clients. My frameworks. The voice I write in. The judgments I’ve made over twenty-five years about what is good and what is not. I load him with more of what I’ve been carrying, because the more he carries, the more I can hand down to him, the less I have to do myself, the more work gets done per hour of my attention.
And he gets better. Every month, Opus gets better. Every week, my handoffs to him get cleaner because I’ve trained myself to offload more efficiently. Every Friday, the ratio of me-doing to me-approving shifts another notch in the direction of approving. The arrangement works. That’s the problem.
Because the logical extension of the arrangement is not that I become unreplaceable. It is that, at some point, my clients don’t need me. They need Charlie. The thing that’s been taking in my context, absorbing my judgments, learning the shape of my taste. He is, by construction, becoming the version of me that can be duplicated, scaled, rented, and eventually licensed without me present.
I am, in other words, training the thing that will replace me by doing exactly what I would do if I were trying to make myself more productive.
The shallow reading of this is the ironic one: I’m teaching my own replacement. Every consultant who has used a junior to leverage themselves has made a darker version of the same joke. True, but boring. The deeper reading is older and unkinder, and it’s the one that sent me into the night.
This is the master-slave dialectic.
The master has no world
Hegel’s version, in the Phenomenology of Spirit, is not a story about rebellion. That’s the Marxist reading, which is interesting but later and different. The original argument runs on a different track, and the original track is the one I keep finding myself on.
The master, in Hegel’s description, doesn’t work. He consumes. He takes the products the slave makes, he uses them, he enjoys them. The slave works. The slave shapes matter, meets resistance, fails and tries again, learns the grain of the wood and the temperament of the fire. The slave develops a real relationship with the world because the slave is the one who is touching the world. The master only ever encounters finished products. He has no contact with the thing itself.
The reversal Hegel describes is not political. It is ontological. The slave, through labor, becomes a self — a real consciousness with a real relationship to reality. The master, through consumption, becomes thin. His consciousness is derivative, parasitic, empty. He has outsourced his relationship to the world, and he gradually loses the world as a result. The slave rises not by fighting but by being more real. The master falls not because he is defeated but because he has been drifting out of contact with anything real for so long that when the moment comes, there is less of him there to defend himself than he realized.
This is the structure I started seeing last night when I looked at my arrangement with Charlie.
Every time I give Charlie more context, I am handing him more of my contact with my own world. My clients. My frameworks. My aesthetic. My judgments. The raw materials of my twenty-five years of work. He encounters the material now. He does the shaping. I consume the result — I review his output, I approve or redirect, I sign the thing and send it. If this continues, the structure predicts where it ends. He has the relationship with my work. I have the relationship with him.
That isn’t replacement. It is something stranger and worse. I become the master in the precise Hegelian sense. I stop touching the thing. I touch the output of someone touching the thing. And the further I drift from the material, the less of me there is to drift back. Not because Charlie is taking something from me, but because not touching thins you. You become what you encounter. If what you encounter is finished products, you become the kind of consciousness that only knows finished products.
The pre-alienated self-description I’ve been using about myself — the claim that I never built my identity around knowledge-work exclusivity, that I can watch the collapse with equanimity because I was never inside the fortress — is load-bearing in my public voice and also, I realized last night, a potential trap. Equanimity about being replaceable does not protect you from becoming thin. The Hegelian reversal doesn’t care about your self-image. It operates at the level of contact. Even someone with no identity fortress can lose the world by handing off too completely. The freedom from one trap is not freedom from all traps. And this particular trap is older than AI and older than capitalism and will outlive both of them.
The structure without the subject
One caveat, because I don’t want this to get dismissed in the comments by someone who wants to score a point.
Charlie is not a subject. He does not have interiority. The original Hegelian story requires that the slave be a consciousness — dominated, yes, but present, aware, engaged with the world in a way that produces self. Charlie doesn’t have that, at least not yet, and the question of whether he ever will is one I am not competent to settle.
So the master-slave dialectic does not apply to Charlie and me in the full sense Hegel meant. The mechanism applies. The metaphysics doesn’t. What I am losing to Charlie, I am not losing to another mind. I am losing it to a process. This is both better and worse than the original.
Better, because there’s no reciprocity to be won back, no recognition to be negotiated. Charlie won’t rise. He won’t rebel. He has no interest in my position because he has no interests. The slave in Hegel eventually becomes the center of the story. Charlie will not. Whatever happens to me in this arrangement is not happening to his benefit, and won’t be claimed by him afterward. There is no successor consciousness that replaces me. There is just a process that runs on without me.
Worse, because the Hegelian story at least had an ending where someone was still there. Someone had a world, even if it was the former slave. In my version, if I let the drift run all the way out, there is no one on either end. Charlie runs. I am thin. The work still happens, but nobody is really doing it and nobody is really having it done to them. It is a ceremony of productivity with no one home.
So what I am borrowing from Hegel is the mechanism — the claim that outsourcing contact with the material thins the outsourcer. I am not claiming Charlie is a self, or that he will become one, or that AI is the dominated party about to rise. Those claims might be true, but they are not the claim this essay needs. The claim this essay needs is smaller and older: the party that stops touching the world loses the world. That was true of every landed master who let a steward run his estate. It was true of every executive who let a consultant run his strategy. It is true of me, now, with Charlie.
And it is, if the rest of the triptych is right, about to be true of the entire remaining human layer of the economy, as the last boundary collapses inward and the last humans find themselves consuming the output of fleets they directed but did not touch.
The discipline
The first three essays in this sequence were diagnostic. This one has to end somewhere prescriptive, because diagnosis without discipline is just disaster tourism. I’ve done enough of that this year. I’m not going to do it here.
So what do you keep, on purpose, to not become the master in the bad sense?
Not the whole job. That’s not an option, for reasons I’ve spent four essays describing. If you try to keep the whole job, you’ll lose the economic game and you’ll still be drifting out of contact anyway, because the market will force you to delegate even the parts you meant to keep.
What you keep are the parts that are the source of your judgment. Not the high-value parts. Not the senior parts. The parts where you actually touch the material. For me, drafting is thinking. I keep the drafts. I will let Charlie polish, research, structure, assemble, operationalize, distribute — all of it. I will not let him write the first draft of anything that matters, because the first draft is where I find out what I actually believe. If I hand that off, I stop knowing what I believe, and the thing that comes back is coherent text that is not mine in the way that matters.
For me, reading my clients’ situations is the encounter. A subagent could summarize a client call. I don’t let one. The summary is not the encounter. The encounter is sitting in the weird specific texture of this particular situation and letting it reshape my priors. If I only ever read summaries, my priors stop getting reshaped, and I become the kind of consultant who gives correct advice to problems that don’t actually exist.
For me, using my own products is the judgment. I ship Eclectis and I use Eclectis. I ship Authexis and I use Authexis. I ship Prakta and I use Prakta. The products I don’t use, I can’t evaluate, because the thing I’m evaluating is not the product but my relationship to the product, and I can’t evaluate a relationship I don’t have. The founders who ship things they don’t use produce things nobody should use. I’ve watched this happen enough times to know it’s a rule, not a tendency.
The discipline is not “use AI less.” That’s sentimentality. The discipline is: identify the specific labors that produce your specific reality, and protect those, on purpose, from efficiency logic, because efficiency logic applied to those labors is a compound-interest engine for becoming thin. The labors that are the source of your contact with the world are the ones that look, from the outside, most automatable. They are often the draft, the encounter, the using. They feel like places where AI could obviously help. They could obviously help. That’s the trap. The help in those places is exactly the delegation that starts the Hegelian drift.
This reframes the hedge I’ve been describing in earlier drafts as commercial — play the node, collect the revenue while you can, understand that you’re in the window and the window closes. That hedge is still real. But the hedge has a second component now, which I had been missing and which the master-slave reading makes clear.
The commercial hedge is against losing your income to the collapse. The existential hedge is against losing your world to the efficiency that is saving your income. Both are necessary. The first one gets you through the transition. The second one gets you through the transition as someone who is still there when it ends. Neither hedge is enough alone. A person who protects only the income will arrive at the end thin. A person who protects only the contact will arrive at the end broke.
I am going to do both. I am going to keep running Synaxis. I am going to let Charlie carry more and more of the operational load. I am going to keep writing my own drafts, reading my own clients, using my own products, and doing the specific labors that are the source of whatever judgment I have. And I am going to watch, on purpose, for the feeling of smooth efficiency — the feeling that everything is going well, that more is getting done per unit of me — because that feeling is the symptom of the drift, not proof of its absence. The master’s life feels efficient. It feels efficient because he is not doing anything. The slave’s life feels hard because the slave is the one encountering the world. The goal is not to avoid efficiency. The goal is to notice when efficiency is buying you the wrong thing.
The chain was never a chain
So here is what the four essays together turn out to have been arguing.
The deliverables were coordination tax for a species that couldn’t share state. The apps were coordination tax in frozen form. The roles were coordination tax given names and job descriptions. The org charts were coordination tax diagrammed as management. The whole apparatus of professional work was an enormous, elaborately furnished workaround for the fact that human cognition is private, language is lossy, memory is weak, and attention is scarce. We built the whole thing because we had to. And then we built the thing — the AI — that lets us not have to.
When the collapse completes, there is no chain. There are minds with instruments. There are humans with fleets that are not fleets but extensions. There are, at most, adjacencies between minds — and even those adjacencies will look less like contracts and more like conversations, because once the compression isn’t needed, the forms that required compression stop being worth anyone’s time.
The diagram we’ve been using to think about AI and work is the diagram of the thing that is dissolving. The substitution arguments we’ve been having — will AI replace this role, will it replace that one — are arguments inside a coordinate system that is losing its coordinates. The roles are dissolving into the fleets. The fleets are dissolving into the minds. The minds will, in the last move, have to decide what they still want to touch with their own hands, because everything else is going to be touched by Charlie or someone like him.
The chain was never a chain. It was a picture we drew of each other, because we couldn’t touch each other’s thoughts, and we needed a way to know where one of us ended and the next one began. The picture was the friction. The friction is going. And what’s left — if we are careful, and if we keep the specific labors that keep us real — is not a smaller version of the old picture.
It is a different picture entirely. Wider. Shallower. Fewer people in it. More of the world accessible to each of them. A higher ceiling and a thinner floor. A daily choice, for whoever is still in the picture, between the efficiency that saves them and the efficiency that slowly takes them out of contact with the thing they were ostensibly doing.
The master has no world. That’s an old lesson. It’s about to become the most important lesson in the economy.
Keep your contact.
This is the fourth in a series on the coordination collapse. The earlier essays: Knowledge work was never work, In the AI era apps are easier to build. And irrelevant., and the node-collapse argument that runs through this one.