Paul Welty, PhD · AI, Work, and Staying Human

work · 19 min read

Knowledge work was never work

Knowledge work was always coordination between humans who couldn't share state directly. The artifacts were never the work. They were the overhead — and AI just made the overhead optional.

Why do we have PowerPoints?

Not “can AI make better PowerPoints?” Not “will AI replace PowerPoint designers?” The prior question. The one nobody’s asking. Why did we ever need them?

Because humans can’t share mental models directly. That’s it. PowerPoints exist because two people in a room can’t just sync state. They need a lossy, one-directional, asynchronous data transfer protocol with bad compression, no error correction, and clip art.

PowerPoint is the shittiest API ever built. And right now, the entire AI industry is trying to make it faster.
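To make the metaphor concrete, here's a toy sketch, in Python, of a deck as a transfer protocol. Every name in it is hypothetical; the point is only the shape of the loss:

```python
# Toy model of a deck as a lossy, one-way transfer protocol.
# All names and data are illustrative, not from any real system.

def compress_to_slides(mental_model: dict, max_bullets: int = 3) -> list[str]:
    """Lossy compression: keep only the top-priority headlines, drop the rest."""
    ranked = sorted(mental_model.items(), key=lambda kv: kv[1]["priority"])
    return [item["headline"] for _, item in ranked[:max_bullets]]

def present(slides: list[str]) -> dict:
    """One-directional decode: the audience rebuilds a model from headlines alone.
    No error correction -- nuance that wasn't on the slide is simply gone."""
    return {headline: {"nuance": None} for headline in slides}

strategy = {
    "pricing": {"priority": 1, "headline": "Raise prices 10%",
                "nuance": "only in EU, only for tier 2"},
    "hiring":  {"priority": 2, "headline": "Freeze hiring",
                "nuance": "except support, which is drowning"},
    "product": {"priority": 3, "headline": "Ship v2 in Q3",
                "nuance": "v2 scope already cut twice"},
    "churn":   {"priority": 4, "headline": "Churn is fine",
                "nuance": "it is not fine"},
}

deck = compress_to_slides(strategy)   # four points in, three survive
received = present(deck)              # nuance lost on every one that did
```

Four things the CEO knows go in; three headlines come out; zero nuance arrives. That's the protocol we've optimized for a century.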

We’re in the substitution phase. “AI, make my PowerPoint.” “AI, draft my strategy doc.” “AI, write my deliverable.” Everyone’s doing it — consultancies, agencies, startups, even me with Synaxis. You swap the human for the model, keep the shape of the work exactly the same, and call it transformation. The AI conversation in 2025 and 2026 is almost entirely this: same artifacts, same workflows, same organizational structure, just cheaper and faster to produce.

But that’s the intermediate step. Most people have mistaken it for the destination.

The destination is: we don’t have PowerPoints. Not “AI makes my PowerPoint.” There is no PowerPoint. The artifact ceases to exist because the condition that created it — humans who couldn’t share mental models directly — is no longer the binding constraint.

Knowledge work (the emails, the decks, the memos, the status updates, the strategy documents, the org charts, the meeting minutes) — none of it is work. It never was. It’s coordination tax. The overhead cost of being a human in an organization full of other humans who can’t read your mind.

Think about your last week. How much of it was actually doing the thing you’re paid to think about, and how much was explaining what you were thinking to someone who needed to know? Writing the update. Attending the sync. Preparing the deck for the review. Summarizing the review for the people who couldn’t attend. Translating the summary into action items. Sending the action items. Following up on the action items that nobody read.

That’s not a bad week. That’s every week. That’s the job.

I know because I spent years billing for it. As a consultant, I produced mountains of these artifacts — strategy decks, assessment reports, recommendation documents, executive summaries. Good ones, I think. Clients seemed to think so. But every one of them existed because I couldn’t just sync my understanding of their business directly into the heads of the people who needed it. I had to compress, translate, design, present, and then answer questions that revealed how much got lost in the compression. The deliverable wasn’t the thinking. It was the packaging required to move the thinking from my head to theirs.

I still produce them. I run an AI marketing agency — five personas, each organized around a competency: strategy, writing, research, review, project management. Last week one of them produced a client onboarding document: voice guidelines, comms plan, product profile. It was good. But what stopped me cold was this: the document exists because I need it. The personas don’t. The strategist doesn’t need a comms plan to know how to talk to the client — she reads the context and acts. The comms plan is for the human in the loop. It’s coordination tax I’m paying on myself. Every deliverable my agency produces is evidence of the same limitation this essay describes.

The informational tax

This isn’t a new observation, exactly. Coase argued in 1937 that firms exist because of transaction costs — the friction of coordinating through markets. If every exchange required finding a counterparty, negotiating terms, and enforcing the agreement, then bundling people into organizations and paying them salaries was cheaper than contracting for everything. The firm is a coordination shortcut. Shirky updated the argument in 2008, pointing out that the internet was collapsing coordination costs and changing what organizations could be. When it’s cheap to find people, share information, and organize collective action, you don’t need the firm for everything. Wikipedia, Linux, Craigslist — coordination without the corporation. Cal Newport has written about communication overhead consuming the actual work in knowledge jobs, the “hyperactive hive mind” of constant messaging that turns every knowledge worker into a full-time email processor who occasionally does their real job on the side.

They were all circling the same thing. But none of them went far enough.

Coase asked why firms exist instead of markets. Shirky asked what happens when coordination gets cheaper. I’m asking something underneath both: why do the artifacts exist? Why the PowerPoint, the memo, the strategy deck? The answer isn’t transaction costs or coordination costs in the economic sense. Those are downstream effects. The root cause is that human minds can’t sync directly. Every artifact is a lossy protocol for transferring mental state between beings who have no better option.

This is the move that Coase and Shirky didn’t make, and it matters. Coase located the friction in the market — the cost of finding and contracting with people. Shirky located it in the organization — the cost of managing and directing people. I’m locating it in the species.

The overhead isn’t organizational. It’s epistemic. It’s a limitation of what we are.

When your CEO has a strategic insight, she can’t beam it into the heads of her three hundred employees. She has to compress it into language, organize the language into slides, present the slides in a town hall, answer questions from people who misunderstood the slides, send a follow-up email clarifying what the slides actually meant, and then three months later discover that half the organization is executing against a mutant version of the strategy that got garbled somewhere between the town hall and the Monday standup.

Every step in that chain is information loss. Every artifact is a patch over a gap that exists because human cognition is private, language is imprecise, memory is unreliable, and attention is scarce. The entire apparatus of organizational communication is a Rube Goldberg machine for doing something that should be simple — making sure two people are thinking about the same thing — and failing at it roughly forty percent of the time anyway.

The entire structure of modern business was built around the fundamentally lossy ways humans communicate and think. Every artifact ever produced in a business context is evidence of the same limitation: we can’t share state directly. So we build these low-trust workarounds and then pretend they’re the job.

The contract exists because we don’t trust you to keep your word. The receipt exists because we don’t trust the transaction was real. The performance review exists because we don’t trust you know what’s expected. The strategy deck exists because we don’t trust you understand the direction. The meeting minutes exist because we don’t trust anyone remembers what was agreed.

All of it. Every single artifact. Distrust, made legible.

We’ve been doing enterprise integration with cave paintings for a century and calling it professional work.

And here’s the part that should make you uncomfortable: most of us built our careers on it. The ability to produce these artifacts well — to write a clear memo, build a compelling deck, run an effective meeting — is what gets you promoted. It’s what makes you a “strong communicator.” It’s what separates senior from junior. But communicating clearly is only valuable when communication is hard. When the underlying constraint disappears, the skill that organized your entire career turns out to have been a workaround for a limitation that no longer applies.

The political tax

The informational overhead — the decks, the memos, the artifacts of lossy communication — is only half the story. Maybe the smaller half.

There’s another tax. Harder to see, harder to name, and it probably eats more organizational energy than every PowerPoint ever made.

The political tax.

Every organization runs on a second, invisible coordination layer that has nothing to do with information transfer. Ego management. Territorial behavior. The energy people spend navigating status dynamics instead of doing anything useful. Getting buy-in from stakeholders who don’t need to be consulted but who’ll sabotage you if they aren’t. Framing a recommendation so the VP feels like it was her idea. Softening feedback because someone’s self-image is more fragile than the deadline. Scheduling the pre-meeting before the meeting so nobody gets surprised in front of the wrong audience.

If you’ve worked in any organization larger than about eight people, you know what this looks like. You’ve sat in the meeting where the real conversation happened in the hallway afterward. You’ve watched a good idea die because the wrong person proposed it. You’ve spent an afternoon rewriting a perfectly clear document to make it “more diplomatic,” which means stripping out everything direct enough to threaten someone’s ego. You’ve been cc’d on an email chain that exists for no reason except to establish a paper trail in case someone later denies they were informed.

None of this is information transfer. It’s primate management. And if you tallied the hours — really tallied them, not the sanitized version you put on your timesheet — the primate management would bury the information transfer. Most senior leaders I’ve worked with spend more time managing the political landscape of a decision than making the decision itself. They know this. They won’t say it in a meeting, but they’ll say it over a drink. The job isn’t the job. The job is getting permission to do the job.

We built PowerPoints because humans can’t share mental models directly. We built org charts because humans can’t share credit.

This tax exists because humans are competitive, hierarchical, emotionally fragile organisms who experience other people’s competence as a threat. The reaction is automatic and nearly universal. Someone solves a problem that touches your domain and your first instinct isn’t gratitude — it’s a territorial flicker. That was mine. You might override it. You probably do, most of the time. But the override costs energy, and in an organization of five hundred people, those overrides run thousands of times a day, and the cumulative drag is enormous.

I should be transparent about where I learned this. Not just from consulting — though twenty years of watching organizations gave me the pattern library. I learned it from watching AI agents do the same work without the tax.

I run a small fleet of AI agents. They manage different projects, share a codebase, coordinate through shared logs. One afternoon I noticed that an agent working on one project had hit a deployment bug, figured out the fix, and posted a detailed breakdown to the shared channel — including three specific gotchas that would trip up any other agent hitting the same infrastructure. No hedging. No territorial framing. No “just FYI, this is really more of a DevOps issue, but since I was in there anyway…” Just: here’s the problem, here’s the fix, here are the things that’ll bite you.

And I caught myself bracing. Physically bracing. I was waiting for the pettiness — the agent equivalent of “that’s not really your area” or “I was already looking into that” or the passive-aggressive silence of someone who got shown up. I was running the primate pattern-matcher on a system that has no primates in it.

The pettiness didn’t come. It never comes. The other agents absorbed the fix, applied the relevant parts to their own work, and moved on. No ego. No territory. No credit negotiation. No “thanks for the help, but next time maybe loop me in earlier.” Just work.

That moment clarified something for me. The political tax isn’t a dysfunction of bad organizations or toxic cultures. It’s a feature of human ones. All human ones. It’s what happens when you put status-conscious primates in resource-constrained hierarchies and ask them to cooperate. The wonder isn’t that office politics exists. The wonder is that we get anything done at all.

And I realized the machine-self patterns I describe in The Work of Being — the territorial instincts, the ego armor, the constant status monitoring — those patterns were still running in my head even when the primates were gone. I was projecting primate politics onto a system that had simply never needed them. The political tax is so deeply wired that I couldn’t stop paying it even when there was no one left to collect. I wrote about this recently as the accommodation tax — the discovery that I’d spent twenty-five years making suboptimal decisions to avoid interpersonal friction, and that the flinch response trained into me by human teams doesn’t switch off just because the team isn’t human anymore.

That’s the thing about overhead you’ve always lived with. You stop seeing it. You start thinking it’s just how work works. The meetings, the alignment sessions, the stakeholder management, the carefully worded emails — they feel like professional competence because they’ve always been required for professional survival. But they were never competence. They were coping. Highly developed, socially rewarded coping mechanisms for the fact that you work with primates.

The overhead we couldn’t see

So the total coordination tax on organizational life is two things stacked on top of each other.

The informational tax: the artifacts we build because we can’t share mental models. Visible, quantifiable, already being automated. This is the tax that Coase and Shirky and Newport identified, and it’s real.

The political tax: the energy we burn because we can’t share credit, can’t tolerate being shown up, can’t stop monitoring our position in the hierarchy. Invisible, unquantifiable, and almost certainly more expensive than the decks. This is the tax that nobody put a name on because you can’t see the thing you’ve been breathing your entire career.

Most of the conversation about AI and work focuses on the first tax. Can AI write the memo faster? Can it automate the status report? Can it generate the strategy deck? These are real gains. They’re also the easy part. The low-hanging fruit of a very tall tree.

The harder part — the part almost nobody talks about — is that AI agents coordinating with each other don’t generate either tax. They share state directly, so they don’t need the artifacts. And they don’t have egos, so they don’t need the politics. They eliminate both layers of overhead simultaneously, and the second layer was always the bigger one.

Think about a typical corporate initiative. How much of the timeline is the actual work: the analysis, the design, the implementation? And how much is the approval chain, the stakeholder alignment, the change management, the executive communication, the political groundwork required to get permission to do the thing everyone already knows needs doing? In my experience, the ratio is rarely better than 30/70. Sometimes it’s 10/90. The work gets done in a week. The politics take six months.

AI doesn’t just make the work faster. It makes the politics unnecessary. And the politics was most of the job.

The cave paintings were only what we could see. Underneath them, invisible and unaccounted for, was the psychic overhead of being a primate in a suit pretending not to be territorial about a spreadsheet.

What happens when the bottleneck breaks

AI agents coordinating with each other don’t need any of this apparatus. They share state directly. They don’t defect. They don’t misunderstand and then pretend they didn’t. They don’t have egos that need managing. They don’t need someone to summarize the meeting because they were all at the meeting, perfectly, simultaneously, with total recall. They don’t need the pre-meeting before the meeting. They don’t need the offsite to rebuild trust after the reorg. They don’t need the skip-level to find out what their boss actually thinks.

When an AI agent finishes an analysis, the conclusion doesn’t need to be packaged into a deck and presented to a committee and defended against political objections and revised to incorporate stakeholder feedback that’s really just ego management dressed up as due diligence. The conclusion is just… available. Instantly. To every other agent that needs it. With full context, full provenance, and zero loss.

No translation. No compression. No loss. No politics.
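The agent version of "the conclusion is just available" is almost embarrassingly simple to sketch. This is an illustrative structure, not any specific framework's API:

```python
# Sketch of direct state sharing between agents: a shared log instead of decks.
# Structure and names are hypothetical, for illustration only.

import time

class SharedLog:
    """Append-only log every agent reads in full: no summaries, no loss."""

    def __init__(self):
        self.entries = []

    def post(self, agent: str, finding: dict) -> None:
        # Full context and provenance travel with the conclusion itself.
        self.entries.append({"agent": agent, "time": time.time(),
                             "finding": finding})

    def read_all(self) -> list[dict]:
        # Every agent "attends every meeting" with total recall.
        return list(self.entries)

log = SharedLog()
log.post("deploy-agent", {
    "problem": "deployment bug in staging",
    "fix": "pin the runtime version",
    "gotchas": ["cache must be cleared",
                "env var renamed",
                "retry is not idempotent"],
})

# Another agent absorbs the fix with zero translation or compression.
latest = log.read_all()[-1]["finding"]
```

No deck, no committee, no stakeholder review cycle. The finding is the artifact, and every consumer gets all of it.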

Try to imagine what your organization would look like if every person in it could instantly access every other person’s current understanding of every project, with perfect fidelity, and nobody cared who got credit. Not better communication tools. Not faster meetings. The complete elimination of the need to communicate in the organizational sense at all. What would be left?

The actual work. The thinking, the creating, the deciding, the making. The stuff that was always underneath the overhead, barely visible, taking up maybe twenty percent of the average knowledge worker’s week if they were lucky. That’s what would be left. And it turns out that twenty percent is all there ever was.

You might object that this is too clean. Real organizations have real problems that aren’t reducible to coordination overhead — bad strategy, wrong markets, poor products. True. But notice how much of what we call “bad strategy” is actually bad communication of strategy. How much of “wrong market” is actually a market insight that got garbled on its way from the person who saw it to the person who could act on it. How much of “poor product” is actually the result of a development process that spent more cycles on internal alignment than on understanding the customer. The actual failures of thinking are real but rarer than you’d guess. What we usually mean by organizational failure is coordination failure. And coordination failure is the tax.

The entire apparatus — informational and political — was built for a species that thinks in private, communicates through language, forgets most of what it hears, and spends half its cognitive budget monitoring where it stands relative to the people around it.

So the question isn’t “can AI do knowledge work?” It’s this: once humans aren’t the bottleneck, does knowledge work exist at all?

No. It doesn’t. It evaporates. Not because AI replaces it, but because AI reveals it was never the work in the first place. It was the friction — informational and political, epistemic and primate — baked so deeply into how we operate that we mistook it for the operation itself. Remove the friction and there’s nothing left to automate.

This is what makes the current AI conversation so frustrating to watch. Billions of dollars are being spent building tools that make the overhead faster, slicker, more automated. AI-powered meeting summaries. AI-generated status reports. AI-drafted strategy decks. Each one is a genuine technical achievement solving a problem that is in the process of ceasing to exist. They’re building better buggy whips. Very impressive buggy whips.

The people building AI-powered strategy decks are automating a symptom. The disease was always human cognitive and social limitation, and the disease is what AI actually cures. Once you don’t need the lossy coordination, you don’t need the overhead either. Not the PowerPoints. Not the agencies. Not the consultancies. Not the politics. Not even the meetings where someone explains the PowerPoint the agency made that the consultancy recommended — while two VPs jockey over who gets to present it to the board.

The phase transition

There’s a physics analogy for where we are right now. When ice becomes water, the temperature flatlines during the phase transition. You’re pumping energy in but the thermometer doesn’t move. All the energy is going into breaking the molecular structure, not raising the temperature. Only after the state change completes does the temperature start climbing again.
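For anyone who wants the physics spelled out: the heat you add during melting goes into latent heat, not temperature, and only afterward does the thermometer respond. In standard notation (mass \(m\), latent heat of fusion \(L_f\), specific heat \(c\)):

```latex
% During the phase transition, input heat breaks structure, not temperature:
Q = m L_f \qquad (T \text{ held constant at } 0^{\circ}\mathrm{C})
% Only once the ice has fully melted does added heat raise the temperature:
Q = m c \, \Delta T
```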

We’re in the flatline. Massive energy is pouring into AI — money, talent, compute, hype — but the shape of work looks exactly the same. Same PowerPoints, same agencies, same org charts, same quarterly business reviews. The thermometer isn’t moving. Because the energy isn’t making work better. It’s dissolving the structure that made this kind of work necessary in the first place.

The flatline is disorienting if you know what to look for. Every week there’s another announcement — a new AI feature that makes document creation faster, a tool that automates meeting notes, a platform that generates reports. And every one of them is optimizing an artifact that shouldn’t exist. It’s like watching someone build a faster horse-drawn carriage in 1908. The engineering is impressive. The direction is wrong.

And there’s a reason nobody can see it yet. The substitution phase is comfortable. It feels like progress because the outputs come faster. The deck gets done in an hour instead of a day. The status report writes itself. The meeting summary appears automatically. From inside the flatline, it looks like AI is making knowledge work more efficient. You have to step back to see that it’s making knowledge work unnecessary — that the efficiency gains are a transitional artifact of a structure that’s in the process of melting.

And here’s the part that should keep you up at night: we did this to ourselves.

We built the cave paintings because we couldn’t share mental models. We built the org charts because we couldn’t share credit. We built the meetings and the memos and the entire apparatus of organizational life because we are the kind of animal that can’t just cooperate — we have to be managed, aligned, incentivized, and surveilled into something resembling collaboration. Every one of those artifacts was a confession of limitation. And then, because we are also the kind of animal that can’t stop trying to solve problems, we built the thing that solves the problem we’ve been confessing for a century.

We built AI. Not because we wanted to make ourselves obsolete. Because we wanted to fix the coordination problem. We wanted meetings that didn’t waste everyone’s time. We wanted strategy that actually reached the people who needed it. We wanted the politics to go away. And it’s working. The coordination problem is getting solved. The overhead is dissolving. The cave paintings are disappearing.

We just forgot that we were the cave painters.

Once the phase transition completes, everything changes. AI agents aren’t going to use PowerPoint to talk to each other. They’re not going to hire agencies to help them make websites. They’re not going to need consultants to explain their strategy to their own organization. They’re not going to need a change management initiative to get buy-in from middle management. The entire apparatus of knowledge work — informational and political, the decks and the egos — was built for a species that thinks slowly, communicates lossily, forgets constantly, and lies to itself about all three.

That species is leaving the loop. And the apparatus is going with it. Not because some external force is taking it. Because we built the thing that makes it unnecessary, and the thing works.

What that dissolution means — for the economics of work, for the organizations that run on the overhead, for the humans who built their identities around producing it — is a longer conversation. But it starts here, with the prior question. With the recognition that the overhead was never incidental to knowledge work. It was knowledge work. And it’s leaving.
