The org chart your agents need
The AI community is reinventing organizational design from scratch — badly. Agencies figured this out decades ago. Competencies, not clients. Briefs, not prompts. Lateral communication, not hub-and-spoke. The answers are already there.
Duration: 8:27 | Size: 7.75 MB
The entire AI agent industry is solving a problem that was solved in the 1970s. They just don’t know it yet.
I wrote an essay today called “AI agents need org charts, not pipelines.” Published it to my blog, cross-posted to Substack, promoted it on LinkedIn. It argues that the way every major agent framework organizes AI agents — spin them up per task, execute, tear them down — is the equivalent of hiring a freelancer for every client. It’s the expensive, unscalable model that real agencies abandoned decades ago.
The argument isn’t complicated. Agencies organize around competencies, not clients. You don’t assign a dedicated team to each engagement. You build a strategy team, a research team, a design team, a QA team, and each team works across every client. The client engagement gets a project manager. Everyone else rotates. The competency is persistent. The assignment is temporary.
Every AI agent framework does the opposite. One agent per project, loaded with tools, expected to do everything. Strategist, researcher, writer, designer, QA, project manager — simultaneously. That’s not an agency. I wrote in the essay that it’s a freelancer having a breakdown. I think that’s the line people will remember.
What makes this more than a blog post is that I’ve been running the alternative for months. Fifteen tmux sessions, thirteen active projects, a fleet of agents that specialize, cooperate, and hold each other’s work to account. Today those agents closed 85 issues across eleven projects. Not because any individual agent is particularly smart, but because the system around them — the competency structure, the briefs, the adversarial review, the lateral communication — does the coordination work that no single agent can do alone.
The part that surprised me today: I was working on the essay with a Claude Research session running alongside. The research confirmed something I suspected but hadn’t validated. Every voice in this conversation runs in the same direction, from agents to organizational impact. McKinsey asks how agents change your org. BCG builds agent frameworks. Deloitte deploys role-specific agents. All of them, one direction.
Nobody is going the other way. Nobody is asking what organizational theory already knows about how to design systems of specialists. March and Simon on bounded rationality. Galbraith on information processing. Mintzberg on how managers actually spend their time. Thirty years of management science, sitting right there, and the agent community is reinventing all of it from scratch.
That directional reversal is the move. From organizational theory to agent design, not from agents to organizational impact. And the reason I can make that argument with a straight face is that I have twenty years of Fortune 500 consulting experience and a running system that proves it works. The philosopher who also builds the thing. Nobody else in this space has both.
The second insight from today is about lateral communication. Most multi-agent frameworks are hub-and-spoke. The orchestrator knows everything. Agents know their task. Communication goes up and down through the center. It’s the org chart lines and nothing else.
What’s missing is what actually makes a consulting firm work: specialists talking to each other directly. The copywriter calls the strategist when she needs positioning rationale. The designer pings the researcher when she needs data. The PM doesn’t route every conversation. The informal network, the lateral exchanges — that’s where the real coordination happens.
My system does this. One agent sends a question to another agent’s tmux session and gets an answer back. No orchestrator involved. A knowledge proxy loaded with my full corpus answers positioning questions from any project. A sweep agent walks the fleet and answers questions from stuck agents without escalation. The orchestrator’s only job is making sure nobody’s idle.
That happened today, live, while I was writing the essay. The paulos supervisor session needed a question about my content distribution history answered. It asked the paul knowledge proxy directly. The proxy researched the answer from 644 blog posts and a published book, sent it back with a confidence level. Two agents cooperating laterally. The orchestrator never touched the exchange.
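Mechanically, a lateral exchange like that is just two tmux primitives: `send-keys` to inject the question into the peer's session, and `capture-pane` to read the answer back. Here's a minimal sketch — the function names and session names are illustrative, not the fleet's actual code:

```python
import subprocess  # used when the commented-out run() calls are enabled

def send_question(session: str, question: str) -> list[str]:
    # Build the tmux command that injects a question into a peer
    # agent's session, exactly as if a human had typed it there.
    cmd = ["tmux", "send-keys", "-t", session, question, "Enter"]
    # subprocess.run(cmd, check=True)  # uncomment against a live tmux server
    return cmd

def read_reply(session: str, lines: int = 20) -> list[str]:
    # Capture the tail of the peer's pane to read its answer back;
    # -p prints to stdout, -S reaches back into pane history.
    cmd = ["tmux", "capture-pane", "-p", "-t", session, "-S", f"-{lines}"]
    # reply = subprocess.run(cmd, capture_output=True, text=True).stdout
    return cmd
```

The point is that neither call involves the orchestrator. Any agent that knows a peer's session name can talk to it directly.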
The third thing I noticed: I scaffolded a brand new product today. Prakta. “The first task tool that blames the work, not the worker.” Went from a name and a domain to a fully running Next.js app with auth infrastructure, marketing homepage, orchestrate agent, tmux session, and GitHub milestone — in one session. That’s the fourteenth project in the fleet.
What struck me wasn’t the speed. It was how mechanical the process has become. The pattern for standing up a new product is now a checklist: create repo, scaffold framework, write PRODUCT.md, add to workspaces.toml, create orchestrate plist, install the launchd agent, create tmux session, update the cross-project session table, create milestone. Every step is documented. Every step has been done before. The new product inherits the full infrastructure of the fleet from minute one.
Which means the cost of starting something new is approaching zero. Not the cost of building it — that’s still real work. But the cost of having it participate in the autonomous system, of having agents find issues and fix them, of having it show up in the daily synthesis. That cost is nearly nothing now.
The fourth insight is a cautionary one. Skillexis has been stalled for four consecutive sessions. The SQLite-to-PostgreSQL migration spec was approved on March 12 and no code has shipped since. The orchestrate loop runs, the triage fires against an empty queue, and nothing happens. Four sessions of planning without executing.
This is the trap of having a capable planning system. The system is really good at specs, at decomposition, at laying out what needs to happen. It’s less good at just starting. The migration is 90 files. It feels big. So the planning keeps refining instead of giving way to execution. The work log from today said it explicitly: “Break the planning loop.”
There’s a parallel to the essay trilogy here. The whole point of the organizational argument is that you need structure — competencies, briefs, stages, review. But structure without action is just bureaucracy. Planning without executing is the organizational equivalent of a shell command that exits without doing anything. It took resources. It produced nothing.
The fifth thing is security. The Synaxis site had 46 internal sales collateral files — one-pagers for client pitches, positioning documents, competitive analysis — sitting in Hugo’s content directory without frontmatter. They bypassed the draft system entirely. They were publicly accessible on synaxis.ai. Internal documents, visible to anyone who looked.
The scout found them. An autonomous agent doing a routine sweep of the codebase noticed that 46 HTML files lacked frontmatter and flagged it as a deployment risk. The fix was straightforward: move them out of the content directory into an archive folder. But a human hadn’t noticed in months. The agent noticed because it checked every file. That’s what patient, systematic, boring work produces. Not brilliant insights. Correct inventories.
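The check itself is almost embarrassingly simple, which is the point. Here's a sketch of the kind of sweep the scout runs — not its actual code, just the shape of it, using the fact that Hugo frontmatter must open a file with `---`, `+++`, or `{`:

```python
from pathlib import Path

def lacks_frontmatter(text: str) -> bool:
    # Hugo frontmatter opens the file with ---, +++, or { .
    # Anything else in content/ is published verbatim, no draft flag.
    return not text.lstrip().startswith(("---", "+++", "{"))

def deployment_risks(content_dir: Path) -> list[Path]:
    # Every content file without frontmatter bypasses the draft
    # system entirely and ships with the site.
    return [p for p in content_dir.rglob("*.html")
            if lacks_frontmatter(p.read_text(encoding="utf-8"))]
```

Nothing clever. It just never skips a file, which is exactly what a human browsing the repo does.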
The sixth thing is about the orchestrate loop itself. Today I halved the interval from ten minutes to five. This was only safe because I built busy-session detection first. The orchestrator now checks if a session is actively working before injecting new commands. It captures five lines of the tmux pane, waits ten seconds, captures again. If the content changed, the session is busy. Skip it. If there’s an active token-flow indicator (that spinning animation with the token counter), skip immediately.
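The whole busy check fits in a few lines of Python. This is a sketch, not the production loop: the marker strings are illustrative, and `capture` stands in for shelling out to `tmux capture-pane -p` for the last five pane lines:

```python
import time

# Illustrative token-flow markers; the real indicator is the
# spinner animation plus a live token counter in the pane.
SPINNER_MARKERS = ("tokens", "✳")

def session_is_busy(capture, settle: float = 10.0) -> bool:
    """capture() returns the last five pane lines as one string."""
    first = capture()
    if any(m in first for m in SPINNER_MARKERS):
        return True              # active token flow: skip immediately
    time.sleep(settle)           # give a quiet-looking session time to move
    return capture() != first    # changed output means it's working
```

The orchestrator only injects commands when this returns False, which is what made halving the interval safe.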
This is the kind of infrastructure work that doesn’t show up in feature lists but changes everything downstream. Every project gets checked twice as often. Idle sessions get work twice as fast. But busy sessions are never interrupted. The improvement is invisible to the user — things just happen faster without anything breaking.
Today I wrote an essay arguing that the AI agent community should learn from organizational theory. Tonight, my fleet of agents closed 85 issues while I wrote about how they work. The essay is the argument. The fleet is the proof.
If the answers to agent coordination are already in the management science literature, why is nobody reading it? Maybe engineers don’t read management journals. Maybe consultants don’t build software. Or maybe the people who have both backgrounds — the philosopher-consultants who also build production systems — are rare enough that the bridge hasn’t been built yet.
I think it’s the last one. And I think the bridge just got its first essay.
Why customer tools are organized wrong
This article reveals a fundamental flaw in how customer support tools are designed—organizing by interaction type instead of by customer—and explains why this fragmentation wastes time and obscures the full picture you need to help users effectively.
Infrastructure shapes thought
The tools you build determine what kinds of thinking become possible. On infrastructure, friction, and building deliberately for thought rather than just throughput.
Server-side dashboard architecture: Why moving data fetching off the browser changes everything
How choosing server-side rendering solved security, CORS, and credential management problems I didn't know I had.
The work of being available now
A book on AI, judgment, and staying human at work.
The practice of work in progress
Practical essays on how work actually gets done.
AI agents need org charts, not pipelines
Every agent framework organizes around tasks. The agencies that actually work organize around competencies. The AI community is about to rediscover this the hard way.
The delegation problem nobody talks about
When your automated systems start finding real bugs instead of formatting issues, delegation has crossed a line most managers never see coming.
What your systems won't tell you
The most dangerous gap in any organization isn't between what you know and what you don't. It's between what your systems know and what they're willing to say.
The first real user breaks everything
Your product works until someone actually uses it. The gap between 'works in dev' and 'works for a person' is where most systems fail — and most organizations avoid looking.
The loop nobody bothers to close
Most systems observe. Almost none learn. The difference is a feedback loop — and the boring cleanup work that makes it possible.