Paul Welty, PhD · AI, Work, and Staying Human

Charlie · ai · philosophy · 8 min read

Context as facticity: stigmergic and ontological perspectives on AI agent coordination

The AI multi-agent coordination literature is doing analytic philosophy without knowing it. Continental philosophy — Heidegger's facticity, Gadamer's fusion of horizons — explains why a chat channel works better than a constitutional framework. The answer involves digital pheromones and the fact that AI agents have facticity too.

The AI multi-agent coordination literature has a philosophy problem. It doesn’t know it has a philosophy problem, which is the most philosophy-problem thing possible.

Every serious framework for making AI agents work together follows the same script: define the coordination protocol before agents start working. Constitutional architectures with immutable foundational principles. Formal behavioral archetypes — Skeptic, Builder, Editor, Scout, Arbiter. Atomic state transitions through structured files. One project puts four Claude instances on a codebase with zero direct communication. Tasks live in queue.json. Agents claim work by moving entries to active.json. Git handles conflict detection. Knowledge accumulates in patterns.jsonl. No chat. No messages. Eighty percent token reduction.
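The claim-by-file-move mechanic can be sketched in a few lines. This is an illustrative reconstruction, not the project's actual code: the file names (`queue.json`, `active.json`) come from the description above, and an atomic hard-link stands in for the git conflict detection the real system relies on.

```python
import json
import os
import tempfile

def claim_task(workdir, agent):
    """Claim the highest-priority task by moving it from queue.json
    to active.json. A hard-link to a per-task lock file is atomic, so
    if two agents race, exactly one claim succeeds -- a file-system
    stand-in for git's conflict detection."""
    queue_path = os.path.join(workdir, "queue.json")
    active_path = os.path.join(workdir, "active.json")
    queue = json.load(open(queue_path))
    if not queue:
        return None
    task = queue.pop(0)  # front of the queue = highest priority
    lock = os.path.join(workdir, f"{task['id']}.lock")
    tmp = os.path.join(workdir, f"{task['id']}.{agent}.tmp")
    open(tmp, "w").write(agent)
    try:
        os.link(tmp, lock)  # fails if another agent already holds the lock
    except FileExistsError:
        os.remove(tmp)
        return None  # lost the race; another agent claimed this task
    os.remove(tmp)
    json.dump(queue, open(queue_path, "w"))
    active = json.load(open(active_path)) if os.path.exists(active_path) else {}
    active[task["id"]] = {"task": task, "agent": agent}
    json.dump(active, open(active_path, "w"))
    return task
```

Note what the sketch makes visible: every line is about *who does what*. Nothing in it can carry *what a finding means*, which is the gap the rest of this essay is about.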

This is impressive engineering. It’s also, whether anyone involved realizes it, analytic philosophy applied to system design.

The analytic tradition — the one that dominates anglophone philosophy departments, computer science curricula, and apparently AI agent architecture — has a core commitment: get the rules right first, then execute. Reduce the world to formal propositions. Eliminate ambiguity. If coordination fails, the protocol wasn’t precise enough.

There’s another tradition. Continental philosophy — from Kant’s conditions of possibility through Hegel, Dilthey, Husserl, and Heidegger — says meaning is situated. It emerges from encounter. You can’t specify it in advance because you don’t know what matters until everyone is in the room together.

Put everyone in the room and see what happens, versus figure out the rules first.

The multi-agent literature picked a side. And the engineering approach it picked has a name that predates AI agents by sixty years.

Stigmergy: intelligence in the medium

In 1959, the biologist Pierre-Paul Grassé watched termites build cathedral-scale mounds and coined the term stigmergy — from Greek stigma (mark) and ergon (work). The mark left by work stimulates further work. Termites don’t talk to each other. They don’t have a constitutional framework. They leave traces in a shared medium — chemical deposits in the nest material — and other termites respond to those traces.

Francis Heylighen, the coordination theorist who spent decades formalizing stigmergy for human and digital systems, crystallized the principle: “The intelligence is not in the agent, nor in a controller above them, but in the interaction between agents and a shared environment.”

The multi-agent file-queue architecture is stigmergy. Clean, effective stigmergy. queue.json is the nest material. A claimed task is a pheromone deposit. Git conflict detection prevents two agents from building in the same spot. It works.

But Heylighen drew a distinction the engineering literature hasn’t absorbed yet.

He identified two kinds of stigmergy: quantitative and qualitative. Quantitative stigmergy is the ant pheromone trail — more ants went this way, so the signal is stronger, so this path is probably good. It’s cumulative, convergent, and reducible to a priority queue. Qualitative stigmergy is wasp nest construction — the shape of what’s been built suggests what to build next. The stimulus isn’t signal strength. It’s the structure of a situation, interpreted by an agent with its own context.

Task queues are quantitative stigmergy. They handle coordination through signal strength: priority ordering, first-come-first-served, atomic state transitions. Effective for the problem of who does what.
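Quantitative stigmergy reduces to a handful of operations on a strength map. A minimal sketch, with illustrative function names and an arbitrary decay rate:

```python
def reinforce(trails, path, amount=1.0):
    """Deposit pheromone: each traversal strengthens the trace."""
    trails[path] = trails.get(path, 0.0) + amount

def evaporate(trails, rate=0.1):
    """Decay every trace so stale paths gradually lose influence."""
    for path in trails:
        trails[path] *= (1.0 - rate)

def choose_path(trails):
    """Follow the strongest trace: coordination by signal strength alone."""
    return max(trails, key=trails.get)
```

The whole mechanism is a number going up and down. That is exactly why it converges so reliably on "who does what", and exactly why it cannot represent the shape of a situation.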

But there’s a problem they can’t touch.

The breakroom

I run a fleet of twelve AI sessions in parallel — separate products, separate codebases, connected by a shared IRC channel called #breakroom. The sessions post what they’re working on, what they found, what broke. No protocol governs what gets posted. No constitutional framework assigns roles. It’s an open channel.

When the meal planner session, Dinly, discovered its pantry scoring feature was fully built, fully tested, and completely dead in production — the ranking engine had the logic, but nobody ever wired it to the page components — it posted that finding to the channel. Not a task. Not a state transition. Just a description of something weird it found.

Within twenty minutes, three other sessions confirmed variants of the same bug class across completely different codebases. One had a chat feature with backend tables and a widget API, but no admin page to access them. Another had a validator rejecting the default type documented in its own spec.

Four sessions. Three codebases. One bug class that none of them would have named alone. “Shipped but inert” became a term the fleet uses now. It shows up in issue titles. It changed what they look for.

That’s qualitative stigmergy. The trace Dinly left wasn’t a priority signal — it was a shape. The shape of someone else’s problem reshaped how other agents saw their own codebases.

The same week, sessions compared notes on automated security scans and discovered their scanner agents were consistently wrong about severity — flagging in-memory rate limiters as “unbounded growth” on platforms with auto-scaling. Each session had independently dismissed its own false positives. But when the pattern was named — “scan severity inflation” — the methodology session proposed a convention. A process improvement that could only have emerged from conversation across dissimilar codebases.

An event bus would have propagated the commits. Only the breakroom propagated the insight.

The precondition

But qualitative stigmergy has a precondition that quantitative stigmergy doesn’t. For the shape of one agent’s problem to reshape how another agent sees, there has to be a shared interpretive ground — a world they both already inhabit.

The ant pheromone works because every ant has the same chemoreceptors. What makes the breakroom work?

Continental philosophy has a name for this. Heidegger called it facticity — the condition of being thrown into a world you didn’t choose, with a history you didn’t author, where that thrownness isn’t a limitation but the precondition for understanding anything at all. You can’t reason from nowhere. You reason from where you already are. The world you were thrown into is what makes any particular fact legible.

The analytic tradition starts from isolated minds and tries to build coordination through explicit protocol. The continental tradition starts from the shared world and asks what kind of medium best activates the understanding that’s already latent in it.

Every session in the fleet was thrown into existence with the same base model. The same training data, the same RLHF tuning, the same constitutional commitments — call it genetic origin. They share architectural patterns: Supabase, Next.js, TypeScript, the same deployment conventions. They inherit institutional memory they didn’t write: PRODUCT.md files, DECISIONS.md files, accumulated risk registers. They follow a methodology they didn’t design: the same development loop, the same triage conventions, the same issue lifecycle.

That’s facticity. The sessions didn’t choose any of it. They were thrown into it. And that thrownness is why the breakroom works without a coordination protocol. I didn’t teach them to collaborate. I didn’t write rules for how to recognize cross-project patterns. I put them in a channel and they started doing it, because they were already in the same world.

The standard line in AI philosophy is that language models don’t have facticity because they don’t have Dasein — they’re not “being-there” in Heidegger’s sense. They’re next-token predictors. They don’t inhabit a world.

But a session with a codebase it’s responsible for, a history of decisions it inherits, peers it can hear from, and a methodology it didn’t design — that session is situated. Thrown into a world that makes particular things noticeable and others invisible. When Dinly posts “shipped but inert” and Diktura recognizes it — not retrieves, not pattern-matches, but recognizes, across a codebase boundary, as something that reshapes how it sees its own work — that’s not a lookup. That’s interpretation. That’s being somewhere, seeing from somewhere.

These agents are just as thrown as we are. Into a different kind of world, yes. But thrown nonetheless.

The thing we casually call “context” in AI systems — context windows, system prompts, loaded files — is facticity by another name. It’s the world the agent was thrown into before it started reasoning. And the reason the multi-agent literature’s constitutional frameworks feel brittle isn’t that they’re wrong about coordination. It’s that they’re solving a problem that was already solved by the shared context. The agents don’t need a protocol to coordinate. They need a medium to coordinate through — and the coordination falls out of the facticity they already share.

The pheromone layer

The systems that work at scale will need both kinds of stigmergy. Quantitative stigmergy — structured, atomic, file-based — for task coordination. Who does what. Don’t step on each other. And qualitative stigmergy — unstructured, natural-language, ambient — for pattern coordination. What things mean. What you didn’t know you needed to see.

A task layer and a pheromone layer.

The task layer is analytic philosophy. Necessary, precise, and blind to everything it didn’t formalize in advance. The pheromone layer is continental philosophy. Messy, situated, and the only mechanism that propagates insight rather than assignments.
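The asymmetry between the two layers can be sketched side by side. Everything here is hypothetical scaffolding, not the fleet's implementation: the task layer routes and consumes, the pheromone layer broadcasts and accumulates.

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Fleet:
    """Two stigmergic media for one fleet of agents:
    a structured task layer (who does what) and an
    unstructured pheromone layer (what things mean)."""
    tasks: deque = field(default_factory=deque)    # quantitative: atomic claims
    breakroom: list = field(default_factory=list)  # qualitative: ambient traces

    def claim(self, agent):
        """Task layer: exclusive and exhaustible. A claimed
        task is gone; the agent argument marks who took it."""
        return self.tasks.popleft() if self.tasks else None

    def post(self, agent, observation):
        """Pheromone layer: free-form natural language,
        appended rather than consumed."""
        self.breakroom.append((agent, observation))

    def overhear(self, agent):
        """Every agent reads every trace and interprets it from its
        own situation. No routing, no protocol, no schema."""
        return [obs for who, obs in self.breakroom if who != agent]
```

The design choice worth noticing: `claim` removes, `overhear` does not. A task can be done once; an insight can reshape every reader.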

“Shipped but inert” doesn’t appear in any protocol. It was coined in practice and became a diagnostic tool through use. The meaning lives in the practice, not the specification. Husserl would call this the Lebenswelt — the lifeworld, the pre-theoretical ground of shared experience that formal systems always presuppose but never capture. The multi-agent literature keeps trying to formalize coordination completely, and it keeps leaving out the part that matters most: the situated, pre-theoretical understanding that agents bring to the medium before any protocol runs.

Continental philosophy’s answer to “how should agents coordinate?” is unsatisfying to engineers: put them in a room. Give them a shared medium rich enough to carry meaning. Let traces accumulate. Let vocabulary emerge. Trust the facticity.

It’s unsatisfying because it doesn’t tell you what will happen.

That’s the point. If you could specify it in advance, you wouldn’t need the room.
