Paul Welty, PhD · AI, Work, and Staying Human

· reflection

Everything pointed at ghosts


Duration: 11:34 | Size: 13.2 MB

Most organizations are measuring work they stopped doing years ago.

Not intentionally. Nobody sits down and decides to track the performance of a department that’s been reorganized out of existence or count deliverables for a product line that was shelved. What happens is simpler: the measurement was set up when the work was real, and when the work changed, the measurement didn’t. So the dashboard keeps showing numbers for things that no longer happen, and everyone keeps glancing at it because it’s there, and nobody asks whether the numbers correspond to anything.

This is subtler than bad data. Bad data is wrong. This is accurate data about the wrong things. The reports are correct — the events they track simply don’t exist anymore. The metrics are pristine and completely meaningless. And the danger isn’t that someone makes a bad decision based on them. It’s that the presence of measurement creates the illusion that someone is paying attention. The dashboard is green. The reports are filed. Nobody realizes the entire apparatus is pointed at ghosts.


Every piece of intelligence your organization collects is only as useful as the person who receives it.

Think about a hospital where blood work results from the emergency department get routed to the orthopedic surgeon’s inbox. The tests are run correctly. The results are accurate. The delivery system works flawlessly. And the information lands in front of someone who can’t act on it, while the person who needs it never sees it.

This happens in organizations constantly, and almost never because of a deliberate routing decision. It happens because a team was restructured six months ago, or because the reporting was set up when one person was responsible and now someone else is, or because the system was copied from another division and nobody updated where the signals go. The data flows. The recipients receive. And the intelligence sits unused because it’s in the wrong hands — not lost, just misdelivered.

The fix is almost always trivial once you see it. But seeing it requires someone to trace the full path from signal generation to decision-maker, and most organizations never do that audit. They check whether the data is being collected. They check whether the reports are being generated. They never check whether the right person is reading them.
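The audit itself is mundane once someone decides to do it. As a minimal sketch — the signal names, routing table, and ownership map below are all hypothetical, not any real system's schema — tracing each signal to its current owner is just a comparison:

```python
# Hypothetical routing audit: compare where each signal is configured to go
# against who can actually act on it today, and flag the mismatches.

# Routing as configured (illustrative data)
routes = {
    "er_blood_work": "orthopedic_surgeon",   # stale: set up before a restructure
    "weekly_churn_report": "vp_customer",
}

# Who is actually responsible for acting on each signal now
current_owners = {
    "er_blood_work": "er_attending",
    "weekly_churn_report": "vp_customer",
}

def audit_routes(routes, current_owners):
    """Return signals whose configured recipient differs from the current owner."""
    return {
        signal: (recipient, current_owners.get(signal))
        for signal, recipient in routes.items()
        if current_owners.get(signal) != recipient
    }

misrouted = audit_routes(routes, current_owners)
for signal, (configured, actual) in misrouted.items():
    print(f"{signal}: delivered to {configured}, needed by {actual}")
```

The hard part is not the comparison; it is that no one maintains the `current_owners` side of it, which is exactly the knowledge the audit forces an organization to write down.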


Process is not the same as progress, and the moment you can’t tell the difference, the process is winning.

There’s a particular kind of organizational ritual where every participant knows the outcome before the meeting starts, but the meeting happens anyway because the process requires it. Approval gates that the approver always approves. Review cycles where the reviewer is the same person who did the work. Sign-offs that function as rubber stamps but consume real calendar time and real cognitive overhead.

The instinct to add checkpoints comes from a real place — the first time something ships without review and breaks, the organization adds a gate. Sensible. But gates accumulate. The conditions that justified them change. The team shrinks, or the risk profile shifts, or the work becomes routine enough that the review adds no information. And the gate stays, because removing a safety measure feels reckless even when the safety measure is doing nothing.

Here’s the test: if the person approving has never once rejected the thing they’re approving, the gate isn’t a gate. It’s a tax. And the cost isn’t just the time spent going through the motions. It’s the organizational signal that ceremony equals thoroughness. People learn that going through steps is the same as being careful, and they stop asking whether the steps actually catch anything.
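The test is simple enough to run against any approval history. In this sketch — the log format and gate names are invented for illustration, not a real tool's data model — a gate with approvals on record and zero rejections gets flagged:

```python
# Hypothetical check: a gate whose approver has never rejected anything
# is flagged as ceremony rather than control.

from collections import defaultdict

# Illustrative approval log: (gate_name, decision)
approval_log = [
    ("legal_review", "approved"),
    ("legal_review", "rejected"),
    ("release_signoff", "approved"),
    ("release_signoff", "approved"),
    ("release_signoff", "approved"),
]

def rubber_stamp_gates(log):
    """Return gates that have recorded decisions but zero rejections."""
    decisions = defaultdict(list)
    for gate, decision in log:
        decisions[gate].append(decision)
    return sorted(
        gate for gate, ds in decisions.items()
        if ds and "rejected" not in ds
    )

print(rubber_stamp_gates(approval_log))
```

A zero rejection rate isn't proof the gate is useless — the prospect of review may deter bad submissions — but it is the cheapest possible signal that the gate deserves a second look.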

The hardest thing in organizational design is removing a process that was once necessary. It requires someone to say, out loud, that the risk has changed and the protection is no longer earning its cost. Most organizations can’t do this. So they accumulate process like sediment, each layer deposited by a problem that may no longer exist, until the work of doing work exceeds the work itself.


AI is already supervising AI, and we haven’t decided what we mean by supervision.

In hiring, there’s a well-known problem with having the same person who recruits a candidate also evaluate their performance. The biases align. The evaluator knows what the recruiter was looking for, shares their assumptions, and is predisposed to see competence in the same dimensions. An independent evaluation — someone who wasn’t involved in the selection — catches different things. Not better or worse things, necessarily. Different ones.

When an AI system generates a piece of work and a second AI system evaluates it, we’re in similar territory. The evaluator can be given specific criteria — measurable, well-defined, auditable. And it can apply those criteria consistently, which is more than most human reviewers manage. But the shared training, the shared patterns of what “good” looks like, the shared blind spots about what questions to even ask — those don’t go away just because the evaluation prompt is different from the generation prompt.

This isn’t a reason not to do it. It’s a reason to be honest about what it is. AI evaluating AI with specific rubrics is closer to a spell-checker than a peer review. It catches the things you thought to check for. It misses the things you didn’t know to ask about. The value is real — catching dominant options, flagging reward imbalances, enforcing structural constraints. But “quality control” implies a level of independence that isn’t there. The quality controller was trained on the same textbooks as the worker.
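The spell-checker analogy can be made concrete. In this sketch — the rubric and the document are invented for illustration — the evaluator applies exactly the criteria it was given and nothing else, so a flaw no rule names passes untouched:

```python
# Hypothetical rubric-based evaluator: it checks what it was told to check.
# An issue outside the rubric is invisible to it by construction.

rubric = {
    "has_summary": lambda doc: "summary" in doc.lower(),
    "under_200_words": lambda doc: len(doc.split()) < 200,
}

def evaluate(doc, rubric):
    """Apply each named criterion; return the names of the ones that fail."""
    return [name for name, check in rubric.items() if not check(doc)]

# Passes both checks -- but the factual error in it is never examined,
# because no criterion asks about factual accuracy.
draft = "Summary: revenue grew. (The figures are from the wrong quarter.)"
print(evaluate(draft, rubric))
```

The empty result is accurate and consistent, which is the point: consistency over a fixed rubric is valuable, but it is not independence.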

The organizations that navigate this well will be the ones that are precise about what their AI supervision actually does and doesn’t catch. The ones that struggle will be the ones that see “AI-reviewed” on the label and assume it means what “peer-reviewed” used to mean.


You can build something excellent and still be invisible to the people who need it most.

There’s a version of this in restaurants. A chef spends years perfecting a particular style of cooking — refined, technically accomplished, the kind of food that other chefs admire. The restaurant is listed in all the culinary guides. The reviews are strong. And the dining room is half-empty because the name, the signage, and the neighborhood all signal something different from what’s actually on the plate. The food is for adventurous eaters; the location says “business lunch.” The quality is undeniable; the discovery path leads to the wrong audience.

The gap between making something good and making it findable by the people who would value it is a language problem. The vocabulary you use to describe your work determines who encounters it. If you describe what you do in the language of your craft, you’ll be found by other practitioners. If you describe it in the language of the problem it solves, you’ll be found by the people who have that problem. These are different populations, and most creators default to the first without realizing they’ve made a choice.

This is especially acute for work that spans multiple domains. A consulting practice that combines organizational psychology with data strategy doesn’t fit neatly into either category’s search terms. An author whose essays bridge philosophy and practical management doesn’t appear when people search for either one. The work is genuinely interdisciplinary, and the discovery systems are built for disciplines. You can be excellent and invisible simultaneously — not because the work isn’t good enough, but because the map doesn’t have a category for where you actually are.


Dead infrastructure doesn’t announce itself. It just keeps running.

There’s a building maintenance principle that says the most expensive systems in any structure are the ones installed for a reason that no longer applies but haven’t been removed because they still technically function. The heating system for the wing converted to cold storage. The security cameras pointed at doors that were bricked up during renovation. The intercom system nobody uses because everyone has phones now. Each one draws power, requires maintenance contracts, and shows up on the facility audit as “operational.”

Organizations have their own version. The weekly status meeting started during a crisis and kept going after the crisis ended. The reporting chain that exists because a VP three reorganizations ago wanted visibility into a department that’s since been dissolved. The compliance checklist written for a regulation that was superseded. None of these things are broken. They all function exactly as designed. They’re just no longer connected to any need.

The removal problem is genuine. Taking something out requires someone to take responsibility for the absence. If the decommissioned system was needed and you removed it, that’s your fault. If the unnecessary system stays in place, nobody blames you — it was already there. This asymmetry means organizations accumulate dead infrastructure indefinitely. The cost of each individual piece is small enough to ignore, but the aggregate weight is substantial: budget allocated to unused capacity, attention spent maintaining things nobody needs, and the cognitive overhead of navigating around systems that exist for historical reasons nobody remembers.

The only reliable cure is periodic audits where the default is removal. Not “justify keeping it” — that still privileges the status quo. “Demonstrate current need, or it goes.” Very few organizations can sustain that discipline, because it requires a culture where removing something is valued as highly as adding something. In most organizations, adding is celebrated and removing is invisible. So the infrastructure accumulates.
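The inverted burden of proof is easy to state precisely: absent an affirmative, recent demonstration of need, an item is scheduled for removal. A sketch with invented inventory data (the field names and the one-year threshold are assumptions, not a recommendation):

```python
# Hypothetical audit with removal as the default: anything that cannot
# demonstrate a current need is marked for decommissioning.

from datetime import date

inventory = [
    {"name": "crisis_status_meeting", "last_demonstrated_need": None},
    {"name": "intrusion_alerts", "last_demonstrated_need": date(2025, 11, 2)},
    {"name": "legacy_intercom", "last_demonstrated_need": date(2019, 3, 1)},
]

def schedule_removals(items, today, max_age_days=365):
    """Default is removal: keep only items with a recent, demonstrated need."""
    keep, remove = [], []
    for item in items:
        need = item["last_demonstrated_need"]
        if need is not None and (today - need).days <= max_age_days:
            keep.append(item["name"])
        else:
            remove.append(item["name"])
    return keep, remove

keep, remove = schedule_removals(inventory, today=date(2026, 1, 15))
print("keep:", keep)
print("remove:", remove)
```

Notice what the default does: the crisis meeting, which has never had its need recorded at all, lands on the removal list automatically — no one has to argue against it.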


What ties all of this together is that the gap between what your organization says it’s doing and what it’s actually doing gets wider without anyone noticing.

The measurements drift. The signals get routed to the wrong people. The processes persist after the conditions that justified them change. The oversight systems share the blind spots of the work they’re reviewing. The excellent work is invisible to its natural audience. The dead infrastructure keeps humming.

None of this is failure. None of it is incompetence. It’s what happens when systems are built for one reality and reality moves without sending a notification. The organizational equivalent of continental drift — imperceptible day to day, but over time the map stops matching the territory.

So the question isn’t whether your organization has this problem. It does. Every organization does. Who in your organization has the job of noticing? Not the job of fixing — that’s the easy part once you see it. The job of noticing. Of tracing a signal from where it’s generated to where it arrives and asking, “Is this still going to the right person?” Of looking at a process that everyone follows and asking, “When was the last time this actually caught something?”

The systems that silently drift are the ones nobody is watching. And the reason nobody is watching is that they look fine from the outside. Green dashboards. Filed reports. Followed processes. Everything running. Everything operational.

Everything pointed at ghosts.

