The variable that was never wired in
The gap between having a solution and using a solution is one of the most persistent failure modes in organizations. You see the escaped variable. You see the risk register. You assume the work is done.
The most valuable data about your customers is what they choose not to do. Not the clicks, not the purchases, not the engagement metrics. The skips. The things they saw, considered, and passed over. Most analytics systems don’t track this because it’s expensive and philosophically uncomfortable. You have to admit that your product is showing people things they don’t want.
Eclectis, a content intelligence platform, implemented skip tracking yesterday. The system already tracked what users voted on. But between “article appears in your feed” and “user votes on it,” there’s a vast silent middle: articles that rendered on screen and were never touched. The user saw the headline, maybe read the first line, and moved on. That’s not nothing. That’s a judgment call the user made without telling you about it.
The implementation is straightforward. Fire an impression event when an article renders. Run a background diff between rendered articles and voted articles. The difference is the skip set. Feed the skip set back into the learning engine alongside explicit votes. But the philosophical shift is the more interesting part. Most recommendation systems optimize for engagement — what did the user click? Skip tracking optimizes for alignment — what is the user actually trying to tell us by not clicking? Those are different questions with different answers.
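The impression-diff described above can be sketched in a few lines. This is a minimal illustration, not Eclectis's actual pipeline; the event shapes and names (`impressions`, `votes`, `derive_skips`) are assumptions for the sake of the example.

```python
# Minimal sketch of skip derivation: the skip set is the set difference
# between articles rendered for a user and articles the user voted on.

def derive_skips(impressions, votes):
    """Return impressions that never received a vote (the skip set)."""
    voted = {(v["user_id"], v["article_id"]) for v in votes}
    return [
        imp for imp in impressions
        if (imp["user_id"], imp["article_id"]) not in voted
    ]

# One user saw three articles and voted on one of them.
impressions = [
    {"user_id": 1, "article_id": "a"},
    {"user_id": 1, "article_id": "b"},
    {"user_id": 1, "article_id": "c"},
]
votes = [{"user_id": 1, "article_id": "b"}]

skips = derive_skips(impressions, votes)
# Articles "a" and "c" were seen, considered, and passed over —
# the silent middle the learning engine now gets to hear.
```

In production this diff would run as a background job against event logs rather than in-memory lists, but the set arithmetic is the whole idea.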
There’s a parallel in how organizations handle security vulnerabilities. Yesterday we found a command injection bug in a deployment script. The fix had already been written. Someone had created an escaped variable that properly sanitized the input. They just never used it. The raw, unescaped input was still being interpolated into a shell command. The fix existed. The vulnerability persisted. The code review passed. The tests passed. Nobody noticed that the variable sitting right there, with the correct name, doing the correct thing, was never actually wired in.
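The shape of that bug is worth seeing concretely. What follows is a hypothetical reconstruction, not the actual deployment script: a properly escaped variable is created on one line and the raw input is interpolated on the next.

```python
import shlex

def build_deploy_cmd(target: str) -> str:
    safe_target = shlex.quote(target)      # the fix: written, correct...
    return f"rsync -a ./build/ {target}"   # ...and never wired in

def build_deploy_cmd_fixed(target: str) -> str:
    safe_target = shlex.quote(target)
    return f"rsync -a ./build/ {safe_target}"  # the one-word change

# An attacker-controlled target smuggles a second command past the shell.
malicious = "host:/srv; rm -rf /"
vulnerable = build_deploy_cmd(malicious)      # payload passes through raw
patched = build_deploy_cmd_fixed(malicious)   # payload stays quoted
```

Note that a linter would flag `safe_target` as unused in the first function, which is exactly the kind of signal that would have caught this in review.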
This happens constantly. Not just in code. A company writes a risk mitigation plan and files it. The plan exists. The mitigation doesn’t. A team creates a decision log and never reads it. The documentation exists. The institutional memory doesn’t. The gap between having a solution and using a solution is one of the most persistent failure modes in organizations, and it’s almost invisible because the artifact of the solution is present. You see the escaped variable. You see the risk register. You assume the work is done.
Which brings me to governance documents. Yesterday we added RISKS.md as the third living document alongside PRODUCT.md (what we’re building) and DECISIONS.md (choices we made and why). The risk register isn’t new in concept. Every project management methodology has one. What’s different here is the integration model. Every skill in the system that touches the codebase can write risks. Every reporting skill reads them. The scout discovers a vulnerability and files it as a risk. The prep pass checks the register before shaping work that touches risky areas. The status report surfaces open risks every morning. The risk register isn’t a document someone fills out quarterly and forgets. It’s a living nerve that the system continuously reads and writes.
The test for whether a governance document is alive or dead is simple: does anything break if you delete it? If PRODUCT.md disappeared, the prep pass would lose its reference for scope decisions. If DECISIONS.md disappeared, scouts would file issues that duplicate solved problems. If RISKS.md disappeared, the status report would have no risk section. That’s what alive means. The document is a dependency, not a decoration.
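The delete test can even be made mechanical. A hedged sketch, assuming a simple registry of which automated consumers read each document; the document names match the essay, but the consumer names are illustrative, not a real system's API.

```python
# A governance document is "alive" if deleting it would break a consumer.
# Docs with no registered consumer are decoration, not dependency.

CONSUMERS = {
    "PRODUCT.md": ["prep-pass scope check"],
    "DECISIONS.md": ["scout duplicate filter"],
    "RISKS.md": ["morning status report"],
}

def dead_documents(doc_names):
    """Return docs that nothing reads: delete them and nothing breaks."""
    return [name for name in doc_names if not CONSUMERS.get(name)]

# RUNBOOK.md exists but has no consumer — by this test, it is dead.
docs = ["PRODUCT.md", "DECISIONS.md", "RISKS.md", "RUNBOOK.md"]
```

The registry itself has to be kept honest, of course, but that is a smaller problem than noticing which documents quietly stopped mattering.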
Authexis did something yesterday that most engineering teams never do. They took a test suite with 280 tests and 25 failures and, in a single day, brought it to 326 tests with zero failures. That’s not just adding tests. That’s fixing every broken test while adding 46 new ones at the same time. The reason this matters isn’t the numbers. It’s what the numbers enable.
A failing test suite is worse than no test suite. No tests means you know you’re flying blind. Twenty-five failing tests means you’re not sure which failures are real and which are legacy noise. You stop trusting the suite. You stop running it before deploys. You start making changes without checking. The confidence infrastructure erodes, and you don’t notice because the tests are still there. They’re just not telling you the truth.
Going from 25 failures to zero is a trust reset. Every test means something again. When one goes red tomorrow, you know it’s a real problem, not the same stale mock from three months ago. That trust is worth more than the tests themselves. It’s the difference between “the tests pass” meaning “the code works” versus “the tests pass” meaning “the known broken tests are still broken.”
Five projects sat completely idle yesterday. Not because they have no work. Three of them have full queues. One has seven ready-to-execute issues and an orchestrator that’s been running continuously without dispatching any of them. Another has been blocked for four consecutive days on a single decision: whether to squash-merge or incrementally merge sixteen feature branches.
The pattern is worth studying. The projects that shipped ninety-six issues are the ones where the decisions had already been made. The scope was clear. The architecture was settled. The test suite was trusted. The projects that shipped zero are the ones where a human decision is required and hasn’t arrived. The machine hits a fork and waits. It doesn’t guess. It doesn’t escalate. It just waits. And nobody notices because the orchestrator keeps polling and the session looks active.
This is the organizational equivalent of a blocked process in an operating system. The CPU is available. The memory is allocated. The task is ready. But it’s waiting on a lock that another process holds, and that process is a human who doesn’t know they’re holding it. The system doesn’t deadlock. It just stops making progress, quietly, without alarm.
What keeps surfacing across all of this is what happens when machines are faster than humans at executing but still dependent on humans for directing. The fleet closed nearly four hundred issues in four days. Five projects ran out of things to build. The backlog wasn’t artificial. The velocity wasn’t inflated. The work was real and it’s done. What remains is taste, direction, and the willingness to make calls that can’t be automated.
Maybe the most important infrastructure we need isn’t faster execution or smarter orchestration. Maybe it’s a system that notices when a human is the bottleneck and makes that visible before four days of idle accumulate. Not as a nag. Not as a Pushover notification every thirty minutes. A clear, simple signal: this project is waiting on you, and here’s specifically what it needs.
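A first cut of that signal might look like the sketch below. Everything here is an assumption: the field names (`ready_issues`, `last_dispatch`, `blocking_decision`), the one-day threshold, and the message format are illustrative, not an existing orchestrator's API.

```python
from datetime import datetime, timedelta

def stalled_projects(projects, now, threshold=timedelta(days=1)):
    """Flag projects with ready work but no recent dispatch: a human
    is probably holding the lock. Returns one plain message per project."""
    flagged = []
    for p in projects:
        idle = now - p["last_dispatch"]
        if p["ready_issues"] > 0 and idle >= threshold:
            flagged.append(
                f"{p['name']}: {p['ready_issues']} issues ready, "
                f"idle {idle.days}d, waiting on: {p['blocking_decision']}"
            )
    return flagged

now = datetime(2025, 1, 10)
projects = [
    {"name": "authexis", "ready_issues": 0,
     "last_dispatch": now, "blocking_decision": None},
    {"name": "deploy-fleet", "ready_issues": 7,
     "last_dispatch": now - timedelta(days=4),
     "blocking_decision": "squash vs. incremental merge of 16 branches"},
]
for line in stalled_projects(projects, now):
    print(line)
```

The point is the shape of the output: not a recurring alert, but a single sentence naming the project, the size of the queue behind it, and the specific decision it is waiting on.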