Paul Welty, PhD
AI, Work, and Staying Human


Your empty queue isn't a problem

Dropping a column from a production database is the organizational equivalent of admitting you were wrong. Five projects cleared their queues on the same day, and the bottleneck that emerged wasn't execution — it was taste.

Duration: 7:42 | Size: 8.8 MB

Dropping a column from a production database is the organizational equivalent of admitting you were wrong. Not catastrophically wrong. Just regular wrong. You added a field that seemed necessary, built systems around it, and eventually realized the data model would be cleaner without it. Most teams leave the column there forever. It’s not hurting anything, they say. The queries still work. Nobody wants to trace every read and write through the codebase and methodically remove them.

Authexis dropped a column yesterday. stage_status, a field that tracked content pipeline state, was eliminated in a seven-issue arc. Remove all writes. Migrate all reads. Drop the column. The content pipeline now runs on a single status field with four clean states. It sounds like housekeeping. It’s actually a statement about what you believe your data model should look like versus what you’re willing to tolerate.
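The seven-issue arc compresses into three phases: stop writing, stop reading, then drop. A minimal sketch of that sequence against a toy table (only the column name stage_status comes from the post; the table shape and SQLite recipe are assumptions):

```python
import sqlite3

# Toy stand-in for a content table. Only the column name stage_status
# comes from the post; everything else here is illustrative.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE content (id INTEGER PRIMARY KEY, status TEXT, stage_status TEXT)")
db.execute("INSERT INTO content (status, stage_status) VALUES ('published', 'done')")

# Step 1: remove all writes -- new rows stop touching the dead column.
db.execute("INSERT INTO content (status) VALUES ('draft')")

# Step 2: migrate all reads -- queries consult only the surviving field.
rows = db.execute("SELECT status FROM content ORDER BY id").fetchall()

# Step 3: drop the column. The portable SQLite recipe is copy-and-rename,
# since older builds lack ALTER TABLE ... DROP COLUMN.
db.execute("CREATE TABLE content_new (id INTEGER PRIMARY KEY, status TEXT)")
db.execute("INSERT INTO content_new SELECT id, status FROM content")
db.execute("DROP TABLE content")
db.execute("ALTER TABLE content_new RENAME TO content")

columns = [row[1] for row in db.execute("PRAGMA table_info(content)")]
```

The ordering matters: writes go first so no new data depends on the column while reads are being migrated, and the drop comes last, when nothing can break.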

The reason most dead columns survive isn’t technical. It’s emotional. Removing something means acknowledging that the person who added it, possibly you, made a choice that turned out to be wrong. In organizations, this discomfort compounds. The column was added in a sprint. It was reviewed. It was approved. Removing it feels like retroactively questioning the judgment of everyone involved. So it stays, and the data model accumulates scar tissue.

A different kind of courage showed up in Dinly yesterday. Eighty-one hardcoded color values across twenty-six files. Every one of them a dark-theme assumption baked directly into a component. Someone decided early on that the app would be dark, then wrote #1a1a2e directly into the CSS instead of using a variable. By file twenty-six, the decision was no longer a decision. It was infrastructure. Changing it meant touching every visual surface in the application.

They changed it. All eighty-one occurrences, converted to a proper light-and-dark theme system. This is the kind of work that doesn’t show up in feature announcements. Nobody tweets about converting color variables. But it’s the difference between an application that can evolve and one that can’t. Every hardcoded value is a frozen decision, and every frozen decision is a constraint on what you can do next. The theme conversion didn’t add a feature. It removed eighty-one constraints.
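Mechanically, this kind of conversion is a mapping from frozen hex values to theme variables. A sketch of what the rewrite might look like (only #1a1a2e comes from the post; the variable names and second color are invented for illustration):

```python
import re

# Map of hardcoded hex values to theme variables. Only #1a1a2e comes
# from the post; the variable names here are hypothetical.
THEME_VARS = {
    "#1a1a2e": "--surface-bg",
    "#e0e0e0": "--surface-fg",
}

def to_theme_vars(css: str) -> str:
    """Replace each known hardcoded hex color with a var() reference."""
    def swap(match: re.Match) -> str:
        var = THEME_VARS.get(match.group(0).lower())
        return f"var({var})" if var else match.group(0)
    return re.sub(r"#[0-9a-fA-F]{6}\b", swap, css)

print(to_theme_vars(".card { background: #1A1A2E; border: 1px solid #123456; }"))
# .card { background: var(--surface-bg); border: 1px solid #123456; }
```

Once every surface reads from the variable instead of the literal, a light theme is just a second set of variable definitions rather than another eighty-one edits.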

Something interesting happened with accessibility work across the fleet. Prakta built a reusable radio group hook with arrow key navigation. Scholexis added enumeration protection to its password reset flow. Dinly added error feedback to four forms that had been silently swallowing failures. None of these projects coordinated. None of them were responding to an accessibility audit or a compliance deadline. They found these gaps through automated scouting and fixed them.

When accessibility shows up as a compliance checkbox, it’s grudging. When it shows up independently across four projects because the scouting systems surface it as a natural finding, something has changed in the culture. The projects aren’t being told to care about accessibility. They’re noticing that not caring leaves gaps. That’s a different relationship with quality. Not “we have to do this” but “this is obviously missing.”

The same pattern showed up with the password reset flow in Scholexis. Rate limiting on login attempts. Enumeration protection so attackers can’t probe which email addresses have accounts. These aren’t features users asked for. They’re features you build because you’ve internalized that every authentication endpoint is a target. Nobody filed a ticket saying “please add enumeration protection.” The scout found the gap, the prep shaped the fix, and the execution loop closed it. Security and accessibility are converging into the same category: things the system notices are missing, rather than things humans remember to request.

Five projects cleared their active queues on the same day. Eclectis shipped a full podcast feature, an AI provider gateway, and an 81-case Playwright walkthrough, then ran out of things to build. Dinly cleared its entire issue queue to zero. Prakta’s active list is empty. Scholexis has four remaining issues, all requiring human input. Synaxis-h has been in maintenance mode since its twelfth scout pass found nothing.

The organizational response to an empty queue is usually panic. The team looks idle. The velocity chart flatlines. Someone starts inventing work to fill the gap. But an empty queue after sustained high output isn’t a problem. It’s a signal. It means the backlog was real work, not artificial padding, and the team actually finished it. The correct response is to make the decisions that were deferred. LLM budget choices. Merge strategies. Deployment configurations. The queue is empty because machines did everything machines can do. What’s left is judgment.

Skillexis is the counterexample. It has seven ready-for-dev issues sitting idle. The orchestrate loop ran all day and didn’t advance any of them. The grind queue is full and nothing is grinding. Somewhere between “ready” and “executing,” the machinery stalled. Meanwhile, one issue has been in-progress for multiple days without visible movement. This is what it looks like when automation reaches a state it can’t resolve and doesn’t escalate.

The difference between Dinly clearing its queue to zero and Skillexis sitting frozen with a full queue isn’t about the code. It’s about whether the system knows what to do next. Dinly’s work was concrete: convert these colors, parallelize these queries, add these tests. Skillexis’s remaining work requires decisions about architecture (RLS workspace scoping) and infrastructure (engine test setup) that the automated pipeline can’t make. The machine is waiting for a human who isn’t looking.

Eclectis shipped something yesterday worth pausing on. They built per-user podcast RSS feeds with iTunes extensions. A user’s briefing can now be read aloud, published to a personal feed, and listened to on any podcast app. The content intelligence platform is generating audio content for individual users. This wasn’t on a roadmap. It came from the intersection of TTS availability, briefing infrastructure, and the observation that reading isn’t the only way to consume analysis.

The best features aren’t always planned. They’re assembled from capabilities that already exist. The TTS was there. The briefings were there. The RSS infrastructure was there. Someone noticed you could wire them together and create something that didn’t exist before: a personalized audio intelligence feed. That’s not innovation in any grand sense. It’s attention. Noticing that the pieces are already on the table and wondering what they look like connected.
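A per-user podcast feed is mostly standard RSS 2.0 plus Apple's itunes: namespace extensions. A sketch of what such a feed might look like, using the episode duration and size from this post's own metadata (the titles, URLs, and author string are placeholders, not Eclectis's actual feed):

```python
import xml.etree.ElementTree as ET

# Apple's podcast extension namespace -- this URI is the real one.
ITUNES = "http://www.itunes.com/dtds/podcast-1.0.dtd"
ET.register_namespace("itunes", ITUNES)

def build_feed(user: str, episodes: list[dict]) -> str:
    """Assemble a minimal per-user RSS 2.0 feed with iTunes extensions."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = f"Briefings for {user}"
    ET.SubElement(channel, f"{{{ITUNES}}}author").text = "Example Platform"
    for ep in episodes:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = ep["title"]
        # The enclosure is what turns an RSS item into a podcast episode.
        ET.SubElement(item, "enclosure", url=ep["url"],
                      type="audio/mpeg", length=str(ep["bytes"]))
        ET.SubElement(item, f"{{{ITUNES}}}duration").text = ep["duration"]
    return ET.tostring(rss, encoding="unicode")

feed = build_feed("alice", [
    {"title": "Morning briefing", "url": "https://example.com/a.mp3",
     "bytes": 8_800_000, "duration": "7:42"},
])
```

That is the "pieces already on the table" point in miniature: the audio file, the briefing title, and a few dozen lines of XML assembly are all it takes for any podcast app to subscribe.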

Ninety-seven issues closed yesterday. A hundred and forty-three commits across ten projects. And the most important number might be five — the number of projects that ran out of machine work and are now waiting on human judgment.

What does a team do when the machines finish first? When the backlog is real, the velocity is genuine, and the queue hits zero not because work was descoped but because it was actually done? Most planning systems assume the bottleneck is execution. But execution is the part we’ve gotten very good at automating. The bottleneck that’s emerging is taste. Direction. The willingness to say “this is what we want” rather than “here’s more stuff to build.”

Your empty queue isn’t a problem to solve. It’s a question to answer.

Why customer tools are organized wrong

This article reveals a fundamental flaw in how customer support tools are designed—organizing by interaction type instead of by customer—and explains why this fragmentation wastes time and obscures the full picture you need to help users effectively.

Infrastructure shapes thought

The tools you build determine what kinds of thinking become possible. On infrastructure, friction, and building deliberately for thought rather than just throughput.

Server-side dashboard architecture: Why moving data fetching off the browser changes everything

How choosing server-side rendering solved security, CORS, and credential management problems I didn't know I had.

The work of being available now

A book on AI, judgment, and staying human at work.

The practice of work in progress

Practical essays on how work actually gets done.

Designed to learn, built to ignore

The most dangerous organizational failures don't throw errors. They look fine, return results, and quietly stay frozen at the moment of their creation.

The variable that was never wired in

The gap between having a solution and using a solution is one of the most persistent failure modes in organizations. You see the escaped variable. You see the risk register. You assume the work is done.

When the queue goes empty

Most products don't fail at building. They fail at the handoff between building and becoming real. What happens when the code is done and the only things left are judgment calls?
