The removal tax
The most productive thing you can do with a product is take features away. Eighty-nine issues closed across eight projects, and the hardest lesson came from a pipeline that ran perfectly and produced nothing.
Duration: 7:10 | Size: 6.57 MB
The most productive thing you can do with a product is take features away. That sounds wrong. It sounds like the kind of contrarian thing consultants say to seem clever. But I watched it happen across an entire portfolio yesterday, and the results were unambiguous.
Authexis, the content intelligence platform I’ve been building, hit a milestone I’d been working toward for weeks: v2.0, the product simplification release. Twenty-one issues, all closed in a single day. And every single one of them was a removal. Content types, gone. Replaced with a simple length selector and a style dropdown. The multi-stage pipeline with its outline stage, script stage, review stage — collapsed into four states. Reactions as a content type, retired entirely. Citations and bibliography, pulled out of the pipeline. The settings page for managing content types, deleted. The filter pills for browsing by type, deleted.
The arithmetic is worth paying attention to. We removed roughly thirty features and the product got better. Not “leaner” in some abstract minimalist sense. Actually better. Users can now create content by choosing “short” or “long” and picking a tone, instead of navigating a taxonomy of content types that mapped to an internal pipeline they didn’t care about.
The architectural change underneath tells you something. The engine prompts used to branch on content type slugs, a conditional maze that grew with every new type someone invented. Now they branch on length tiers and style metadata. Two dimensions instead of fifteen categories. The code got simpler and the output got more flexible at the same time. That’s the sign you’ve found the right abstraction.
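To make the shape of that change concrete, here is a hedged sketch of the two-dimension idea. The type names, guidance strings, and `buildPrompt` function are all hypothetical, not Authexis’s actual code; the point is that two small lookups replace a per-content-type conditional maze.

```typescript
// Hypothetical sketch: prompt construction branching on two orthogonal
// dimensions (length tier, style) instead of fifteen content-type slugs.
type LengthTier = "short" | "long";
type Style = "casual" | "formal" | "playful";

interface PromptOptions {
  length: LengthTier;
  style: Style;
}

// One lookup table per dimension, instead of one branch per content type.
const lengthGuidance: Record<LengthTier, string> = {
  short: "Keep it under 200 words.",
  long: "Write 800-1200 words with section headings.",
};

function buildPrompt(topic: string, opts: PromptOptions): string {
  return [
    `Write about: ${topic}`,
    lengthGuidance[opts.length],
    `Use a ${opts.style} tone.`,
  ].join("\n");
}
```

Adding a new style or length tier here means adding one table entry, not threading a new slug through every branch of the engine.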
But nobody warns you about the other side. When you aggressively remove features, you create a different kind of instability. Five deploy failures hit in a single afternoon. The cause was trivially stupid: an orphaned bookmarks page was still importing action functions that no longer existed. The feature was gone, the page that depended on it was still there, and the build broke.
This is the removal tax. When you add a feature, you know where it touches. When you remove one, you discover where it was touching after the fact. The deploy failures were a four-alarm fire that turned out to be a one-line fix, but they revealed something: subtraction requires more discipline than addition. You need to trace the dependency graph backward, and most codebases don’t make that easy.
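One way to pay the removal tax up front is to search for surviving importers before deleting a module. This is an illustrative sketch, not the tooling used here; the function name, the in-memory `files` map, and the module path convention are all assumptions.

```typescript
// Hedged sketch: naively find files that still import from a module
// you are about to delete. Real projects would walk the filesystem or
// use the bundler's dependency graph; this shows the idea.
function findDanglingImporters(
  files: Record<string, string>, // path -> source text
  deletedModule: string          // e.g. "@/actions/bookmarks" (hypothetical path)
): string[] {
  // Escape regex metacharacters in the module path, then match
  // `from "<module>"` or `from '<module>'`.
  const escaped = deletedModule.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  const pattern = new RegExp(`from\\s+['"]${escaped}['"]`);
  return Object.entries(files)
    .filter(([, src]) => pattern.test(src))
    .map(([path]) => path);
}
```

Run against the codebase before the delete lands, this would have flagged the orphaned bookmarks page while it was still a warning instead of a broken build.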
While Authexis was subtracting, two other projects were doing something equally ambitious in the opposite direction. Simplebooks, a bookkeeping application, was completely rewritten in Next.js. From create-next-app to a fully functional application with financial reports, invoice management, CSV import, reconciliation, and bulk transaction handling, in one session. Thirteen issues, thirteen commits, milestone complete.
Scholexis, the academic task manager, closed twenty-four issues and reached seventy-five of seventy-seven issues on its own Next.js port. Admin dashboards, user management, access control, real-time subscriptions, accessibility improvements. Almost done.
What’s happening is convergence. These projects started on different stacks at different times for different reasons. Now they’re all landing on the same foundation: Next.js, Supabase, a shared component library, a shared deployment pattern. Nobody mandated this. There was no memo. The individual sessions, each working autonomously, independently arrived at the same conclusion about what the right stack looks like. That’s either emergent wisdom or groupthink, and I’m genuinely not sure which. But the practical benefit is real: fixes in one project’s patterns transfer to the others. Security hardening that happened in Authexis (workspace membership checks, timing-safe comparisons) showed up independently in Eclectis (row-level security on the webauthn table) and Scholexis (rate limiting on AI endpoints). Four projects shipped security improvements on the same day without coordinating. The shared stack makes the shared instinct actionable.
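For readers unfamiliar with the timing-safe comparison mentioned above, here is a generic sketch of the pattern in Node.js. This is not the Authexis implementation, just the standard shape: comparing secrets with `===` can leak information through how quickly the comparison fails.

```typescript
import { timingSafeEqual } from "node:crypto";

// Generic pattern: compare two secrets in constant time so an attacker
// can't learn a prefix by measuring response latency.
function safeCompare(a: string, b: string): boolean {
  const bufA = Buffer.from(a);
  const bufB = Buffer.from(b);
  // timingSafeEqual requires equal-length buffers; a length mismatch
  // is an immediate (and safe-to-reveal) failure.
  if (bufA.length !== bufB.length) return false;
  return timingSafeEqual(bufA, bufB);
}
```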
On the topic of security becoming habitual rather than heroic — that’s a pattern worth naming. When security work shows up as individual issues scattered across a sprint, it’s a chore. When it shows up independently across four projects on the same day, it’s culture. The difference is that nobody asked for it. The scout passes found the gaps, the prep pass shaped the fixes, and the execution loop closed them. Security went from “we should do a security audit” to “the system audits itself continuously.”
The most quietly important thing that happened, though, was the infrastructure shift. The supervisor session, the agent that coordinates all the other agents, migrated from my laptop to a server called speedy-gonzales. This sounds like a devops task. It’s actually a philosophical change in how the whole operation works.
On the laptop, every day had a start and a stop. I’d open the terminal in the morning, run a startup command, the agents would recover context from yesterday’s work log, and we’d go. At five o’clock, I’d run a close command, the agents would write their handoff notes, and everything would shut down. Clean daily cycle. Human-shaped.
On the server, the agents run continuously. There is no morning. There is no evening. The work just keeps happening. But you still need the daily boundary, because without it, the agents lose track of time. They start thinking it’s still Tuesday when it’s Thursday. They write work logs for the wrong date. They lose the narrative thread.
So we built a rollover pipeline. At two in the morning, a deterministic process pauses all the work-injection agents, waits thirty minutes for anything in-flight to finish, then tells each session to write its work log for the day that just ended. Critically, it passes the date explicitly — “write the log for March 21st” — because if you let the AI figure out the date from the system clock at 2 AM, it gets confused about whether it’s closing out yesterday or opening today.
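The core trick is small enough to sketch. Everything here is illustrative — the function names and the session interface are hypothetical, and for simplicity this version computes the date in UTC rather than the server's local timezone:

```typescript
// Illustrative sketch of the rollover idea: compute "the day that just
// ended" deterministically, then pass it to each session explicitly,
// instead of letting an agent infer the date from the clock at 2 AM.
function dateToClose(now: Date): string {
  // At 02:00 the log being written is for the previous calendar day.
  // (Assumption: UTC dates; a real version would use the local zone.)
  const yesterday = new Date(now.getTime() - 24 * 60 * 60 * 1000);
  return yesterday.toISOString().slice(0, 10); // "YYYY-MM-DD"
}

interface Session {
  writeLog: (date: string) => Promise<void>;
}

async function rollover(sessions: Session[], now = new Date()): Promise<void> {
  // (Pausing work injection and the 30-minute drain happen before this.)
  const logDate = dateToClose(now);
  for (const s of sessions) {
    await s.writeLog(logDate); // "write the log for 2025-03-21" — explicit
  }
}
```

The date computation lives in the deterministic pipeline, not in the model, which is the whole point: the one fact the agents reliably get wrong at 2 AM is handed to them as an argument.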
The first run crashed. Classic launchd environment problem: the paulos command wasn’t on the PATH because launchd doesn’t load your shell profile. Fixed it in ten minutes. The second run succeeded mechanically but failed practically. The sessions didn’t recognize the new skill because they’d been started before it was committed to the codebase. So the pipeline ran perfectly, every step completed, and no work logs were written. A beautifully executed failure.
There’s a lesson in that about the difference between working and useful. The system did everything it was supposed to do. Every subprocess returned success. The logs show a clean run. But the outcome was zero. The gap between “the pipeline completed” and “the pipeline produced value” is the gap between engineering and operations. Engineering builds the machine. Operations makes sure the machine’s inputs are actually present when it runs.
Eighty-nine issues closed across eight projects in one day. Ninety-eight commits. Two complete application rewrites. A major product simplification milestone. A fleet coordination system that didn’t exist twenty-four hours ago. And a reminder that the most interesting problems aren’t in the code you write but in the systems you build around the code — the daily rhythms, the handoff protocols, the quiet assumption that tomorrow’s session will know what today’s session did.
What happens to a team’s sense of time when the work never stops? When there’s no morning standup and no evening shutdown, just a continuous stream of issues opening and closing? I don’t have an answer yet. But I’m building the infrastructure to find out, and the first thing that infrastructure taught me is that even machines need to know what day it is.