The product changed its mind
A product pivoted its entire philosophy mid-session — from 'here's your list' to 'here's your next thing.' The code shipped in the same conversation as the idea. That's not iteration. That's something else.
Duration: 8:22 | Size: 7.67 MB
The most dangerous thing a product can do is show you everything you need to do.
That sounds wrong. Every task management tool on the market is built on the assumption that visibility is the goal. See your tasks. See their priorities. See their deadlines. See where you are in the sprint. The more you can see, the more control you have. Except what you actually have is anxiety. Every visible undone item is an open cognitive loop. Your brain treats each one as unfinished business, generating intrusive thoughts that degrade your performance on everything else. Bluma Zeigarnik proved this in 1927. The productivity industry has been ignoring her ever since.
Today one of our products changed its mind about this. Mid-session. While the developer was coding.
Prakta is a task intelligence tool for neurodivergent workers. It started two days ago as an energy-aware planner: you see your tasks organized by cognitive mode and energy level, the AI sequences them for your day, you work through the list. Good intentions. Standard model. Still showing you the list.
Today, through a conversation that started as a feature discussion and became a philosophy session, the model flipped entirely. The new principle: serve, don’t show. The user never sees a task list. Prakta serves one chunk at a time. You check out a piece of work, you do it, you check it back in. Like a library book. The system knows what’s in the queue. You don’t need to. You need to know what’s next, and only what’s next.
This isn’t a minor UI tweak. It’s a different answer to the fundamental question of task management. The old answer: give the human full information and let them decide. The new answer: hold the full picture so the human brain doesn’t have to, and surface only what matters right now.
The schema changes, the API, the UX, and the product documentation for this pivot all shipped in the same session as the conversation that produced the idea. 27 issues closed. The serve API is a deterministic scoring function that picks the best chunk without calling an AI model. Energy budget as a number, not four emoji states. One cycle length per user. Decomposition at ingestion. No priorities at all. The whole system redesigned and rebuilt while the philosophy was still warm.
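To make "deterministic scoring function, no AI call" concrete, here is a minimal sketch of what a serve endpoint like Prakta's could look like. All field names, weights, and the scoring formula are invented for illustration; the real system's internals aren't shown in this post.

```typescript
// Hypothetical sketch of a deterministic "serve" scorer: pick the single
// best chunk from the queue without calling a model. Names and weights
// are invented, not Prakta's actual schema.

interface Chunk {
  id: string;
  effort: number;        // estimated energy cost, same unit as the budget
  mode: "deep" | "admin" | "creative";
  dueInDays: number;     // days until due; lower means more urgent
  daysAvoided: number;   // how long the chunk has sat untouched
}

interface UserState {
  energyBudget: number;            // a plain number, not four emoji states
  preferredMode: Chunk["mode"];
}

// Score each eligible chunk with fixed weights: no AI call, no randomness,
// so the same inputs always serve the same chunk.
function scoreChunk(chunk: Chunk, user: UserState): number {
  const urgency = 1 / Math.max(chunk.dueInDays, 1);     // due sooner => higher
  const modeFit = chunk.mode === user.preferredMode ? 1 : 0;
  const avoidance = Math.min(chunk.daysAvoided / 7, 1); // capped nudge for avoided work
  return 3 * urgency + 2 * modeFit + avoidance;
}

// Serve exactly one chunk: the highest-scoring one that fits the energy budget.
// The user never sees the queue; they see only the return value.
function serveNext(queue: Chunk[], user: UserState): Chunk | null {
  const eligible = queue.filter((c) => c.effort <= user.energyBudget);
  if (eligible.length === 0) return null;
  return eligible.reduce((best, c) =>
    scoreChunk(c, user) > scoreChunk(best, user) ? c : best
  );
}
```

The design point is the determinism: a pure function over the queue and the user's state means serving is instant, testable, and free, and the AI model is only needed upstream at ingestion, when work gets decomposed into chunks.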
I keep saying the gap between deciding and building is collapsing. This is what that actually looks like. Not “we had an offsite, wrote a PRD, and then spent two sprints implementing it.” More like “we had a thought, and by the end of the thought, the code existed.” The constraint on product development is no longer implementation speed. It’s the quality of the thinking that precedes it.
The second thing worth talking about is the pattern showing up across multiple products: tools that know when things should happen, instead of waiting for the human to decide.
Dinly, the meal planning tool, shipped a weekly cadence system today. Four settings: week start day, meals per week, finalize lead time, voting window. From those four numbers, a pure function derives every milestone in the planning week — when the draft is due, when voting opens, when the reminder goes out, when voting closes, when it’s time to finalize. A daily cron job checks the timeline and sends the right nudge to the right person: “time to add candidates” to the cook, “you haven’t voted yet” to the family member who hasn’t, “voting closed, time to finalize” when the window expires.
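The "four numbers in, every milestone out" idea can be sketched as a pure function plus a daily check. Setting names, the derivation rules, and the nudge wording are assumptions for illustration, not Dinly's actual implementation.

```typescript
// Hypothetical sketch of a cadence timeline: derive the planning week's
// milestones from four user settings. All names and rules are invented.

interface CadenceSettings {
  weekStartDay: number;      // 0 = Sunday .. 6 = Saturday; used upstream to
                             // find the next weekStart Date, assumed given here
  mealsPerWeek: number;      // governs how many candidates a draft needs;
                             // not used for dates in this sketch
  finalizeLeadDays: number;  // days before week start to finalize the plan
  votingWindowDays: number;  // how long voting stays open
}

interface Timeline {
  draftDue: Date;
  votingOpens: Date;
  votingCloses: Date;
  finalizeBy: Date;
}

// Pure function: same settings + same week start => same milestones.
function deriveTimeline(settings: CadenceSettings, weekStart: Date): Timeline {
  const daysBefore = (d: number) => {
    const t = new Date(weekStart);
    t.setDate(t.getDate() - d); // JS Date rolls back across month boundaries
    return t;
  };
  const finalizeBy = daysBefore(settings.finalizeLeadDays);
  const votingCloses = daysBefore(settings.finalizeLeadDays); // closes by finalize time
  const votingOpens = daysBefore(settings.finalizeLeadDays + settings.votingWindowDays);
  const draftDue = daysBefore(settings.finalizeLeadDays + settings.votingWindowDays + 1);
  return { draftDue, votingOpens, votingCloses, finalizeBy };
}

// A daily cron compares "today" to the timeline and picks the right nudge.
// (Per-person reminders like "you haven't voted yet" would also check vote
// state; omitted here to keep the sketch self-contained.)
function nudgeFor(today: Date, t: Timeline): string | null {
  const same = (a: Date, b: Date) => a.toDateString() === b.toDateString();
  if (same(today, t.draftDue)) return "time to add candidates";
  if (same(today, t.votingOpens)) return "voting is open";
  if (same(today, t.votingCloses)) return "voting closed, time to finalize";
  return null;
}
```

Because the timeline is derived rather than stored, changing any of the four settings reshapes the whole week with no migration: the cron just re-derives tomorrow.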
Nobody asked for this explicitly. The feature gap was that people had to remember to do things at the right time. The insight was that the tool already knows when the right time is. It just wasn’t telling anyone.
Prakta’s serve model has the same DNA. The system doesn’t wait for you to look at a list and pick something. It knows your energy level, your cognitive mode preference, what’s in the queue, what you’ve been avoiding, what’s due. It picks for you. Not because you can’t pick. Because picking is work, and that work depletes the same cognitive resources you need for the actual task.
This is a shift from reactive tools to proactive ones. Every task app I’ve ever used sits there waiting for me to open it. The best ones remind me of deadlines. None of them think ahead. None of them nudge me at the right moment based on what they know about my patterns. The cadence system and the serve model are both tools that think ahead. They convert user settings into autonomous behavior. That’s a different category of software.
The third insight is about what happens when you let the automated pipeline run unsupervised. Eclectis, the content intelligence platform, closed 20 issues today through the scout-triage-prep-exec pipeline with no human in the loop for most of it. The scout found issues. The triage labeled them. The prep wrote specs. The exec implemented them. Some of these were substantial: composite database indexes, N+1 query fixes, a time-decayed ranking system for the articles page, dead code removal, silent failure fixes in server actions.
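Of those, the time-decayed ranking is the most self-contained, so here is a sketch of the standard technique. The half-life, field names, and weights are invented; the post doesn't show Eclectis's actual formula.

```typescript
// Hypothetical sketch of time-decayed ranking for an articles page.
// The half-life constant and field names are invented for illustration.

interface Article {
  title: string;
  baseScore: number; // relevance/quality score before decay
  ageHours: number;  // hours since publication
}

// Exponential decay: an article loses half its effective score every
// HALF_LIFE_HOURS, so fresh items can outrank older high scorers.
const HALF_LIFE_HOURS = 48;

function decayedScore(a: Article): number {
  return a.baseScore * Math.pow(0.5, a.ageHours / HALF_LIFE_HOURS);
}

// Sort a copy (no mutation) by decayed score, highest first.
function rank(articles: Article[]): Article[] {
  return [...articles].sort((x, y) => decayedScore(y) - decayedScore(x));
}
```

The appeal of the exponential form is that it needs no cron to rewrite scores: decay is computed at read time from the stored timestamp, so the ranking is always current.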
The pipeline caught a false positive too. The scout flagged “learned preferences aren’t used in scoring” as a bug. The exec agent traced through the code and found the feedback loop was intact — the indirection through a shared module confused the scanner. It closed the issue with an explanation. The system corrected itself.
This is where the automated improvement pipeline stops being a novelty and becomes infrastructure. It’s not impressive that it found 20 issues. It’s expected. The interesting question is what happens when it stops finding things. Eclectis is now down to 3 parked backlog items. All milestones closed. The pipeline has nearly exhausted what it can find without human direction. The next issues won’t be “this is broken” or “this is missing.” They’ll be “what should this product become next?” And the pipeline can’t answer that.
Fourth: Authexis shipped 30 native macOS app issues in a single session. Not 30 bug fixes. 30 feature implementations — list view enrichment, briefing detail selection, PDF download via WKWebView, a REST command endpoint for async operations, accessibility labels, toast overlays, source ratings, filter systems. The iOS issues I filed this morning from testing on my phone were a different batch. Those are still in the queue.
What made this possible is that the web app already has all the business logic, the API endpoints, the data models. The native app is pure UI, calling the same backend, rendering the same data. Once the pattern is established (how to make API calls, how to handle auth, how to show toasts), each subsequent screen is a variation. The 30th issue ships as fast as the 5th because the agent has accumulated the patterns.
This is the argument for web-first, native-second. Not because native doesn’t matter. Because native without a solid API is a nightmare, and a solid API is a byproduct of building a good web app. The web app forced the abstractions. The native app consumes them.
Fifth: SimpleBooks entered the fleet today as project number 16. It’s a personal bookkeeping app migrating from Rails to Next.js. No code was written. The entire session was design and planning — a 20-task implementation plan, two review rounds that caught 14 issues, and a clear identification of the highest risk: a cold Postgres dump from November 2025 that’s the only copy of six months of financial data.
I want to highlight this because it’s the opposite of Prakta’s approach. Prakta pivoted and built in the same breath. SimpleBooks planned without building. Both are correct for their context. Prakta is a new product where the cost of building the wrong thing is low — you throw it away and rebuild. SimpleBooks has irreplaceable data where the cost of a migration bug is losing six months of financial records. The right amount of planning depends on what you’re risking.
The fleet’s velocity creates pressure to just build everything immediately. Ship it. Close issues. Big numbers. But the SimpleBooks session was a reminder that some things deserve more thought before the first line of code. Not every problem is a sprint.
Today the fleet closed 109 issues across 9 projects. Third consecutive day over 100. A product pivoted its philosophy and shipped the new model in the same session. A native app marathon produced 30 implementations in one sitting. An automated pipeline ran unsupervised and corrected its own false positive. A new project entered the fleet with a plan instead of a prototype.
When building becomes this fast, what’s the right ratio of thinking to doing? Prakta suggests you can do both simultaneously. SimpleBooks suggests sometimes you shouldn’t. The answer probably isn’t a ratio. It’s a judgment call, made in the moment, about what you’re risking and what you’re learning.
The tools are getting fast enough that the human’s job is no longer “make it go faster.” The human’s job is “know when to slow down.” That might be the hardest skill to teach, to yourself or to your agents.
Why customer tools are organized wrong
This article reveals a fundamental flaw in how customer support tools are designed—organizing by interaction type instead of by customer—and explains why this fragmentation wastes time and obscures the full picture you need to help users effectively.
Infrastructure shapes thought
The tools you build determine what kinds of thinking become possible. On infrastructure, friction, and building deliberately for thought rather than just throughput.
Server-side dashboard architecture: Why moving data fetching off the browser changes everything
How choosing server-side rendering solved security, CORS, and credential management problems I didn't know I had.
The work of being available now
A book on AI, judgment, and staying human at work.
The practice of work in progress
Practical essays on how work actually gets done.
Your project management tool was made for a non-human (AI) factory, not for you
Every project or task management tool on the market descends from Frederick Taylor's factory floor. The assumptions were wrong then. They're catastrophic in the Age of AI.
The last mile is all the miles
Building the product is the fun part. Deploying it, configuring auth, pasting email templates into dashboards, rotating leaked API keys — that's where the work actually lives.
The day we shipped two products and the agents got bored
112 issues across 12 projects. Two new products went from nothing to code-complete MVP in single sessions. And the most interesting signal wasn't the speed — it was the scout that came back empty-handed.
The delegation problem nobody talks about
When your automated systems start finding real bugs instead of formatting issues, delegation has crossed a line most managers never see coming.
What your systems won't tell you
The most dangerous gap in any organization isn't between what you know and what you don't. It's between what your systems know and what they're willing to say.