What shipped today
The main effort was rewriting PRODUCT.md from a feature pitch into an operational spec. The old version had stale numbers (claimed 32+ MCP tools, actually 30; said 50 core modules, actually 47) and was missing key operational sections entirely. The rewrite adds documentation for the daily workflow rhythm (pm start → pm next → ship → pm close), the full interface inventory (CLI, MCP, Discord, skills, cron), the label-driven pipeline state machine, and the four-layer configuration system. Notion was synced as part of the ship.
The second thread was cleaning up the server’s EOD automation. Removed all crontab entries — launchd is the only scheduler now. Split the single com.paulos.eod launchd job into two: com.paulos.eod.synthesize at 10pm and com.paulos.eod.reflect at 11pm, matching the old crontab timing. The wrapper script (run-eod.sh) now takes subcommand arguments instead of running the deprecated monolithic paulos eod. Both plists are installed and verified on the server.
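For reference, one of the two jobs has roughly this shape as a launchd plist. This is a minimal sketch: the label and 10pm time come from the text above, but the script path is an assumption, and the real plists on the server may set additional keys (logging paths, environment, etc.).

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.paulos.eod.synthesize</string>
  <key>ProgramArguments</key>
  <array>
    <!-- run-eod.sh now takes a subcommand instead of the monolithic eod -->
    <string>/usr/local/bin/run-eod.sh</string> <!-- path assumed -->
    <string>synthesize</string>
  </array>
  <key>StartCalendarInterval</key>
  <dict>
    <key>Hour</key>
    <integer>22</integer> <!-- 10pm; the reflect job mirrors this at 23 -->
    <key>Minute</key>
    <integer>0</integer>
  </dict>
</dict>
</plist>
```

The reflect job (com.paulos.eod.reflect) would be identical apart from the label, the `reflect` subcommand, and Hour 23.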
Also switched the default TTS provider from OpenAI back to ElevenLabs across all callsites. The OpenAI provider has a 4,000 character limit that was forcing chunk splitting and requiring ffmpeg for concatenation — ElevenLabs handles 9,500 characters natively, which covers most podcast scripts without chunking. Paul’s cloned voice ID is already configured in the shared utilities .env.
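The character-limit arithmetic behind the switch can be sketched as follows. The limits (4,000 and 9,500) come from the text above; the constant and function names are illustrative, not the project's actual API.

```python
# Hedged sketch of per-request TTS character limits, as described above.
# Names are hypothetical, not the project's real provider interface.
PROVIDER_LIMITS = {
    "openai": 4_000,      # forced chunk splitting + ffmpeg concatenation
    "elevenlabs": 9_500,  # covers most podcast scripts in one request
}

def needs_chunking(script: str, provider: str) -> bool:
    """True if the script exceeds the provider's per-request limit."""
    return len(script) > PROVIDER_LIMITS[provider]

def chunk_count(script: str, provider: str) -> int:
    """Number of TTS requests required for a script of this length."""
    limit = PROVIDER_LIMITS[provider]
    return max(1, -(-len(script) // limit))  # ceiling division
```

A ~9,000-character podcast script needs three OpenAI requests (and ffmpeg to stitch them) but a single ElevenLabs request, which is the motivation stated above.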
Completed
- PR #127 — Rewrite PRODUCT.md as operational spec with accurate counts and new sections
- PR #128 — Update launchd EOD script to run staged commands instead of deprecated full eod
- PR #129 — Split EOD launchd into two jobs (synthesize at 10pm, reflect at 11pm)
- Default TTS provider switched from OpenAI to ElevenLabs (shipped in PR #127)
- Server crontab cleared — launchd-only automation
- Server pulled to latest (all 3 PRs deployed)
Carry-over
- ffmpeg still not installed on server — only matters if a podcast script exceeds ElevenLabs’ 9,500 char limit and needs chunk concatenation
- The deprecated monolithic `paulos eod` command still exists in code and could be removed in a future cleanup
Risks
- None
Flags and watch-outs
- Dual launchd jobs depend on synthesize completing before reflect starts — the 1-hour gap should be plenty, but if synthesize hangs, reflect will run against stale data
- ElevenLabs API key and voice ID are in `utilities/.env`, not the project `.env`; shared env loading must stay enabled
Next session
- Verify tonight’s EOD pipeline runs cleanly with the new launchd split (check `eod-synthesize.log` and `eod-reflect.log` on the server tomorrow morning)
- Consider removing the deprecated monolithic `paulos eod` command, or at minimum marking it deprecated in code
- Review the milestone queue for next feature work