Work log: Phantasmagoria — March 22, 2026
What shipped today
Today was the biggest architectural shift in the project’s history: the complete narrative-first generation pipeline is now working end-to-end for all three event types — standalones, anomalies, and dig sites. The old roller-first approach (deterministic effects → AI writes text to match) has been replaced with a three-phase system: Phase 1 (AI writes the story freely), Phase 2 (AI assigns effects that match the narrative), Phase 3 (deterministic assembly into render-ready YAML).
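The three phases can be sketched as a simple orchestrator. `write_story`, `assign_effects`, and `assemble_event` are hypothetical names standing in for Phases 1–3 (the log doesn't name the actual functions), and the AI calls are stubbed with fixed data:

```python
# Minimal sketch of the narrative-first pipeline. Function names are
# illustrative, not the project's actual API; AI phases are stubbed.

def write_story(event_type: str, theme: str) -> dict:
    """Phase 1: the AI writes the story freely (stubbed here)."""
    return {
        "title": "The Cartographer's Lament",
        "body": "...",
        "options": ["Accept", "Refuse"],
    }

def assign_effects(story: dict) -> dict:
    """Phase 2: the AI assigns effects that fit the narrative (stubbed)."""
    return {opt: {"resources": {"minerals": 100}} for opt in story["options"]}

def assemble_event(story: dict, effects: dict) -> dict:
    """Phase 3: deterministic assembly into a render-ready structure."""
    return {
        "title": story["title"],
        "desc": story["body"],
        "options": [{"label": o, "effects": effects[o]} for o in story["options"]],
    }

story = write_story("standalone", "celestial_equinox")
event = assemble_event(story, assign_effects(story))
```

The key property is that only Phase 3 is deterministic: whatever the two AI phases produce, assembly into the final structure is pure data transformation.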
The prompt engineering work was the core of the session. Each event type got its own Phase 1 template (`PROMPT_STANDALONE.md`, `PROMPT_ANOMALY.md`, `PROMPT_DIG_SITE.md`) with type-specific guidance: standalones get choice architecture patterns (all-good, all-bad, risk-vs-reward, gain-vs-loss), anomalies get survey context and planet-scoped deposit support, and dig sites get multi-chapter arc planning with auto-resolve middle chapters. Phase 2's outcome prompt was updated with choice pattern awareness and single-option follow-up support. Follow-up events are now defined inline by Phase 1 as resolution popups: a single "Acknowledged" dismiss button, no new decision points.
Infrastructure improvements landed too: all AI model references moved from hardcoded strings to `.env` (`AI_MODEL=claude-sonnet-4-6`), release YAML can now override tone and reward_scope probability weights, and a new `event_assembler.py` helper centralizes Phase 3 assembly for all event types. Test harnesses exist for each type (`test_narrative_first.py`, `test_narrative_anomaly.py`, `test_narrative_site.py`), producing render-ready YAML with working follow-up wiring.
Completed
- #211 (partial) — Narrative-first architecture: Phase 1, 2, and 3 working for all event types
- #222 — Created and triaged: “Allow config for early/mid/late game timing in release YAML” (backlog)
Carry-over
- Integrate narrative-first prompts into the actual generator scripts: the test harnesses work end-to-end, but `generate_standalone.py`, `generate_anomaly.py`, and `generate_site.py` still use the old prompt flow. The new prompts and assembler need to replace the old code paths.
- Phase 2 still uses `society_research`/`physics_research` as resource names. These are banned in the prompt, but the AI occasionally ignores the ban; this needs stronger enforcement or post-processing cleanup.
- Tech variety is narrow: Phase 2 leans heavily on `tech_mine_rare_crystals` and `tech_shields_3`. The tech pool in `outcome_prompt.py` has more options, but the AI gravitates to a few.
- Title diversity: with the celestial_equinox theme, the AI keeps generating "The Cartographer's [X]" titles. The `existing_events` context will help in batch mode, but we may need explicit "avoid these words" guidance.
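One way to enforce the resource-name ban noted above is a post-processing pass over Phase 2 output. The mapping below is a hypothetical example, and the replacement targets are illustrative, not the project's actual resource list:

```python
# Hypothetical cleanup pass: Phase 2 occasionally emits banned resource
# names despite the prompt-level ban, so rewrite them to allowed names
# before assembly. Replacement targets here are illustrative.
BANNED_RESOURCE_MAP = {
    "society_research": "unity",
    "physics_research": "energy",
}

def clean_effects(effects: dict) -> dict:
    """Return a copy of an effects dict with banned resource keys renamed."""
    return {BANNED_RESOURCE_MAP.get(key, key): value for key, value in effects.items()}
```

Unlike prompt-level bans, this runs deterministically in Phase 3 territory, so it catches the cases where the AI ignores the instruction.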
Risks
- The Sonnet 4.6 model ID (`claude-sonnet-4-6`) works today, but it is an alias: if Anthropic changes the routing, all generation breaks. The `.env` approach means a one-line fix, but we would need to notice the breakage first.
- Middle dig site chapters sometimes include options when they shouldn't. The assembler handles this gracefully (it treats 1-option chapters as auto-resolve), but it means wasted Phase 2 API calls in batch generation.
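The wasted Phase 2 calls could be avoided with an early guard before the API call is made; `is_auto_resolve` is a hypothetical helper, not code from the assembler:

```python
def is_auto_resolve(chapter: dict) -> bool:
    """Hypothetical guard: a chapter with zero or one options needs no
    player decision, so the Phase 2 effect-assignment API call can be
    skipped and the chapter treated as auto-resolve."""
    return len(chapter.get("options", [])) <= 1
```

Running this check in the batch driver, before dispatching Phase 2, keeps the assembler's graceful handling as a backstop rather than the primary path.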
Flags and watch-outs
- `PRODUCT.md` principle #1 says "AI for creativity, determinism for balance". The narrative-first approach has AI assigning effects (Phase 2), which shifts this principle. It is working well, but the product doc should be updated to reflect the new architecture.
- The old `followup_generator.py` still exists with the roller-first follow-up approach. It is used by the existing generators, but will be dead code once the new pipeline is integrated.
Next session
- Integrate new prompts into the real generators: replace the old prompt building in `generate_standalone.py` with `PROMPT_STANDALONE.md` plus `event_assembler.py`, and do the same for anomaly and site. This is the bridge from "test harness works" to "batch generation uses the new flow."
- Generate a test release with the new pipeline: run a full celestial_equinox generation (3 standalones, 3 anomalies, 3 dig sites) and playtest it.
- Clean up dead code: once the generators use the new flow, remove the old prompt building, the old follow-up generator's roller-based Phase 2, and deprecated functions.
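The integration step largely amounts to swapping inline prompt building for loading the Phase 1 template from disk. A minimal sketch, assuming the templates live in a known directory; `PROMPT_FILES` and `load_phase1_prompt` are hypothetical names:

```python
from pathlib import Path

# Mapping from event type to its Phase 1 template file
# (file names are from the work log above).
PROMPT_FILES = {
    "standalone": "PROMPT_STANDALONE.md",
    "anomaly": "PROMPT_ANOMALY.md",
    "dig_site": "PROMPT_DIG_SITE.md",
}

def load_phase1_prompt(event_type: str, prompt_dir: Path) -> str:
    """Read the Phase 1 template for an event type from prompt_dir.
    Raises KeyError for unknown event types."""
    return (prompt_dir / PROMPT_FILES[event_type]).read_text(encoding="utf-8")
```

Each generator then calls `load_phase1_prompt` for its type and hands the result, plus its own context, to the Phase 1 AI call, leaving assembly to `event_assembler.py`.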