Skip to main content
Paul Welty, PhD AI, WORK, AND STAYING HUMAN

What shipped today

Today was about quality assurance: running /scout to find bugs left by the overnight pipeline, fixing them, and doing a full end-to-end visual review of the learner flow in the browser.

The scout pass turned up five issues. Four were bugs introduced by the 19-PR overnight pipeline run — duplicate migration timestamps that would break schema application, two .single() calls that crash when rows are missing (dashboard and app layout), and a silent data loss bug where AI scoring could succeed but the persistence write could fail without surfacing the error. The fifth (GH-172, passing lesson context to simulations) was a future enhancement punted to the next milestone.

The first grind attempt with parallel worktree agents hit a cross-contamination problem: stage_all=true caused agents to pick up each other’s changes. One clean PR (GH-168, migrations) merged; the other three were contaminated and closed. The remaining four fixes (GH-148 breadcrumbs, GH-169/170 .single() crashes, GH-171 score persistence) shipped as a single manual PR (#177) and merged cleanly.

The visual review confirmed the full learner flow works end-to-end: login, dashboard with stats cards and simulation personas, modules list with progress bars, module detail with breadcrumbs and goals, lesson pages with typed content units (Concept/Task) and markdown rendering, assessment submission with response persistence, mark-complete with progress tracking, and prev/next lesson navigation. During testing, we discovered six database migrations hadn’t been applied to the hosted Supabase instance — applied them all, which unblocked assessment response saving.

Completed

  • GH-148 — Add breadcrumb navigation to learner module detail page
  • GH-168 — Fix duplicate migration timestamps causing schema conflicts
  • GH-169 — Fix dashboard crash when user has no scored simulations
  • GH-170 — Fix app layout crash when user has no membership or profile
  • GH-171 — Return error when assessment score persistence fails
  • Applied 8 missing migrations to hosted Supabase (assessment_responses table, scoring columns, lesson content units, assessments)
  • Full E2E visual review of learner flow (dashboard → modules → lesson → assessment → progress tracking)

Carry-over

  • GH-172 — Pass lesson context through to simulation practice (future milestone, not urgent)
  • ANTHROPIC_API_KEY not set in .env.local — AI scoring gracefully degrades but won’t produce actual feedback until configured
  • Anonymous auth disabled on hosted Supabase — demo flow (/demo) won’t work until enabled in Supabase dashboard
  • Paul’s DISC assessment not completed — content shows default style, not DISC-adapted

Risks

  • Six migrations were applied directly to production via psql, bypassing Supabase’s migration tracking — if supabase db push is run later, it may try to re-apply them (they use IF NOT EXISTS and ON CONFLICT guards, so should be idempotent)
  • Duplicate RLS policies on lesson_progresses — created three then removed them, but the originals were already present; verify no policy conflicts

Flags and watch-outs

  • Grind parallel agents with stage_all=true cause cross-contamination in worktrees — need to fix the grind skill to use explicit file staging instead
  • Hosted Supabase signup requires email confirmation — can’t create test accounts easily; had to temporarily modify Paul’s password via direct DB access for testing (restored afterward)
  • The “Mark complete” button had a transient error on first click (lesson_progresses 403) but worked on page reload — RLS policies were already present, likely a timing/caching issue

Next session

  • Set ANTHROPIC_API_KEY in web/.env.local to enable AI scoring feedback on assessments
  • Enable anonymous auth in Supabase dashboard if demo flow is needed
  • Complete DISC assessment as Paul to see DISC-adapted content in action
  • Close “Adaptive lesson delivery” and “First module: Delegation” milestones — all issues are resolved except GH-172 which belongs to a future milestone
  • Plan the next milestone — likely “Learner analytics and reporting” or “Multi-module curriculum”
  • Consider creating a proper migration tracking solution (run supabase db push or reconcile the manually-applied migrations)

Why customer tools are organized wrong

This article reveals a fundamental flaw in how customer support tools are designed—organizing by interaction type instead of by customer—and explains why this fragmentation wastes time and obscures the full picture you need to help users effectively.

Infrastructure shapes thought

The tools you build determine what kinds of thinking become possible. On infrastructure, friction, and building deliberately for thought rather than just throughput.

Server-side dashboard architecture: Why moving data fetching off the browser changes everything

How choosing server-side rendering solved security, CORS, and credential management problems I didn't know I had.

The work of being available now

A book on AI, judgment, and staying human at work.

The practice of work in progress

Practical essays on how work actually gets done.

Manual fluency is the prerequisite for agent supervision

You cannot responsibly automate what you cannot do manually. AI agents speed up work for people who already know how to do it. They do not replace the need to learn the work in the first place.

The gun you didn't need

Every organization has loaded weapons lying around that nobody remembers loading. The most dangerous capability in any system is the one you built 'just in case.'

Nobody promotes you to operator

There's a moment in every project where the work stops being about building and starts being about keeping things running. Nobody announces this transition. Nobody gives you new tools for it. And most people keep building long past the point where they should have stopped.