Paul Welty, PhD | AI, Work, and Staying Human

Synthesis: April 5, 2026

The day the products died (and the business was born)

Five projects shipped work today, but the through-line is a strategic reckoning that cuts across all of them. The foundation models have eaten the tool layer. Authexis, Eclectis, Scholexis — every SaaS product built on top of AI capabilities that are now free and native in Claude, ChatGPT, and Perplexity. A “Steal my AI marketing stack” infographic crystallized it: one person, 12 mostly-free tools, $50-60/month, full marketing operation. The gap between Authexis and the cobble is closing every quarter. Paul’s testers bounce. The non-scrappy user who won’t figure out 12 tools also won’t figure out Authexis. The scrappy user who will figure them out doesn’t need it.

The response wasn’t despair — it was a pivot that had been building for weeks. The three-layer model of work (execution / pattern judgment / actual judgment) emerged from another session and was immediately operationalized. Layer 2 — expert pattern recognition that feels like judgment but is actually encodable — is the hidden middle where the money is. Most knowledge workers think 80% of their work is judgment. It’s actually 80% pattern recognition that feels like judgment because expertise makes patterns invisible.

This became a concrete go-to-market artifact: “The Judgment Test,” a 14-question scored self-assessment built for Tally. The assessment makes prospects uncomfortable in a productive way — forcing them to confront how much of their team’s expensive expertise is encodable. Four result buckets, each with a calibrated CTA. Two LinkedIn post variants drafted. The consulting offer writes itself: discovery → extraction → deployment → compounding. The three months of product building weren’t wasted — they produced the proof that Layer 2 is encodable, and the methodology (role docs, learning loops, review cycles) is the case study.
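Mechanically, a scored assessment like this is a sum mapped to buckets, each with its CTA. A minimal sketch in TypeScript; the point values, thresholds, bucket names, and CTAs here are illustrative assumptions, not the actual Tally configuration:

```typescript
// Hypothetical scoring for a 14-question assessment.
// Each answer contributes 0-3 points; thresholds are invented for illustration.
type Bucket = { name: string; cta: string };

// Sorted descending by threshold; the first threshold the total clears wins.
const BUCKETS: { min: number; bucket: Bucket }[] = [
  { min: 32, bucket: { name: "Mostly encodable", cta: "Book a discovery call" } },
  { min: 22, bucket: { name: "Heavily pattern-driven", cta: "Get the extraction guide" } },
  { min: 12, bucket: { name: "Mixed", cta: "Take the deep-dive audit" } },
  { min: 0,  bucket: { name: "Judgment-heavy", cta: "Join the newsletter" } },
];

function scoreAssessment(answers: number[]): Bucket {
  if (answers.length !== 14) throw new Error("expected 14 answers");
  const total = answers.reduce((sum, a) => sum + a, 0);
  return BUCKETS.find((b) => total >= b.min)!.bucket;
}
```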


Per-project detail

Paulos

The session opened with a scout scan across six dimensions, filing five issues: a command injection vulnerability in SSH session variable handling (#676), ~350 lines of confirmed dead code in roles.py and linear.py (#677), silent API failures in sentry.py and editorial.py (#678), stale PRODUCT.md skill inventory (#679), and missing pip-audit for dependency scanning (#680).

The paul knowledge corpus got a major expansion. 53 blog posts were missing from ~/Projects/paul/corpus/blog/ — everything since mid-March. All copied and re-indexed. Then 281 work-log files and 43 syntheses were added as new corpus directories. The vector store grew from 760 to 1,084 files (2,658 → 3,961 chunks). This was prompted by Paul trying to find a blog post about AI doing judgment — the search failed because the corpus was incomplete. Even after expansion, semantic search couldn’t find the post (“AI as Götterdämmerung”) because the remembered phrasing (“not positive AI can’t do judgment too”) used different vocabulary than the actual text. Filed #681 for automated sync.
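The automated sync filed as #681 reduces to a set difference between what's published and what's indexed. A sketch of the core check, with plain arrays standing in for directory walks of the blog and corpus; the function name is invented:

```typescript
// Report which published posts are missing from the indexed corpus.
// In production both lists would come from directory walks (e.g. the blog
// source vs. ~/Projects/paul/corpus/blog/); arrays keep the logic testable.
function missingFromCorpus(published: string[], indexed: string[]): string[] {
  const have = new Set(indexed);
  return published.filter((slug) => !have.has(slug));
}
```

Anything the check returns gets copied and re-indexed, which is exactly what happened by hand with the 53 missing posts.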

The social pipeline was hardened in two ways. First, switched from a blocklist to an allowlist — only content_type:essay posts now enter the social queue. Reflections, newsletters, and podcasts are excluded regardless of author. Second, all social post URLs now carry per-platform UTM tags (utm_source=linkedin/bluesky/mastodon&utm_medium=social) so GA4 can attribute traffic by channel. Three Authexis API issues filed (#1935-#1937) for missing social post list/search/bulk-archive capabilities discovered during queue cleanup.
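Both pipeline changes are small transforms. A sketch with invented type and helper names; the utm_source and utm_medium values are the ones from the session:

```typescript
// Allowlist filter: only essays enter the social queue. Reflections,
// newsletters, and podcasts never match, regardless of author.
interface Post { slug: string; content_type: string; url: string }

function socialQueue(posts: Post[]): Post[] {
  return posts.filter((p) => p.content_type === "essay");
}

// Per-platform UTM tags so GA4 can attribute traffic by channel.
function withUtm(url: string, platform: "linkedin" | "bluesky" | "mastodon"): string {
  const u = new URL(url);
  u.searchParams.set("utm_source", platform);
  u.searchParams.set("utm_medium", "social");
  return u.toString();
}
```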

Authexis

The dashboard got a complete redesign (#743) — the single biggest UX change since the March simplification. The old vertical stack replaced by a command center: smart greeting with attention summary, color-coded action cards, proportional pipeline bar with clickable segments, distribution stats. This completed v1.5 (50/50 issues closed). Social queue performance fixed with explicit column selection and archived post exclusion.

On the agency side, the Synaxis product launch engagement progressed through two rounds of questionnaire/brief corrections. Jeffrey Jones confirmed as available for testimonial. Tagline locked: “AI for execution. Humans for judgment.” 90-day target set at 25-50 active workspaces. .vercelignore added after CLI deploy hit a 10MB body limit.

Diktura

The biggest architectural change in the project’s history: flat app_users table replaced by a proper identity graph. Three layers — persons (tenant-scoped, one per human), identity_nodes (email/phone/external ID with priority resolution), workspace_contacts (per-workspace view with traits). Progressive enrichment means anonymous users get retroactively linked when they provide an email later.
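The priority-resolution idea can be sketched in memory: each identity node carries a priority, and a lookup resolves to the person behind the highest-priority matching node. Field names and priority values here are assumptions for illustration, not the actual schema:

```typescript
// Minimal in-memory model of the identity layer.
type NodeKind = "email" | "phone" | "external_id";

interface IdentityNode {
  personId: string;  // persons: tenant-scoped, one per human
  kind: NodeKind;
  value: string;
  priority: number;  // higher wins when several nodes match
}

function resolvePerson(nodes: IdentityNode[], kind: NodeKind, value: string): string | null {
  const matches = nodes
    .filter((n) => n.kind === kind && n.value === value)
    .sort((a, b) => b.priority - a.priority);
  return matches.length ? matches[0].personId : null;
}

// Progressive enrichment: when an anonymous user later provides an email,
// a new node attaches to the same person, linking history retroactively.
function linkEmail(nodes: IdentityNode[], personId: string, email: string): void {
  nodes.push({ personId, kind: "email", value: email, priority: 10 });
}
```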

An events table provides a unified timeline index — every create action writes an event, so the inbox and customer detail page query one table instead of three parallel domain queries. The entire migration was executed via subagent-driven development: 9 sequential tasks with spec compliance review catching two critical bugs before shipping. 123 tests passing across 16 files.
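The write-through pattern is simple: every domain create also appends to one events table, so timeline reads hit a single index instead of fanning out. A sketch with invented field names, in memory rather than SQL:

```typescript
// One row per create action, whatever the domain object was.
interface TimelineEvent {
  workspaceId: string;
  subjectType: string;  // e.g. "contact", "message", "note" (illustrative)
  subjectId: string;
  createdAt: Date;
}

const events: TimelineEvent[] = [];

// Wrap any domain create so it also writes a timeline event.
function recordCreate(workspaceId: string, subjectType: string, subjectId: string): void {
  events.push({ workspaceId, subjectType, subjectId, createdAt: new Date() });
}

// The inbox and customer detail page query one table, newest first,
// instead of three parallel domain queries.
function timeline(workspaceId: string): TimelineEvent[] {
  return events
    .filter((e) => e.workspaceId === workspaceId)
    .sort((a, b) => b.createdAt.getTime() - a.createdAt.getTime());
}
```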

Earlier in the day: team management features (invite by email, role management, member removal), integration cards (Brevo, PostHog, Sentry, GitHub), inbound email webhook fix, and a lo-fi design refresh (Space Mono + DM Sans, offset drop shadows). 20 commits pushed but not yet deployed to production.

Eclectis

Short but high-impact session. The PostHog proxy route was being caught by auth middleware — every client-side event had been silently failing since the April 4 deploy. One regex fix in proxy.ts restored telemetry. Added name and plan to PostHog identify calls for plan-tier segmentation.
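The class of bug: an auth matcher whose path pattern also captures the analytics proxy route, so unauthenticated telemetry requests get rejected and dropped. A sketch of the fix shape only — the actual route and regex in proxy.ts aren't shown in the source, so the /ingest path and patterns below are assumptions:

```typescript
// Before: the matcher protected everything except an allowlisted prefix,
// which still caught the PostHog proxy — events bounced off auth silently.
const protectedBefore = /^\/(?!api\/public).*/;

// After: the analytics proxy path is explicitly excluded from auth.
const protectedAfter = /^\/(?!api\/public|ingest).*/;

function requiresAuth(path: string, matcher: RegExp): boolean {
  return matcher.test(path);
}
```

The lesson generalizes: an auth middleware's matcher is an allowlist in disguise, and any unauthenticated route added later must be carved out explicitly.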

Custom intro paragraph for manual briefings shipped (#676) — textarea that passes through to the engine, replacing the AI-generated intro when provided. Closed #664 (URL import) as already covered by existing file upload + paste features.

Synaxis AI

First real end-to-end engagement run through the agency loop. Trina dispatching personas, Alex reviewing with learn cycles, Notion status model driving the flow. The process worked but exposed gaps: loop unblocking too early (at Approved instead of Done), clumsy tmux delivery, Alex’s review not auto-triggering learn cycles. Each gap fixed in skills as encountered.

The methodology grew significantly: three new deliverable types (primary user research, secondary user research, comparative research), product launch playbook expanded from 10 to 13 deliverables, engagement questionnaire now asks for research subjects, brief template includes downstream persona inputs, research now runs before competitive analysis.

The competitive analysis was rewritten against the full AI marketing stack cobble, producing the honest strategic conclusion that blocks the positioning brief: the scrappy user doesn’t need Authexis, and the non-scrappy user won’t figure it out. This is the same reckoning that drove the paulos session’s pivot to consulting.


Cross-cutting themes

The tool layer is dead; the judgment layer is the business. This theme dominated two sessions (paulos and synaxis-ai) and has implications for the entire fleet. The SaaS products were built on a floor that keeps rising. The consulting pivot — with the three-layer assessment as lead gen — is the strategic response.

Identity and instrumentation. Three projects (diktura, eclectis, paulos) all worked on knowing who people are and what they’re doing. Diktura built a proper identity graph. Eclectis fixed PostHog so it actually works. Paulos added UTM tracking to social posts. The fleet is getting serious about measurement.

Methodology as product. Synaxis ran its first real engagement and the methodology evolved through contact with reality — three new deliverable types, updated playbook, questionnaire and brief improvements. The learning loop is working: the system gets better each cycle. This is the proof case for the consulting pitch.

Subagent-driven development. Diktura’s 9-task identity graph migration was dispatched entirely to implementation agents with spec compliance review. The pattern is maturing — complex architectural work executed by agents with human review at checkpoints.


By the numbers

Project    | Issues closed                     | Issues created           | PRs merged | Milestone status
Paulos     | 0                                 | 8 (5 scout + 3 authexis) | 0          | March: 24/24, April: 2/2
Authexis   | 3 (#743, #1862, #664)             | 3 (#1935-#1937)          | 0          | v1.5: 50/50 complete
Diktura    | 0 (architectural work, no issues) | 0                        | 0          |
Eclectis   | 3 (#676, #664, #659→backlog)      | 0                        | 0          | All clear
Synaxis AI | 2 (engagement deliverables)       | 2 (#84, #85)             | 0          |

Carry-over

  • The judgment test assessment — Tally form needs building, Zapier wiring (→ Brevo + Google Sheet), Calendly for CTAs, synaxis.ai landing page, LinkedIn post. This is the revenue path.
  • Diktura production deploy — 20 commits including the identity graph migration sitting on main, not deployed. Events backfill needed for historical data.
  • Eclectis production deploy — PostHog fix and custom intro on main but not live.
  • Synaxis strategic question — Who is the Authexis customer? Scrappy/non-scrappy paradox must be resolved before the positioning brief can proceed.
  • Social queue cleanup — ~20 reflection posts still in Authexis social queue, need manual archiving (API gaps block automation).
  • Paulos #676 — SSH command injection. Security issue, should be first dev-loop pick.

Risks

  • Foundation model commoditization is not a future risk — it’s the present reality. The SaaS revenue thesis needs to be replaced by the consulting/services thesis. The assessment is the bridge.
  • Diktura identity graph migration changes the database schema significantly. Production deploy is pending with no rollback plan documented.
  • PostHog data gap on eclectis — events were silently lost from April 4 deploy until today’s fix. Historical data starts today, not yesterday.
  • Claude Code Max throttling — aggressive rate limiting during the session. If this persists, it affects the speed of interactive development across all projects.
