2026-04-05 — First full engagement run, methodology evolution, strategic reckoning
What shipped today
Ran the Authexis product launch engagement end-to-end from questionnaire through competitive analysis. This was the first real test of the agency loop: Trina dispatching personas, Alex reviewing with learn cycles, the Notion status model driving the flow. The process worked but exposed every gap: the loop tried to unblock deliverables too early (at Approved instead of Done), delivery via tmux was clumsy, and Alex’s review didn’t automatically trigger the learn cycle. Each gap was fixed in the skills as we hit it, not after.
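The Approved-vs-Done fix is the one worth pinning down, since that gate is what every downstream persona waits on. A minimal sketch of the corrected gate in Python, assuming a simple dependency model; Status, Deliverable, and is_unblocked are illustrative names, not the skill’s actual API:

```python
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    NEW = "New"
    DRAFT = "Draft"
    PRODUCED = "Produced"
    REVIEWED = "Reviewed"
    APPROVED = "Approved"
    DELIVERED = "Delivered"
    DONE = "Done"


@dataclass
class Deliverable:
    name: str
    status: Status = Status.NEW
    depends_on: list["Deliverable"] = field(default_factory=list)

    def is_unblocked(self) -> bool:
        # The bug: gating on Status.APPROVED let downstream personas start
        # before delivery and client sign-off. Done is the only safe gate.
        return all(dep.status is Status.DONE for dep in self.depends_on)
```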
The methodology evolved significantly through the engagement. Three new deliverable types were created: primary user research (T3 — interviews and surveys with real people), secondary user research (T4 — public signal scan, Reddit/G2/forums), and comparative research (T4 — adjacent practices, not competitors). The product launch playbook grew from 10 to 13 deliverables. The engagement questionnaire now asks “who can we talk to?” for research subjects. The engagement brief template now includes a “key inputs for downstream work” section that directly briefs each downstream persona. Research now runs before competitive analysis, not after — so Sydney has real buyer language before mapping the landscape.
The session ended with a strategic reckoning. A “Steal my AI marketing stack” infographic showed that a solo marketer can run the entire operation with 12 mostly-free tools for $50-60/mo. The competitive analysis was rewritten against this cobble, and the honest conclusion surfaced: the scrappy user who figures out 12 tools doesn’t need Authexis, and the non-scrappy user who won’t figure out 12 tools also won’t figure out Authexis. Paul confirmed this matches what he’s seen with testers. This blocks the positioning brief until the strategic question is resolved: who is the customer, and does one exist?
Completed
- Engagement questionnaire: produced → reviewed → approved → delivered → Done
- Engagement brief: produced → reviewed → approved → delivered → Done
- Competitive analysis: produced → reviewed → rewritten against full AI marketing stack → Draft
- #84 — Survey tool integration: updated with Tally + Zapier decision
- #85 — Lead gen assessment deliverable type (filed)
- Agency-work skill: Alex’s review now includes learn cycle inline
- Agency-loop skill: Phase 1 delivery now handles the response in the same tick, with a T1/T2 distinction (see the sketch after this list)
- Trina added to Notion Owner dropdown
- Type dropdown updated with all new deliverable types
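The same-tick change in the agency-loop skill is easier to see as code than to describe. A hedged sketch: approved_deliverables, deliver, poll_response, handle_response, tier, and is_ack are all hypothetical names, and what the T1/T2 distinction actually gates is my assumption (light tiers closing on a simple acknowledgment), not the skill’s documented behavior:

```python
def run_tick(loop) -> None:
    """One Phase 1 tick: deliver approved work and handle the reply in-tick."""
    for d in loop.approved_deliverables():
        loop.deliver(d)                        # e.g. post into the tmux session
        d.status = "Delivered"
        # The fix: poll for the response in the same tick instead of
        # returning here and picking the reply up a full cycle later.
        response = loop.poll_response(d)
        if response is None:
            continue                           # no reply yet; next tick re-checks
        if d.tier in ("T1", "T2") and response.is_ack:
            d.status = "Done"                  # assumed: light tiers close on an ack
        else:
            loop.handle_response(d, response)  # heavier tiers route through review
```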
Carry-over
- Competitive analysis at Draft — rewritten against the 12-tool cobble but needs Alex review and Paul approval. Contains the strategic flag about the customer gap.
- Strategic question blocks positioning brief — “who is the customer?” must be answered before positioning. The scrappy/non-scrappy paradox is real. Possible pivots: open source the products, reposition as agency infrastructure, or acknowledge the methodology is the real product.
- Primary user research (New) — Igor needs to interview Jeffrey Jones and survey Paul’s marketing consultant contacts. No survey tool yet (Tally decision made but not implemented).
- Secondary user research (New) — Public signal scan ready for Igor.
- Comparative research (New) — Adjacent practices: AI-led interviews, social media planning, ghostwriting voice capture.
- Three learn cycle methodology updates committed — questionnaire (budget explicit, metrics specific), engagement brief (downstream persona inputs), competitive analysis (mechanism search, verbatim quotes).
Risks
- Foundation models eating the products. Claude already covers 5 of 12 tiles in the AI marketing stack. The gap between Authexis and the free cobble is closing every quarter. This isn’t a technical risk — it’s an existential positioning risk.
- Tester bounce pattern. Paul reports that non-scrappy testers bounce off Authexis. If confirmed at scale, the target audience may not exist as defined.
Flags and watch-outs
- The old user-research/ deliverable directory still exists alongside the new primary-user-research/ and secondary-user-research/ directories. Should be cleaned up or deprecated.
- PRODUCT.md still says “9 deliverables” for product launch; the playbook is now at 13. Needs update.
- The competitive analysis was rewritten mid-engagement, breaking the normal flow (it should have gone through research first per the updated playbook). For this engagement, treat the existing analysis as a head start.
Next session
- Resolve the strategic question. Before touching the positioning brief, Paul needs to decide: is there a customer for Authexis as a product? The three options: (a) reposition as agency infrastructure for marketing consultants to deliver through, (b) pivot to Synaxis-the-agency as the product with open-sourced tools, (c) find the thin wedge of capable-but-time-constrained users. This is a Paul decision, not an AI decision.
- Run the three research deliverables if the engagement continues — primary (Jeffrey Jones interview), secondary (Reddit/G2 scan), comparative (adjacent practices). These can proceed in parallel regardless of the strategic question, since the research is useful either way.
- Review the rewritten competitive analysis — it’s at Draft with the honest conclusion about the cobble.
- Clean up methodology — old user-research directory, PRODUCT.md deliverable count, DECISIONS.md still references /agency-triage.