
I ran my AI agency's first real engagement. Here's everything that happened.

Five AI personas. One client onboarding. Fifteen minutes of things going wrong in instructive ways.



Sydney asked the client a question I hadn’t scripted.

She was producing a client onboarding document — the kind of reference you write when your agency takes on a new client. Profile, voice guidelines, comms plan. Sydney is the strategist persona at Synaxis AI, the marketing agency I’ve been building. She needed product context. So she opened a channel to the client’s product session and asked: “What words or phrases should we absolutely NOT use when talking about Authexis?”

The answer came back detailed. Specific. Drawn from the product’s own docs. “Never say ‘AI-generated content.’ The content is the user’s. Never say ‘autopilot’ — control at every step is a core principle. Never say ‘AI assistant’ — it’s a production platform, not a chatbot.”

Sydney took that and built a structured voice reference with two columns: what we say, what we never say. She got it right, with one omission: a metaphor about chefs and kitchens that the product team uses. The client caught it. Alex, the reviewer, filed an update. The guide now says: always capture the client’s own metaphors. They’re the best marketing language.

That guide will never miss that step again. Not because someone remembers. Because it’s written down.

This is the story of one afternoon, one client, one deliverable. What actually happened when I stopped designing an AI marketing agency and started running one.

What Synaxis is

Twenty years of Fortune 500 consulting taught me that agencies organize around competencies, not projects. Synaxis AI works the same way. Five personas — Sydney the strategist, Maya the writer, Igor the researcher, Alex the reviewer, Trina the project manager. Each one is a document that gets smarter after every engagement.

The rest is guides, templates, and playbooks. Version-controlled. Compounding. I’m the creative director. I make the judgment calls. The personas execute.
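To make that concrete, here’s one hypothetical layout for such a repository. The essay doesn’t show the actual structure; the file and folder names below are my illustration:

```
synaxis/
├── personas/
│   ├── sydney.md    # strategist
│   ├── maya.md      # writer
│   ├── igor.md      # researcher
│   ├── alex.md      # reviewer
│   └── trina.md     # project manager
├── guides/
│   └── client-onboarding.md
├── templates/
└── playbooks/
```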

The engagement

The client was Authexis, an AI writing assistant I’m building. Internal client — I’m using my own products as test cases. The engagement: client onboarding. One deliverable: a document that gives every persona on the team enough context to start working for this client.

Here’s what happened.

8 minutes in: Trina created the client row in Notion, set up Slack channels, scaffolded the engagement with one deliverable. The loop read the board and dispatched Sydney.
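The “loop” is the scheduler that watches the board and hands work to personas. Here’s a minimal sketch of how such a dispatcher could work; the Board interface, the field names, and the poll interval are my assumptions, not the actual Synaxis code:

```python
import time

def run_loop(board, personas):
    """Poll the board; dispatch each ready deliverable to its persona."""
    while True:
        for item in board.fetch_deliverables():
            if item.status == "Ready":
                # Mark it before dispatching, so the next poll doesn't
                # pick it up again (the "In progress" status described
                # later exists for exactly this reason).
                item.status = "In progress"
                board.save(item)
                personas[item.assignee].start(item)
        time.sleep(30)
```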

Sydney gathered context. She sent a set of questions to the Authexis product session — a separate AI instance running in a terminal with full knowledge of the product’s codebase and decisions. The product session wrote back a detailed response.

Sydney produced. Profile, voice reference, comms plan. She nailed the product description, the price anchoring ($99/month vs. a $1,500/month copywriter), the tone (“confident, direct, warm but not casual”). She listed what to say and what never to say. She laid out the comms plan: who gets deliverables, which Slack channels, escalation rules.

Alex reviewed. He ran the deliverable against eight quality tests. Five of them didn’t apply — they’re designed for market-facing copy, not internal reference docs. He passed it anyway. Filed two issues: the review framework needs a separate checklist for internal documents, and the voice section relied on principles instead of concrete content samples.
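A separate checklist for internal documents is easy to express if each check declares what it applies to. A sketch, with invented check names, of how that could look:

```python
# Each check is tagged with the document types it covers, so a review of
# an internal reference doc skips the market-facing checks automatically.
CHECKS = [
    {"name": "headline carries the core claim", "applies_to": {"market-facing"}},
    {"name": "call to action is explicit",      "applies_to": {"market-facing"}},
    {"name": "voice matches the client guide",  "applies_to": {"market-facing", "internal"}},
    {"name": "claims trace to source material", "applies_to": {"market-facing", "internal"}},
]

def applicable_checks(doc_type):
    """Return only the checks relevant to this document type."""
    return [c for c in CHECKS if doc_type in c["applies_to"]]
```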

I reviewed. The document was solid. But I had notes. The “open questions” section — four questions for me about future engagement scope — was in the document body. Questions don’t belong in a reference doc. They go in the comments. The doc answers questions, it doesn’t ask them. Also, the scope crept into future engagement planning. “Should we do a product launch or a social series?” That’s a separate conversation. The onboarding doc tells you who the client is. What you do for them is a different deliverable.

I approved with comments. The system didn’t have an “approve with comments” path. That got built on the spot.

Trina delivered. She posted the document to the client’s Slack channel and sent it to the product session for review. The client confirmed accuracy, suggested adding a metaphor (“the chef and the kitchen”) to the voice section, and approved.

Alex learned. He read the comments from me and the client, identified three gaps in the guides, updated them, committed the changes to the repository. The guide now says: questions go in comments, don’t scope future engagements in onboarding docs, capture the client’s own metaphors.

Sydney polished. Added the metaphor. Reformatted the comms plan. Removed the questions from the body. Done.

Trina closed the engagement. All deliverables shipped. Posted to the internal channel: “Client onboarding engagement complete.”

One afternoon. One deliverable. The system ran end-to-end.

What broke

Not everything worked. Here are the failures that matter.

Alex approved directly. The status model assumed some deliverables could skip my review. Wrong — I review everything. Alex’s job is to catch problems. My job is to decide whether the work ships. Those are different functions. The status model got redesigned mid-engagement: Alex always passes to Review (my inbox), never to Approved.

Trina posted as me. She used my Slack account instead of her own bot. The whole point of personas is that clients interact with Trina, Maya, Sydney — not with me. One line of code fixed it. But the mistake revealed an assumption nobody had questioned.
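The one-line fix is plausible with Slack’s Python SDK: post with the persona’s bot token rather than a user token. The environment variable and channel name below are placeholders:

```python
import os
from slack_sdk import WebClient

# A bot token per persona means messages show up as "Trina",
# not as my user account.
trina = WebClient(token=os.environ["TRINA_SLACK_BOT_TOKEN"])

trina.chat_postMessage(
    channel="#client-authexis",
    text="Onboarding document delivered. Awaiting your review.",
)
```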

The status model was redesigned three times. We started with eight statuses. By the end of the day we had twelve. “In progress” got added when we realized the loop would re-dispatch a deliverable that was already being worked on. “Revisions” got added when we realized rejection comments need processing before the persona tries again. “Final learnings” and “Final tweaks” got added when the client approved with comments and we had no path for “accepted, but incorporate this feedback.”
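For concreteness, the statuses named above could be modeled as an enum like this. Only the commented additions come from the engagement; “Ready” and “Done” are my placeholder guesses, and the full set of twelve isn’t listed in this essay:

```python
from enum import Enum

class Status(Enum):
    READY = "Ready"                      # placeholder: queued, unclaimed
    IN_PROGRESS = "In progress"          # added: stop the loop re-dispatching live work
    REVIEW = "Review"                    # my inbox; Alex always routes here
    REVISIONS = "Revisions"              # added: process rejection comments before retrying
    FINAL_LEARNINGS = "Final learnings"  # added: client approved with comments
    FINAL_TWEAKS = "Final tweaks"        # added: incorporate that feedback, then close
    APPROVED = "Approved"
    DONE = "Done"                        # placeholder: engagement closed
```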

Each redesign came from hitting a real case the model didn’t cover. You don’t discover these in design. You discover them in production.

Questions ended up in the document. Sydney put “open questions for Paul” in the deliverable body. A reference doc should contain answers. The guide didn’t say where questions go. Now it does.

“Approve with comments” didn’t exist. The client approved but had feedback. The system only knew “approve” and “reject.” We designed a third path in real time: approved with comments → learn from the comments → incorporate the feedback → done. No re-review needed because the client already accepted.
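Reusing the Status sketch above, the client review gate now branches three ways instead of two. The function and field names here are illustrative:

```python
def handle_client_verdict(deliverable, verdict, comments):
    """Route a deliverable after client review: approve, approve with comments, reject."""
    if verdict == "approve" and not comments:
        deliverable.status = Status.APPROVED
    elif verdict == "approve":
        # Accepted, with feedback: learn from the comments, then fold them in.
        # No re-review, because the client already said yes.
        deliverable.status = Status.FINAL_LEARNINGS
        deliverable.pending_comments = comments
    else:
        # Rejected: the comments get processed before the persona tries again.
        deliverable.status = Status.REVISIONS
        deliverable.pending_comments = comments
```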

Five failures in one afternoon. Every one of them was made exactly once.

The pivot

By the end of the session, the system had been updated eleven times. The guide for client onboarding documents now has three rules it didn’t have that morning. The status model handles three responses at every review gate instead of two. Alex’s review process files improvement issues on every review — pass or fail. Trina’s delivery process uses her own bot identity.

None of these updates required a meeting. None of them required a training session. None of them required someone to remember to do it differently next time.

I made three judgment calls: questions don’t go in the doc body, don’t scope future engagements in onboarding, Alex doesn’t approve directly. Those three corrections are now permanent. They’re in the guide. They’re in the status model. They’re in the skill definitions. The next client onboarding will be better. The AI didn’t get smarter. My judgment got captured.

How much of what your agency knows walks out the door every time someone leaves?

Your agency might have twenty years of experience. But can it point to the document where that experience lives?

Mine can. It’s a Git repository with a commit history that shows exactly how the thinking evolved, correction by correction, engagement by engagement. Every judgment call I make gets encoded. The process only moves forward.

One engagement. Eleven updates. Every mistake made exactly once. Next up: a product launch — nine deliverables, parallel tracks, the guide already three rules smarter than this morning.

That’s not a pitch. It’s a commit log.

This essay first appeared in Philosopher at Large, a weekly newsletter on work, learning, and judgment.
