The accommodation tax
Every time I ask an AI agent for a change, I still cringe. The flinch response trained into me by years of working with humans hasn't unlearned itself, even now that the other side is incapable of pushback.
Every time I ask an AI agent for a change, I still cringe.
Not because the change is hard. Not because the agent will push back, complain, or silently resent me for it. But because my nervous system doesn’t know that yet. Somewhere deep in the wiring built over twenty-five years of managing human teams, there’s a flinch response that fires every time I’m about to redirect someone’s work. This will cause a conflict. They’ll take it personally. They’ll do the work but you’ll pay for it in morale. Pick your battles.
The battles I was picking were never about the work. They were about the feelings surrounding the work. And I didn’t realize how much of my decision-making was shaped by that until the feelings disappeared.
The tax
I spent most of my career making suboptimal decisions to avoid interpersonal friction. Not dramatic compromises. Small ones. Keeping a mediocre paragraph because the writer was sensitive about revisions. Approving a design that was 80% right because the designer had already revised it twice and a third round would read as distrust. Routing work to the person who would accept it gracefully instead of the person who would do it best.
These weren’t irrational fears. The pushback was real. The designer who received a third round of revisions did, in fact, start delivering slower on everything after that. The writer who got honest feedback did, in fact, stop volunteering for assignments. The senior developer who was told his architecture was wrong did, in fact, spend six months quietly undermining the replacement. These aren’t hypothetical costs. I watched them happen. More than once, I watched good work get sabotaged by someone whose ego was bruised by the feedback that produced it.
Managing humans means managing emotions, and emotions have consequences. The question is whether you see those consequences or just absorb them.
I absorbed them for years. I thought that was leadership.
What I thought was patience was actually avoidance
There’s a version of management wisdom that says: give people space to arrive at the right answer themselves. Don’t micromanage. Trust the process. Let them own it.
This is good advice when the person genuinely needs time to think. It’s terrible advice when the person needs clear direction and you’re withholding it because you’re afraid of how they’ll react. I confused these two situations constantly. I’d wait three days for someone to figure out what I could have told them in a sentence. Not because I respected their autonomy, but because I was managing their feelings instead of managing their work. And because the last time I didn’t wait, the person spent the next two weeks doing the minimum while looking for another job.
With AI agents, the pretense evaporates. “Change the headline” takes one sentence, costs zero emotional capital, and lands in three seconds. There is no posturing, no negotiation, no reading the room. I say what I mean. The work happens. What surprises me most is how much faster everything moves when you remove the social choreography.
The flinch is real
I still hesitate before asking for a major revision. I still soften my language when I should be direct. I still phrase requests as suggestions when they’re requirements. Old habits built on old fears. The AI doesn’t care. It processes “change this entirely” with the same equanimity as “tweak the opening.” But I feel the difference. My body still prepares for impact that never comes.
This isn’t about AI being better than humans. It’s about discovering, through contrast, how much of my professional behavior was defensive. How much energy went into managing reactions instead of managing outcomes. How many conversations I rehearsed in my head before having them, not because the content was difficult but because the reception was genuinely dangerous. I once lost an entire quarter of a project because I gave direct feedback to the wrong person on the wrong day. The feedback was correct. The project still died.
What the fleet taught me
I run a fleet of AI agents across thirteen projects. None of them have feelings in the way human collaborators have feelings. But this produces something I never had with human teams: total psychological safety for me, the person directing the work. I can change my mind at 3pm and nobody’s afternoon is ruined. I can say “this approach is wrong, start over” and the agent starts over without the three-day recovery period. I can be honest about what I want without calculating the political cost of honesty.
And here’s the uncomfortable part: my work is better. Not because the AI is smarter than the humans I worked with. It isn’t. Not at judgment, not at taste, not at the things that matter most. My work is better because I’m finally saying what I actually think instead of what I think will be received well.
The human cost of working with humans
The best work I’ve ever been part of was collaborative, human, messy, and real. The dinner parties, the whiteboard sessions, the 2am debugging with someone who cared as much as I did. Those are irreplaceable.
But between those peaks was a vast plateau of accommodation. Status meetings where the actual status was known by everyone before the meeting started but we performed the ritual anyway. Feedback sandwiches where the bread existed solely to make the filling less painful. Hiring decisions influenced by who would “fit the culture,” meaning who would be easy to manage, not who would do the best work. And the quiet departures. People who left not because the work was bad but because they couldn’t handle being told, politely and accurately, that their work needed to be better.
I didn’t build an AI fleet because I hate working with people. I built it because I recognized, with a start, how much of my working life was spent managing around the work instead of doing the work. And once I saw it, I couldn’t unsee it.
The question I can’t answer yet
If the accommodation tax is real, if all of us are spending 30%, 40%, 50% of our professional energy managing the feelings surrounding the work instead of doing the work, then what does it mean that the tax is suddenly optional? Not for everyone. Not yet. But for a growing number of people who can direct AI agents, the tax dropped to zero overnight.
Is the friction actually the point? The thing that makes the work human, that builds real relationships, that produces the 2am debugging sessions and the whiteboard breakthroughs?
Or is the friction just friction? Something we tolerated because we had no alternative, and romanticized because we couldn’t imagine the work without it?
I don’t know. But I know this: every time I catch myself softening a request to an AI agent that doesn’t have feelings, I learn something new about the shape of the cage I was in. And twenty-five years of working with humans left me with a flinch response that an afternoon with AI agents has started to heal.