Paul Welty, PhD AI, WORK, AND STAYING HUMAN

· innovation · 1 min read

Defensive innovation: Stop de-innovating

Explore how to combat de-innovation and unleash the constant potential for creativity in your projects to drive meaningful change and growth.


It’s commonly thought that innovation depends on some sort of creative bursting forth. There does seem to be a generative kind of innovation like this, in which something new is deliberately made.

I wonder whether this kind of innovation is really so special and unique. I’m suspicious of explanations in which “something magical happens.” So I wonder whether “creation” isn’t all around us, all the time.

If it is, then why do we focus on this special, magical kind of innovation? If innovation is all around us, why don’t we see it more often?

I think this is because we see a lot of de-innovation. By that I mean that the average project or effort spins off all kinds of new ideas, and most of them get suppressed, diverted, or cancelled.

This means that “doing innovation” should focus at least as much on preventing de-innovation as on encouraging innovation itself.
