Paul Welty, PhD

You were trained to suppress yourself

Organizations didn't accidentally reward the machine-self. They engineered it. And you cooperated because it worked—until now.

You were trained to suppress the most valuable parts of yourself. Not by accident. By design.

Organizations needed processors, not meaning-makers. They needed people who followed processes, hit metrics, produced outputs on schedule. Judgment was expensive. Creativity was unpredictable. Questions slowed things down.

So work rewarded the version of you that showed up on time, stayed quiet, and did what you were told. The machine-self.

Here’s what got suppressed: curiosity about whether the work mattered. Judgment about whether the answer was actually right, not just whether it was the expected one. Creativity that didn’t fit the template. Reflection that might reveal the whole system was pointed in the wrong direction.

These aren’t small things. They’re what make you human rather than programmable.

I saw this pattern everywhere when I consulted for organizations. Smart people, capable people, who had learned to turn off entire dimensions of themselves at the office door. They’d arrive with questions, insights, concerns about direction. And they’d suppress all of it because the system didn’t reward any of it.

The suppression was functional. It worked. For 200 years, being machine-like was how you succeeded. You got promoted for reliability, not judgment. For execution speed, not discernment. For throughput, not wisdom.

And now AI can do all of that better than you can.

The skills you spent decades developing—the ones that got rewarded, that got you promoted, that proved you were valuable—are exactly the ones getting automated. Because they were always machine skills. You were just playing the role of the machine until the actual machine showed up.

This is what people deny. Not that AI is coming. Not that jobs will change. But that the version of yourself you’ve been performing at work was already diminished. That the suppression happened to you, and you cooperated with it, and it cost you something real.

The machine-self isn’t something you chose. It’s what remained after the filtering process. You showed up as your full self—curious, questioning, creative, discerning—and the workplace systematically filtered out every dimension that didn’t produce predictable output.

You learned quickly. Ask too many questions: flagged as difficult. Suggest a different approach: not a team player. Point out that the metrics are measuring the wrong thing: not strategic thinking, just resistance.

So you stopped. You filtered yourself. You became what the system needed you to be.

And it worked. Until the system automated the exact capabilities it had trained you to develop.

Here’s the uncomfortable part: you can’t go back and reclaim those suppressed capacities without admitting they were suppressed in the first place. Without acknowledging that the professional success you built came from making yourself smaller. More compliant. More machine-like.

That’s the denial. Not “my job might be automated.” But “I was already being treated as automatable, and I accepted it because it was rewarded.”

The question isn’t whether AI will replace you. The question is whether the version of you that work created was ever actually you at all. Whether there’s something underneath the machine-self that can’t be automated because it was never mechanical to begin with.

When I taught at Emory, I’d watch graduate students arrive with curiosity and leave with caution. They’d start asking real questions—questions that challenged the framing, that exposed contradictions, that required sitting with uncertainty. And then they’d learn that academic success meant playing the game: cite the right people, use the right language, don’t challenge too directly.

The ones who succeeded learned to suppress what made them good thinkers. Because the system rewarded compliance, not courage.

The same thing happens in every organization. You learn what gets rewarded. You perform that. Everything else gets filed away as “not appropriate for work.”

The denial is thinking this was just practical adaptation. That there was no cost. That you can perform the machine-self for decades and still have access to the full self when you need it.

You can’t. The capacities you don’t use atrophy. The questions you stop asking stop occurring to you. The judgment you don’t exercise becomes uncertain.

So when you feel threatened by AI, what you’re actually feeling is the recognition that you built your professional identity on skills that were always automatable. That the system trained you to be replaceable and called it career development.

That’s what we deny. That the suppression happened. That it cost something. That the professional success came at the expense of what actually makes you irreplaceable.

AI didn’t create this problem. It exposed it. The machine-self was always a compromise. It just worked well enough that you could ignore the cost.

Now you can’t.

The work isn’t learning to use AI tools. The work is reclaiming what you suppressed to succeed in systems that didn’t value it. Judgment. Discernment. Creativity. The ability to ask whether the work matters, not just whether you did it correctly.

Those capacities are still there. But they’re not going to come back just because you want them to. You have to practice them. Strengthen them. Stop filtering yourself to fit the machine’s requirements.

That’s the work. Not staying relevant. Not learning new tools. Becoming actually human again instead of pretending the machine-self was you all along.

This is what The Work of Being is about. Not tips for the AI age. The uncomfortable recognition that you’ve been performing a diminished version of yourself for years, and the path to reclaiming what was suppressed.

If that’s landing, the book is here: The Work of Being: Staying Human in the Age of AI.

This essay first appeared in my weekly newsletter, The Work of Being, where I write once a week about work, learning, and judgment.
