Paul Welty, PhD AI, WORK, AND STAYING HUMAN


Dev reflection - February 16, 2026




Hey, it’s Paul. Monday, February 16, 2026.

So here’s something I want to think through today. I’ve been working across several projects simultaneously, and what’s striking me isn’t the building. It’s the deleting. The removing. The taking away. And I think there’s something important in that—something that applies well beyond software, well beyond my specific work.

Let me start with the first thing I noticed.

There’s a moment in any system—any organization, any workflow, any product—where you stop asking “what else does this need?” and start asking “what is this thing actually for?” Those are fundamentally different questions, and the shift between them is where most of the real work happens.

I had a publishing platform that also generated social media teasers. Made sense when I built it. Write an essay, automatically create the promotional posts, ship everything together. Efficient, right? Except now I have a separate system whose entire job is content workflows—queuing, scheduling, prompt management. So the publishing platform was doing content work that belongs somewhere else. Not because the code was bad. Because the boundary was wrong.

Here’s what’s interesting about that. Nine files deleted. Gone. And the system got better, not worse. Because clarity about what a system doesn’t do is just as important as clarity about what it does.

You see this in organizations constantly. A team that started handling customer complaints also starts doing product feedback analysis, which drifts into competitive research, which means they’re now doing three jobs and none of them well. Not because anyone made a bad decision—each addition made sense at the time. But nobody asked the boundary question: if we have a dedicated team for competitive research, should this team still be doing it on the side?

The answer is almost always no. But the removal is harder than the addition ever was, because by now people have built habits around it. Workflows depend on it. Someone’s identity is tied to being the person who does that thing. Deletion isn’t a technical act. It’s a political one.

Second thing. I completely rewrote how my prompt system works—the part of the system that tells AI what to generate and how. The old version was incredibly flexible. You could configure mandatory preambles, per-content-type overrides, field-specific variations, conditional wrappers. It could construct almost any prompt imaginable.

And that was the problem.

When everything is configurable, nothing is predictable. I couldn’t reason about what would happen when I changed one setting, because the assembly logic had too many conditional paths. The system worked. Content was being generated. But I couldn’t explain it—not to myself six months from now, not to anyone else.

So I stripped it down to four slots. Four. System prompt, workspace context, field prompt, format instructions. One prompt per field, no compound operations, no special cases.
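To make that concrete, here is a minimal sketch of what a four-slot prompt model can look like. The names (`PromptSlots`, `build_prompt`) and the joining convention are illustrative assumptions, not the actual system: the point is that assembly is a fixed sequence with no conditional paths.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptSlots:
    """The four slots, nothing else. No overrides, no wrappers.
    These field names are hypothetical, chosen to mirror the
    description above."""
    system_prompt: str
    workspace_context: str
    field_prompt: str
    format_instructions: str

def build_prompt(slots: PromptSlots) -> str:
    """Assemble the final prompt: four parts, fixed order,
    joined by blank lines. One prompt per field, no special cases."""
    return "\n\n".join([
        slots.system_prompt,
        slots.workspace_context,
        slots.field_prompt,
        slots.format_instructions,
    ])
```

Because the assembly is a single straight-line function, you can predict the output of any configuration change by reading four fields, which is exactly the property the old conditional version lacked.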

This is the configurability trap, and it shows up everywhere. Think about any enterprise software rollout. The vendor says “it’s fully customizable” like that’s a selling point. Six months later, you have fourteen conditional workflows, nobody remembers why half of them exist, and changing anything feels like defusing a bomb. The flexibility that was supposed to serve you is now the thing preventing you from understanding your own system.

I tried to write clean documentation for the old prompt model and couldn’t. And that was the signal. If you can’t write a clear explanation of how your system works, your system is wrong. The documentation isn’t failing—your abstraction is. The inability to explain is diagnostic, not editorial.

This applies to strategy documents, org charts, compensation structures, anything. If you can’t explain it clearly, it’s not because you need a better writer. It’s because the thing itself has gotten too complicated to justify.

Third thing—and this one’s been nagging at me. I started a new project with a structure borrowed from best practices: separate directories for the web API, for a mobile app, for shared libraries. Modern, clean, scalable. Except this project is a management training tool. There is no mobile app. There might never be. The actual product is managers sitting at their desks practicing difficult conversations. I built the architecture for a product I’m not making.

Why? Because my project management tools assume that’s what you build. The infrastructure recognizes that structure, supports it, makes it easy. So you adopt it. The tools shaped the architecture more than the product requirements did.

This is how organizational defaults work. You use the budgeting template your company provides, so your project gets shaped by that template’s assumptions about what projects look like. You hire through the standard process, so your team composition reflects HR’s model of what a team should be, not what your specific work requires. You hold weekly standups because that’s what the project management methodology prescribes, not because your work actually changes on a weekly cadence.

The question I keep coming back to is: when the mismatch becomes obvious, do you simplify the structure to match reality, or do you grow into the structure you already have? Both happen. And the choice between them often determines whether a project stays focused or drifts into building things nobody asked for, just because the scaffolding was already there.

Last thing. I’m building a management simulation where you practice having tough conversations with AI personas. One of them is named Deflective Dan. That name—“Deflective Dan” instead of “Try Demo” or “Launch Simulation”—is doing more product design work than any of the backend engineering. It establishes that you’re not interacting with a system. You’re dealing with a person. A difficult one. Someone who dodges accountability in ways that feel familiar if you’ve ever managed anyone.

The button doesn’t work yet. The AI integration isn’t built. But the name has already constrained what “working” means. It’s not “can the AI generate responses?” It’s “does Dan feel like someone you’ve actually managed?” That’s a completely different bar, and it was set by an interface decision, not a technical one.

This happens more than people realize. The way you name things, the way you present them to users, the language you put on a button—these aren’t cosmetic choices layered on after the real work. They are the real work. They define what success looks like before anyone writes a line of logic. A button that says “Generate AI Response” sets one expectation. A button that says “Talk to Dan” sets a completely different one. And you’ll build a different product depending on which button you’re trying to make true.

So here’s what I’m sitting with. All of this—the deletion, the simplification, the structure questioning, the naming—it’s convergence. It’s the moment where you stop building what you planned and start building what the work is actually telling you it needs to be. You can’t get there from a whiteboard. You can only get there by building the wrong thing first, watching where it breaks, and having the honesty to take it apart.

The question is whether you’re paying attention when that moment arrives. Or whether you’re so committed to the plan that you keep building something the work has already outgrown.
