
Dev reflection - February 14, 2026


Duration: 6:28 | Size: 5.92 MB


Hey, it’s Paul. Saturday, February 14, 2026.

So I want to talk about archiving. Not the technical act of it—moving files into a folder, adding lines to a .gitignore—but the psychological act. The decision to say: this thing is done. Not broken. Not failed. Done. And what it means that we almost never actually delete anything when we do it.

This week I watched four separate projects go through what I can only describe as a ritual. Legacy code got moved into archive directories. Old decision documents got tucked away alongside the implementations they described. And in every case, the move was careful, deliberate, almost reverent. Nobody hit delete. Nobody purged the history. They moved things aside, labeled them clearly, and walked away.

Here’s what I think that reveals. We treat our past decisions like we treat old journals. We don’t want to read them. We definitely don’t want to live by them anymore. But throwing them away feels like throwing away proof that we were thinking. That we had reasons. That the choices we made weren’t random.

This shows up everywhere, not just in software. Organizations hold onto old strategic plans, old org charts, old mission statements. Not because anyone references them, but because discarding them feels like admitting those efforts didn’t matter. And there’s a real tension here, because sometimes the principles behind old decisions do transcend the specific context. And sometimes they absolutely don’t. The hard part is figuring out which is which—and most of us avoid that work entirely by just keeping everything.

What caught my attention is that architectural decision logs got archived alongside the code they described. As if the reasoning was inseparable from the implementation. That’s a choice, even if nobody made it consciously. It means we’re treating our thinking as contextual rather than universal. Which might be honest. But it also means every time we start fresh, we start without the accumulated wisdom of why we made the choices we made last time. We just remember that we made them, vaguely, and that they didn’t quite work out.

The question I keep coming back to: what would it look like to actually extract principles from old decisions before archiving them? Not preserve the decisions themselves, but distill what they taught you? Almost nobody does this. It’s harder than it sounds.
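It doesn't have to be elaborate, either. Here's a hypothetical sketch of one shape the distillation could take, assuming decision records are markdown files and each one has a "## Lessons" section worth harvesting. The paths and headings are my inventions, not anyone's convention:

```python
# Hypothetical sketch: distill the "lessons" sections out of archived
# decision records before they vanish into the archive directory.
# The directory layout and the "## Lessons" heading are assumptions.
from pathlib import Path

ARCHIVE_DIR = Path("archive/decisions")  # assumed location
DIGEST_FILE = Path("PRINCIPLES.md")      # assumed output

def extract_lessons(text: str) -> str | None:
    """Return the body of a '## Lessons' section, if one exists."""
    collecting, captured = False, []
    for line in text.splitlines():
        if line.strip().lower().startswith("## lessons"):
            collecting = True
            continue
        if collecting and line.startswith("## "):  # next section begins
            break
        if collecting:
            captured.append(line)
    return "\n".join(captured).strip() or None

def build_digest() -> None:
    entries = []
    for record in sorted(ARCHIVE_DIR.glob("*.md")):
        lessons = extract_lessons(record.read_text(encoding="utf-8"))
        if lessons:
            entries.append(f"### From {record.name}\n\n{lessons}\n")
    DIGEST_FILE.write_text(
        "# Distilled principles\n\n" + "\n".join(entries), encoding="utf-8"
    )

if __name__ == "__main__":
    build_digest()
```

The script is trivial. The hard part is writing a Lessons section honest enough to be worth harvesting.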

Second thing. I’ve been watching what happens when multiple teams—or in my case, multiple projects—converge on the same infrastructure without anyone formally coordinating that convergence. You pick a tool because it works. Someone else picks the same tool for the same reason. A third project adopts it because the first two already did and there’s momentum. Pretty soon you have four or five systems built on the same foundation, and nobody decided that. It just happened.

The upside is obvious. When someone figures something out in one context, that knowledge transfers almost instantly. Not through documentation. Not through formal channels. Through recognition. Someone sees a problem, remembers solving something similar last week in a different project, and the fix moves sideways. It’s fast, it’s efficient, and it feels like magic when it works.

But here’s the part nobody talks about. That same convergence means your failure modes synchronize too. When the foundation has a subtle flaw, it doesn’t surface once. It surfaces everywhere, simultaneously, and each instance looks like a local problem until someone steps back far enough to see the pattern. You end up debugging the same issue three times before realizing it’s not your code—it’s the platform, or the shared assumption, or the architectural choice everyone inherited without questioning.
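To make that "stepping back" concrete, here's a hypothetical sketch. It assumes each project writes errors to a log at a known path; the paths and the normalization rules are illustrative only:

```python
# Hypothetical sketch: normalize recent error messages from several
# projects and flag any signature that shows up in more than one,
# which usually hints the problem lives in the shared foundation,
# not in local code. Log locations are assumptions.
import re
from collections import defaultdict
from pathlib import Path

LOGS = {
    "project-a": Path("project-a/logs/errors.log"),
    "project-b": Path("project-b/logs/errors.log"),
    "project-c": Path("project-c/logs/errors.log"),
}

def signature(line: str) -> str:
    """Strip ids and numbers so similar errors collide on one key."""
    line = re.sub(r"0x[0-9a-fA-F]+", "ADDR", line)  # hex addresses first
    line = re.sub(r"\d+", "N", line)                # then bare numbers
    return line.strip().lower()

def shared_failures() -> dict[str, set[str]]:
    seen: dict[str, set[str]] = defaultdict(set)
    for project, log in LOGS.items():
        if not log.exists():
            continue
        for line in log.read_text(errors="ignore").splitlines():
            if "error" in line.lower():
                seen[signature(line)].add(project)
    # Keep only signatures that surfaced in two or more projects.
    return {sig: projs for sig, projs in seen.items() if len(projs) > 1}

for sig, projects in shared_failures().items():
    print(f"SHARED ({', '.join(sorted(projects))}): {sig}")
```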

This is the delegation problem dressed up in infrastructure clothing. Every manager faces a version of this. You standardize processes because consistency enables efficiency. But consistency also means a bad process fails at scale instead of failing locally where it’s cheap to fix. The question isn’t whether to standardize. It’s how much visibility you maintain into the shared assumptions underneath your standards, and whether anyone’s job is to stress-test those assumptions before they break on their own.

Third idea, and this one’s been building for a while. There’s a maturity curve in how people integrate AI-generated content into their work, and I think most people are stuck at the wrong stage. The early stage is plumbing—can I get the system to generate something at all? That’s where most of the excitement lives. The tutorials, the demos, the “look what I made it do” posts.

But the interesting stage, the one that actually matters, is quality control. Not “can it generate this?” but “is what it generated reliable enough to use without checking every single output?” And that’s a fundamentally different problem. It’s not an engineering problem. It’s an editorial problem. It’s a judgment problem.

What I’ve been seeing in my own work is that the systems are adapting to accommodate nondeterminism rather than trying to eliminate it. You ask for structured data, sometimes you get structured data, sometimes you get a conversational response with the data buried in it. So you build fallback parsers. You add format instructions to prompts. You create spinners so users know something’s happening during the unpredictable wait. You’re not fixing the unreliability. You’re designing around it.
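Here's a minimal sketch of that fallback-parser move, assuming the model's reply arrives as a plain string. None of this is from the projects I mentioned; it's just the pattern in miniature:

```python
# A minimal sketch of the fallback-parser pattern: try strict JSON
# first, then dig a JSON object out of a conversational reply.
# Everything here is illustrative, not production code.
import json
import re

def parse_structured_reply(reply: str) -> dict | None:
    """Try strict JSON first, then extract JSON buried in prose."""
    # Happy path: the model did what the format instructions asked.
    try:
        return json.loads(reply)
    except json.JSONDecodeError:
        pass

    # Fallback: the data is buried in a conversational response,
    # often inside a fenced block or embedded mid-sentence.
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            pass

    # Give up and let the caller decide: retry, re-prompt, or
    # surface the raw text to a human.
    return None

# The caller treats None as "unreliable output", not as a crash.
reply = 'Sure! Here is the data: {"status": "done", "items": 3}'
print(parse_structured_reply(reply))  # -> {'status': 'done', 'items': 3}
```

Notice what the function doesn't do: it never promises success. The None case is a first-class outcome, and the architecture around it has to treat it that way.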

And that’s actually the right move, I think. Because the unreliability isn’t a bug to be fixed. It’s a characteristic of the medium. Like managing people. You don’t eliminate human variability. You build systems that account for it, that catch the important deviations, that let the unimportant ones pass through.

The question this raises for anyone integrating AI into their work: are you still trying to make the output perfectly predictable, or have you started designing for the reality that it won’t be? Because those two approaches lead to completely different architectures—of software, of teams, of workflows. One is a control problem. The other is a curation problem. And most people haven’t made that shift yet.

Last thing. Visibility. Cross-project audits, regular check-ins, systematic comparisons—they work. When someone explicitly creates a surface for comparison, gaps become obvious and get closed fast. But here’s the catch: it only works when someone does it. It’s a ritual, not a system. Nobody automates the question “are we still aligned?” You have to ask it on purpose, on a schedule, with discipline.
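For what it's worth, the ritual can be made cheap to run, even if it can't be made automatic in the sense that matters. A hypothetical sketch, assuming sibling projects that each pin dependencies in a requirements.txt; the shared assumption you'd actually check is yours to pick:

```python
# Hypothetical sketch of making the alignment question cheap to ask:
# compare one shared assumption (here, pinned dependency versions)
# across sibling projects and report where they've drifted apart.
# The directory layout and requirements.txt are stand-ins for
# whatever your projects actually share.
from collections import defaultdict
from pathlib import Path

PROJECTS_ROOT = Path(".")  # assumed: each project is a subdirectory

def read_pins(req_file: Path) -> dict[str, str]:
    pins = {}
    for line in req_file.read_text().splitlines():
        line = line.strip()
        if "==" in line and not line.startswith("#"):
            name, version = line.split("==", 1)
            pins[name.lower()] = version
    return pins

def report_drift() -> None:
    # package -> {version -> [projects pinned to it]}
    seen: dict[str, dict[str, list[str]]] = defaultdict(
        lambda: defaultdict(list)
    )
    for req in PROJECTS_ROOT.glob("*/requirements.txt"):
        for pkg, ver in read_pins(req).items():
            seen[pkg][ver].append(req.parent.name)
    for pkg, versions in sorted(seen.items()):
        if len(versions) > 1:  # same package, different pins: drift
            detail = "; ".join(
                f"{v}: {', '.join(p)}" for v, p in versions.items()
            )
            print(f"DRIFT {pkg} -> {detail}")

if __name__ == "__main__":
    report_drift()
```

A script like this doesn't own the ritual. Someone still has to run it, read the output, and decide what realignment is worth.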

And the moment you stop asking, drift begins. Not dramatically. Quietly. Each project making locally reasonable decisions that slowly diverge from what everyone else is doing. By the time you notice, the cost of realignment is ten times what it would have been if you’d caught it a month earlier.

So the real question isn’t whether to do regular audits. It’s who owns the ritual. Who’s responsible for the question nobody’s asking? Because that’s the job that matters most and gets funded least—in software, in organizations, in life.

That’s where I’ll leave it. Happy Valentine’s Day. Go check on something you’ve been assuming is fine.
