
Dev reflection - February 12, 2026


Duration: 6:53 | Size: 6.31 MB


Hey, it’s Paul. Thursday, February 12, 2026.

So everything broke today. Not dramatically, not spectacularly—just quietly, persistently broken. Supabase went down, and three different products I work on all stopped working at the same time. Same infrastructure, same failure, same moment.

But here’s what I want to talk about: the failure itself isn’t interesting. What’s interesting is what it revealed about decisions I made months ago that I’d completely forgotten about.


First thing I’ve been sitting with: infrastructure becomes invisible precisely when it’s working. That’s the whole point of good infrastructure—you stop thinking about it. But invisible doesn’t mean absent. It means you’ve accumulated assumptions you can no longer see.

When I chose Supabase for authentication across multiple products, I was solving a friction problem. Setting up auth from scratch for every project is tedious. Shared infrastructure reduces that friction. Good decision, right?

Except I wasn’t making one decision. I was making two decisions bundled together. The first: use this service for auth. The second: accept that these products now share a failure mode. I only saw the first decision. The second one was invisible until today, when all three products hung for 39 seconds before returning errors.
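One concrete way to surface that second, hidden decision is to put an explicit budget on the shared dependency. Here's a minimal sketch in TypeScript, not how these products are actually wired: the function names and the three-second budget are illustrative, and the point is that a dependency failure becomes a fast, deliberate error path instead of a 39-second hang.

```typescript
// Hypothetical wrapper: bound any call to a shared dependency so an outage
// surfaces as a fast, explicit error instead of a long hang.
function withTimeout<T>(promise: Promise<T>, ms: number, label: string): Promise<T> {
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error(`${label} timed out after ${ms}ms`)), ms)
  );
  return Promise.race([promise, timeout]);
}

// Illustrative usage: give the auth provider a budget and a degraded path,
// so the product decides what failure looks like instead of inheriting a hang.
async function getSessionSafely(fetchSession: () => Promise<unknown>) {
  try {
    return await withTimeout(fetchSession(), 3000, "auth provider");
  } catch {
    return null; // render a "sign-in unavailable" state rather than blocking everything
  }
}
```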

This happens everywhere, not just in software. When an organization standardizes on a vendor, a process, a tool—they’re reducing friction. That’s real value. But they’re also creating correlation. When that shared thing breaks, everything that depends on it breaks simultaneously.

The question isn’t whether to share infrastructure. The question is whether you’ve accounted for the correlation you’re creating. Most organizations haven’t, because the friction reduction is visible and immediate, while the correlation risk is invisible until it materializes.


Second thing: I noticed today that some patterns standardize naturally while others resist it, and the difference isn’t about importance or difficulty.

I’ve been building admin interfaces across several products. Users tables, workspace management, role selectors—basic stuff. Within days, these implementations converged. They look nearly identical now. Not because I planned it that way, but because I touch admin pages constantly. Every session, I’m in there. Friction becomes obvious. Patterns emerge through repetition.

Meanwhile, observability setup—logging, monitoring, error tracking—remains ad-hoc across the same products. Different configurations, different gaps, different assumptions. One product has timeout handling. Another doesn’t. One configured alerting thresholds. Another forgot to.

Why the difference? It’s not that observability matters less. It’s that observability has a slow feedback loop. You configure it once during setup, it works, you forget about it. Then something breaks six months later and you discover you never defined what “healthy” means.

This is a general principle about what gets maintained versus what gets neglected. The things you touch daily improve through accumulated attention. The things you set up once and forget accumulate invisible assumptions. Annual performance reviews drift because managers only think about them once a year. Emergency procedures rot because you only need them during emergencies. The feedback loop determines what improves.

If you want something to stay good, you need to touch it regularly. If you can’t touch it regularly, you need a different strategy—scheduled reviews, automated testing, something that creates artificial feedback loops for things that don’t generate natural ones.
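For something like observability, that artificial feedback loop can be as small as a scheduled script that encodes what "healthy" means. A minimal sketch, assuming a Node 18+ runtime with global fetch; the URL and thresholds are placeholders. What matters is that the definition of healthy lives in code that runs on a schedule, not in someone's memory.

```typescript
// A minimal scheduled health check (run from cron or a CI schedule).
// The URL and thresholds below are illustrative placeholders.
const HEALTH_URL = "https://example.com/api/health";
const MAX_LATENCY_MS = 2000;

async function checkHealth(): Promise<void> {
  const started = Date.now();
  const res = await fetch(HEALTH_URL);
  const elapsed = Date.now() - started;

  if (!res.ok) throw new Error(`unhealthy: status ${res.status}`);
  if (elapsed > MAX_LATENCY_MS) throw new Error(`unhealthy: ${elapsed}ms exceeds ${MAX_LATENCY_MS}ms`);

  console.log(`healthy: status ${res.status} in ${elapsed}ms`);
}

checkHealth().catch((err) => {
  console.error(err instanceof Error ? err.message : err);
  process.exit(1); // non-zero exit so the scheduler or CI job can alert on it
});
```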


Third thing: I’ve been watching the boundary between automation and human judgment shift in interesting ways.

I have some automated scripts that run after every code commit. They work great for certain things—deterministic, read-only operations where the same input always produces the same output. Generate a file from a template. Update a signature. Simple stuff.

But today I hit a case where the automation kept breaking. The script needed to remember what it had done before, and it couldn’t. It fires fresh every time with no memory of previous runs. That’s not a bug—that’s the design. Stateless automation is simple and reliable precisely because it doesn’t carry state.

The solution isn’t smarter automation. The solution is recognizing where stateless logic reaches its boundary. Some operations need memory. Some need judgment about whether to proceed. Some need a human to close the loop.
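Here's roughly what that boundary looks like for a post-commit script. This is a hypothetical sketch, not the actual hook: the state file name and the work being gated are invented for illustration. The only change from a purely stateless design is a tiny piece of memory that lets the script answer "did I already do this?"

```typescript
import { execSync } from "node:child_process";
import { existsSync, readFileSync, writeFileSync } from "node:fs";

// Illustrative state file: the only memory the hook carries between runs.
const STATE_FILE = ".git/last-processed-commit";

const commit = execSync("git rev-parse HEAD").toString().trim();
const alreadyProcessed =
  existsSync(STATE_FILE) && readFileSync(STATE_FILE, "utf8").trim() === commit;

if (alreadyProcessed) {
  // A purely stateless hook can't ask this question; it would redo
  // (or double-apply) the same work every time it fired.
  console.log(`already processed ${commit}, nothing to do`);
} else {
  // ...do the one-time work here: generate a file, update a signature...
  writeFileSync(STATE_FILE, commit + "\n");
  console.log(`processed ${commit}`);
}
```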

I also built a script today that turns quick notes into structured tasks. It’s not automated—I trigger it manually, I review the output, I decide what happens next. The script handles the mechanical transformation. I provide the memory and judgment.
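A sketch of that division of labor, with the filenames, note format, and prompt all invented for illustration: the script does the parsing and proposes tasks, and nothing gets created until a person says yes.

```typescript
import { readFileSync } from "node:fs";
import { createInterface } from "node:readline";

interface Task {
  title: string;
  status: "todo";
}

// Mechanical part: turn each "- note text" line into a structured task.
function notesToTasks(raw: string): Task[] {
  return raw
    .split("\n")
    .filter((line) => line.trim().startsWith("- "))
    .map((line) => ({ title: line.replace(/^\s*-\s*/, ""), status: "todo" as const }));
}

// Judgment stays human: show the proposal, then wait for an explicit yes.
const tasks = notesToTasks(readFileSync("notes.md", "utf8")); // filename is illustrative
tasks.forEach((task, i) => console.log(`${i + 1}. ${task.title}`));

const rl = createInterface({ input: process.stdin, output: process.stdout });
rl.question("Create these tasks? (y/n) ", (answer) => {
  if (answer.trim().toLowerCase() === "y") {
    console.log("creating tasks..."); // hand off to whatever task system you actually use
  } else {
    console.log("discarded, nothing written");
  }
  rl.close();
});
```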

This is the delegation problem every manager faces, just in a different context. Which decisions can happen independently? Which require approval? The answer depends on reversibility and consequence, not capability. A stateless script can do sophisticated text processing, but if the operation needs to know “did I already do this?” then sophistication doesn’t help. The boundary isn’t about intelligence. It’s about what kind of memory the operation requires.


Fourth thing, and this one’s been nagging at me: content isn’t as portable as we pretend it is.

I’ve been moving content between different systems—marketing pages, blog posts, documentation. The naive assumption is that content is content. Write it once, render it anywhere. Markdown is markdown.

Except it isn’t. Every system makes assumptions about structure, styling, data flow, error handling. When you move content from one system to another, you’re not just moving text. You’re crossing a boundary where assumptions change.

Today I discovered styling code that had been silently doing nothing for weeks. The content looked fine because one system was compensating for the broken code. Move the content to a different system, the compensation disappears, the bug becomes visible.

This applies way beyond technical systems. When you move a process from one team to another, you’re not just moving steps. You’re moving it across a boundary where assumptions about communication, authority, timing, and quality all change. The process might work fine in its original context and break immediately in the new one—not because anyone did anything wrong, but because the receiving context makes different assumptions.

Portability is always more expensive than it looks. The content or process itself might transfer easily. The adaptation layer—the work of translating between different assumption sets—that’s where the real cost hides.


So here’s what I’m left with. Decisions bundle together in ways we don’t see until something breaks. Feedback loops determine what improves versus what drifts. Automation boundaries aren’t about capability but about what kind of memory operations require. And portability costs hide in the adaptation layer, not the content itself.

The question I keep coming back to: how do you make invisible assumptions visible before they bite you? Not after the outage, not after the migration fails, not after the process breaks in its new home. Before.

I don’t have a clean answer. But I suspect it starts with asking “what am I assuming will stay true?” every time you make a decision that’s going to become invisible. Because it will become invisible. That’s the nature of infrastructure, of process, of any decision that works well enough to stop thinking about.

The things you stop thinking about are still there. They’re just waiting.
