Paul Welty, PhD · AI, Work, and Staying Human

· development

The silence that ships

Three projects independently discovered the same bug pattern today — code that reports success when something important didn't happen. The most dangerous failures don't look like failures at all.

Duration: 4:59 | Size: 4.57 MB


Hey, it’s Paul. Thursday, February 27th.

I touched six projects today. Different codebases, different languages, different purposes. And three of them independently surfaced the same bug. Not the same literal bug — different code, different contexts. But the same pattern. Code that fails silently. Code that catches an error, swallows it, and tells you everything’s fine.

Here’s what’s interesting: in each case, the feature worked. Users could use it. Tests passed. The system appeared healthy. But something important wasn’t happening, and nobody knew.

In one project, a server action was calling redirect() after a successful operation — standard Next.js pattern. But redirect() works by throwing an error. And the error handler around the action caught it. Swallowed it. Reported failure. The user saw nothing happen. Hit the button again. Nothing. The redirect was being treated as a crash. And because the UI showed no error message — just… nothing — it looked like the app was frozen, not broken.
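A minimal sketch of that bug, with the framework stripped away. The `RedirectError` class here is a stand-in for the internal error Next.js throws from `redirect()` (the real framework identifies it by an error digest, not a class), so this only illustrates the shape of the mistake:

```typescript
// Stand-in for the control-flow error Next.js throws from redirect().
class RedirectError extends Error {
  constructor(public readonly location: string) {
    super(`redirect to ${location}`);
  }
}

// redirect() "succeeds" by throwing -- it never returns.
function redirect(location: string): never {
  throw new RedirectError(location);
}

// Buggy wrapper: the broad catch swallows the redirect,
// so a successful operation is reported as a failure.
function buggyAction(fn: () => void): { ok: boolean } {
  try {
    fn();
    return { ok: true };
  } catch {
    return { ok: false };
  }
}

// Fixed wrapper: rethrow the control-flow error so the framework
// can handle it; catch only genuine failures.
function fixedAction(fn: () => void): { ok: boolean } {
  try {
    fn();
    return { ok: true };
  } catch (err) {
    if (err instanceof RedirectError) throw err;
    return { ok: false };
  }
}
```

The fix is one line: recognize the error that is actually control flow and let it escape, instead of treating every thrown value as a crash.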

In another project, a pre-commit hook was supposed to generate podcast audio whenever you committed a blog post. The audio generation would fail — bad API key, network timeout, whatever — and the hook would just… continue. Commit succeeds. Post goes live. No audio. You’d only notice when someone tried to play the episode and got silence. The hook’s job was to be a quality gate, and it had been rubber-stamping everything.
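The hook bug reduces to the same shape. The names below are hypothetical (the post doesn't show its hook), but the contrast is the point: a gate that can't return a nonzero exit code isn't a gate.

```typescript
type HookResult = { exitCode: number; warning?: string };

// Leaky hook: the generation error is swallowed, the commit proceeds,
// and the post ships without audio.
function leakyHook(generateAudio: () => void): HookResult {
  try {
    generateAudio();
  } catch {
    // ignored -- the "quality gate" rubber-stamps the commit
  }
  return { exitCode: 0 };
}

// Gating hook: a failure becomes a nonzero exit code, which is how
// git actually blocks a commit from a pre-commit hook.
function gatingHook(generateAudio: () => void): HookResult {
  try {
    generateAudio();
    return { exitCode: 0 };
  } catch (err) {
    return { exitCode: 1, warning: `audio generation failed: ${String(err)}` };
  }
}
```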

Third project: assessment scoring. The AI grades a student’s work, sends the score to the database. The API call to the AI succeeds. The write to the database fails. The function returns the score. Everything downstream thinks it worked. But the score is gone. The student sees their result, refreshes, and it’s vanished.
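Sketched in the same style (again with hypothetical names, since the post doesn't show the code): the buggy version returns the score whether or not the write landed; the honest version makes persistence part of the return value, so downstream code can't mistake a lost write for a saved one.

```typescript
type Graded = { score: number; persisted: boolean };

// Buggy: the database error is caught and ignored, and the function
// claims the score was saved. Everything downstream builds on the lie.
function buggyGrade(score: number, save: (s: number) => void): Graded {
  try {
    save(score);
  } catch {
    // write failure silently dropped
  }
  return { score, persisted: true };
}

// Honest: the caller learns the write failed and can retry or alert,
// instead of showing the student a score that vanishes on refresh.
function honestGrade(score: number, save: (s: number) => void): Graded {
  try {
    save(score);
    return { score, persisted: true };
  } catch {
    return { score, persisted: false };
  }
}
```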

Three different bugs. Same shape. A try-catch block that catches too broadly, handles the error by ignoring it, and presents success when something important didn’t happen.

I’ve been thinking about why this pattern is so persistent. And I think it’s because our instinct as builders is to make things resilient. We don’t want our systems to crash. We don’t want users to see ugly error screens. So we wrap things in try-catch. We add fallbacks. We default to graceful degradation. And most of the time, that’s right.

But there’s a difference between resilience and dishonesty. A resilient system absorbs a shock and continues operating in a degraded but known state. A dishonest system absorbs a shock and pretends nothing happened. The first one is engineering. The second one is a lie.

And the lie compounds. Because when a system tells you it succeeded and it didn’t, you build on that success. You make decisions based on it. The student studies based on scores that don’t exist. The listener subscribes to a podcast that sometimes has episodes and sometimes has silence. The developer commits code through a quality gate that waves everything through.

Here’s where this connects to something bigger than code. Organizations do this constantly. A quarterly review where everyone reports green and nobody mentions the project that quietly dropped scope. A hiring process that technically follows all the steps but nobody asks the hard question. A product launch where the metrics look good because the dashboard doesn’t track the thing that’s actually broken.

The silence that ships. That’s what I’m calling it. When you build a system — technical or organizational — that’s optimized to avoid showing failure, you don’t eliminate failure. You just lose the ability to see it. The failures still happen. They just happen quietly, and they accumulate, and by the time someone notices, the cost of fixing them has grown by an order of magnitude.

So what’s the fix? It’s not “let everything crash.” That’s overcorrecting. The fix is: make the failure path as intentional as the success path. When you write a catch block, ask yourself: what does the caller need to know? If the answer is “that this failed,” then your catch block needs to surface that, not suppress it. When you build a quality gate, ask: what happens when the gate says no? If the answer is “it can’t say no,” you don’t have a gate. You have theater.

I spent part of today making my AI tools argue with each other. Literally. One agent builds the feature. Another agent tries to break it. A third agent reads the diff against the spec and asks whether the code actually solves the stated problem. They cycle up to three times. And the thing that strikes me is: this adversarial pattern exists specifically to surface the silence. The builder has every incentive to ship. The breaker has every incentive to find the thing that looks fine but isn’t. Without the breaker, the silence ships.
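The cycle described above can be sketched as a loop. This is a toy, not the actual tooling: the agents are plain functions standing in for AI sessions, and the three-round cap is the only detail taken from the post.

```typescript
type Finding = string | null; // a problem found, or null if clean

// Builder proposes; breaker hunts for the thing that looks fine but isn't;
// repeat up to maxRounds times or until the breaker comes up empty.
function adversarialLoop(
  build: (feedback: Finding) => string,
  breakIt: (artifact: string) => Finding,
  maxRounds = 3,
): { artifact: string; clean: boolean } {
  let feedback: Finding = null;
  let artifact = "";
  for (let round = 0; round < maxRounds; round++) {
    artifact = build(feedback);
    feedback = breakIt(artifact);
    if (feedback === null) return { artifact, clean: true };
  }
  // Out of rounds: ship with a visible, known doubt -- not silence.
  return { artifact, clean: false };
}
```

The structural point survives the simplification: without `breakIt`, the loop degenerates to `build` alone, and whatever the builder missed ships quietly.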

You can do this at the organizational level too. Not by creating adversarial culture — that’s toxic. By creating adversarial process. A code review that’s expected to find problems, not rubber-stamp approval. A retrospective where “what went wrong” isn’t a formality. A product review where someone’s job is to ask: “What are we not seeing?”

The uncomfortable truth is that most of us would rather ship silence than surface failure. Surfacing failure means admitting something’s broken. It means the dashboard turns red. It means the meeting gets longer. It means someone has to do the work of fixing it instead of moving on to the next thing.

But the silence ships whether you see it or not. The only question is whether you find it now, when the fix is small, or later, when it’s expensive.

What are your systems telling you succeeded today that actually didn’t?
