Dev reflection - February 10, 2026
Duration: 7:20 | Size: 6.72 MB
Hey, it’s Paul. Tuesday, February 10, 2026.
I want to talk about where complexity actually lives. Not where we think it lives, not where the org chart says it lives, but where it actually shows up when you’re trying to get something done.
Here’s what I noticed this week. I shipped a bunch of features across several projects—admin pages, calendar views, integrations, the actual product work. Each took a handful of commits. Straightforward. Then I spent twice as many commits debugging deployment configuration. Connection pooler hostnames. Healthcheck timeouts. Environment variable plumbing. Statement cache settings for transaction mode.
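To make that concrete, here is roughly the shape of the database side of that configuration: a minimal sketch, assuming an asyncpg-style client behind a transaction-mode pooler. The environment variable names and the default port are illustrative, not the project's actual values.

```python
import os

import asyncpg


async def create_db_pool() -> asyncpg.Pool:
    """Connect through the transaction-mode pooler rather than the direct host."""
    # Pooler hostname and port are environmental details: they differ per
    # database instance, which is exactly why they get re-debugged per project.
    # DB_POOLER_HOST / DB_POOLER_PORT and the 6543 default are hypothetical.
    return await asyncpg.create_pool(
        host=os.environ["DB_POOLER_HOST"],
        port=int(os.environ.get("DB_POOLER_PORT", "6543")),
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database=os.environ["DB_NAME"],
        # Transaction-mode pooling can't keep prepared statements pinned to a
        # session, so the client-side statement cache has to be turned off.
        statement_cache_size=0,
    )
```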
Now, the traditional mental model says application logic is the hard part and deployment is “just ops.” Something you hand off. Something that happens after the real work. But when I’m spending more problem-solving cycles on Railway healthcheck timeouts than on building a calendar view, that mental model is wrong. The deployment configuration is the product work. It’s not a detail. It’s not an afterthought. It’s where the actual complexity lives.
This applies way beyond code. Think about any knowledge work. Where do people actually get stuck? Usually not on the core task. The lawyer doesn’t struggle with legal reasoning—she struggles with document management systems, filing procedures, getting the right people to sign off. The consultant doesn’t struggle with the analysis—he struggles with getting access to the data, navigating client politics, formatting deliverables to spec. The complexity migrates to the seams. To the coordination points. To the places where your work has to interface with systems you don’t control.
So here’s the question: when you’re planning a project, are you budgeting time for where the complexity actually lives? Or where you think it should live?
Second thing. I’ve been eliminating JavaScript from some dashboards. Not because JavaScript is bad, but because I realized something: a lot of what I was doing client-side wasn’t actually dynamic. It was periodic updates pretending to be real-time.
Take a clock widget on a dashboard. The old way: render a placeholder, load JavaScript, fetch the current time, update the DOM. The new way: generate the timestamp when the page builds. Static HTML. No scripts. The dashboard shows what time it was when the build ran, not what time it is now.
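In code, the new way is barely anything. A minimal sketch of that build-time step (the output path and markup here are made up for illustration):

```python
from datetime import datetime, timezone
from pathlib import Path


def render_clock_widget(out_path: Path = Path("public/widgets/clock.html")) -> None:
    """Render the dashboard clock as static HTML at build time: no script, no fetch."""
    built_at = datetime.now(timezone.utc)
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text(
        f'<span class="clock">Updated {built_at:%Y-%m-%d %H:%M} UTC</span>\n'
    )


if __name__ == "__main__":
    render_clock_widget()
```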
That sounds like a downgrade, right? Stale data. But here’s the thing—if the build runs every few minutes, and nobody’s staring at the dashboard waiting for the second hand to tick, the staleness is invisible. Users don’t notice. The tradeoff isn’t performance versus features. It’s architectural simplicity versus the appearance of immediacy.
This pattern showed up in another project too. Email signatures that show your latest blog post. Old approach: JavaScript that detects the most recent post and updates the link. New approach: a hook that runs when you publish, finds the latest post, generates static HTML with the right URL baked in. No runtime logic. No API calls. Yes, there’s a brief window where the signature might link to a post that hasn’t fully deployed yet. But that window is measured in seconds, and the simplicity gained—no client-side state management, no error handling for failed API calls—is substantial.
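A sketch of what such a publish hook can look like, assuming a file-based site layout. The directory names and URL are hypothetical; the shape is the point: run at publish time, resolve the latest post, write static HTML.

```python
from pathlib import Path

# Hypothetical site URL and layout, for illustration only.
SITE_URL = "https://example.com"
POSTS_DIR = Path("content/posts")
SIGNATURE_OUT = Path("public/signature.html")


def latest_post_slug() -> str:
    """Pick the most recently modified post; its filename stem is the slug."""
    posts = sorted(POSTS_DIR.glob("*.md"), key=lambda p: p.stat().st_mtime)
    return posts[-1].stem


def build_signature() -> None:
    """Publish hook: bake the latest post's URL into a static signature fragment."""
    url = f"{SITE_URL}/posts/{latest_post_slug()}/"
    SIGNATURE_OUT.parent.mkdir(parents=True, exist_ok=True)
    SIGNATURE_OUT.write_text(f'<p>Latest post: <a href="{url}">{url}</a></p>\n')


if __name__ == "__main__":
    build_signature()
```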
The broader question here: how much runtime complexity exists in your systems purely to hide delays that users wouldn’t notice anyway? How much of what you call “dynamic” is actually just periodic updates that could happen at build time, at publish time, at any time other than the moment the user loads the page?
This isn’t about being cheap with compute. It’s about being honest about what “real-time” actually means for your use case. Sometimes it means milliseconds. Sometimes it means “whenever someone publishes something new.” Knowing the difference changes what you build.
Third thing. I’ve been extracting patterns from projects into a shared knowledge base. Deployment configurations, architectural decisions, lessons learned. The idea is: figure it out once, document it, reuse it next time.
Here’s what’s interesting. The documentation exists. I’ve got detailed notes on pooler configuration, port binding, statement cache settings. And I still spent multiple commits debugging the same issues on the next project.
At first I thought this was a knowledge transfer problem. The docs weren’t clear enough, or I wasn’t reading them carefully. But that’s not it. The issue is that each project’s deployment surface is unique enough that pattern matching doesn’t fully work. Some decisions—like “use a claim-then-process pattern for polling”—you make once and reuse verbatim. Other details—like which pooler hostname your specific database instance uses—you debug every time because they’re environmental, not architectural.
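For reference, claim-then-process is the kind of pattern that does reuse verbatim. A minimal sketch, assuming a Postgres-backed job table; the table and column names are invented:

```python
import asyncpg

# Claim exactly one pending job atomically; SKIP LOCKED keeps concurrent
# pollers from grabbing the same row. The `jobs` schema here is invented.
CLAIM_ONE = """
UPDATE jobs
   SET status = 'processing', claimed_at = now()
 WHERE id = (
         SELECT id
           FROM jobs
          WHERE status = 'pending'
          ORDER BY created_at
          FOR UPDATE SKIP LOCKED
          LIMIT 1
       )
RETURNING id, payload;
"""


async def handle_job(job_id: int, payload: str) -> None:
    """Placeholder for whatever the project actually does with a claimed job."""
    ...


async def poll_once(pool: asyncpg.Pool) -> bool:
    """One polling tick: claim a job first, then process it outside the claim query."""
    row = await pool.fetchrow(CLAIM_ONE)
    if row is None:
        return False  # nothing pending
    await handle_job(row["id"], row["payload"])
    return True
```

The query is the part that carries over verbatim; the connection settings around it are the part that gets re-debugged.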
That split, reuse verbatim versus debug every time, is the difference between a principle and a procedure. Principles transfer. “Use connection pooling in transaction mode” is a principle. It applies everywhere. Procedures don’t transfer cleanly. “Set pooler hostname to aws-0” is a procedure. It might be aws-1 on your next project. Or something else entirely.
Organizations confuse these constantly. They document procedures and call them best practices. Then people follow the procedures in contexts where they don’t apply, and things break, and everyone blames the documentation. The documentation wasn’t wrong—it just encoded the wrong level of abstraction.
So the question becomes: how do you encode the difference? How do you signal to future-you, or to your team, which patterns to reuse verbatim and which ones need adaptation? That’s not a technical problem. That’s a knowledge management problem. And most organizations haven’t solved it.
Last thing. I noticed I’m replicating infrastructure across projects mechanically rather than building shared components. Email signatures, subfooters, syndication—each project reimplements the same patterns using its native tooling. Python hooks here, Rails helpers there, Hugo templates somewhere else. Nothing is abstracted into a shared dependency.
The advantage is obvious: a change in one project can’t break the others. The cost is also obvious: fixes don’t automatically propagate. When I update a subfooter link, I have to sync it to every portfolio app manually.
This is the coupling question that every organization faces. Shared components reduce duplication but create coordination costs. When the shared component changes, everything that depends on it has to adapt. Mechanical replication creates duplication but eliminates coordination. Each project is independent. Changes are local.
There’s no universal answer. But there’s a useful heuristic: what’s the cost of divergence? If your projects can safely diverge—if it doesn’t matter that one has a slightly different subfooter than another—then mechanical replication is fine. If divergence creates real problems—inconsistent branding, security vulnerabilities, user confusion—then the coordination cost of shared components is worth paying.
The mistake is treating this as a technical decision. It’s not. It’s a decision about how tightly coupled your work needs to be. And that depends on your organization, your users, your risk tolerance. Not on what’s architecturally elegant.
So here’s what I’m sitting with. Complexity lives at the seams, not the core. “Dynamic” often means “periodic updates hiding behind runtime logic.” Principles transfer, procedures don’t. And coupling is a business decision disguised as a technical one.
The question I keep coming back to: what would it look like to budget time, attention, and resources based on where complexity actually lives? Not where we wish it lived. Not where the job descriptions say it should live. But where it actually shows up, every time, demanding to be solved.
Featured writing
Why customer tools are organized wrong
This article reveals a fundamental flaw in how customer support tools are designed—organizing by interaction type instead of by customer—and explains why this fragmentation wastes time and obscures the full picture you need to help users effectively.
Infrastructure shapes thought
The tools you build determine what kinds of thinking become possible. On infrastructure, friction, and building deliberately for thought rather than just throughput.
Server-Side Dashboard Architecture: Why Moving Data Fetching Off the Browser Changes Everything
How choosing server-side rendering solved security, CORS, and credential management problems I didn't know I had.
Books
The Work of Being (in progress)
A book on AI, judgment, and staying human at work.
The Practice of Work (in progress)
Practical essays on how work actually gets done.
Recent writing
You were trained to suppress yourself
Organizations didn't accidentally reward the machine-self. They engineered it. And you cooperated because it worked—until now.
Dev reflection - February 09, 2026
I want to talk about something I noticed this weekend that I think applies far beyond the work I was doing. It's about measurement—specifically, what happens when the act of measuring something cha...
Dev reflection - February 08, 2026
I want to talk about what happens when copying becomes faster than deciding. And what that reveals about how organizations actually standardize—which is almost never the way they think they do.
Notes and related thinking
Dev reflection - February 07, 2026
I've been thinking about friction. Not the dramatic kind—not the system crash, not the project that fails spectacularly. I mean the quiet kind. The accumulation of small things that don't quite wor...