The lock-in happens at the integration boundary
Duration: 5:48 | Size: 5.31 MB
Hey, it’s Paul. Wednesday, February 11, 2026.
So here’s something I’ve been sitting with today. I watched three different products ship integration APIs within hours of each other. Same basic problem—let external systems send data in. Three completely different implementations. And the interesting part isn’t that teams made different choices. It’s that they didn’t really make choices at all. The infrastructure they’d picked months ago made the choices for them.
One product built a REST endpoint with SHA-256 hashing and bearer tokens and scoped permissions—security-first design that looks like best practices on paper. Another product just added a Rails controller action because the monolith was already there handling auth and database access in the same process. The third product went through three iterations in a single session—webhooks, then API v1 with bearer tokens, then just API keys after realizing they didn’t need half of what they’d built.
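To make the first product's "security-first" shape concrete, here is a minimal sketch of what SHA-256-hashed API keys typically look like: the server persists only a digest of each issued key and verifies requests with a constant-time comparison. The function names (`issue_key`, `verify_key`) are illustrative assumptions, not taken from any of the three products.

```python
import hashlib
import hmac
import secrets

def issue_key():
    """Generate a raw API key plus the digest persisted server-side.

    The raw key is shown to the integrating system exactly once;
    only the SHA-256 digest is stored.
    """
    raw = secrets.token_urlsafe(32)
    digest = hashlib.sha256(raw.encode()).hexdigest()
    return raw, digest

def verify_key(presented: str, stored_digest: str) -> bool:
    """Check a presented key against the stored digest in constant time."""
    candidate = hashlib.sha256(presented.encode()).hexdigest()
    return hmac.compare_digest(candidate, stored_digest)
```

The point of the hashed-and-compared dance is that a leaked database doesn't leak usable keys. Whether you need that ceremony at all is exactly the question the third product answered by stripping back to plain API keys.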
Here’s what this reveals: infrastructure lock-in only becomes visible at integration boundaries. You pick a deployment architecture for reasons that seem good at the time—performance, familiarity, whatever. Then months later you need to connect that system to something else, and suddenly you discover all these downstream effects on authentication patterns, on testing workflows, on deployment complexity. Effects that were invisible until you tried to cross the boundary.
This isn’t a technical problem. This is how organizations work. The decisions that constrain you most aren’t the ones you agonize over. They’re the ones you made quickly, early, when you didn’t know enough to see the implications. And by the time you see them, you’re not deciding anymore. You’re discovering what you already committed to.
Second thing I noticed. There’s this pattern with automation where simple cases work beautifully and complex cases break in ways that teach you something.
Take email signatures. A git hook fires on every commit, finds the latest published essay, generates HTML signature files. Works perfectly. Same input always produces same output. Read-only, deterministic, stateless.
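The stateless version is almost trivially small. Here is a hypothetical sketch of what such a post-commit hook might do, assuming essays live in a directory and are named by date; the paths, filenames, and HTML shape are all assumptions for illustration, not the author's actual setup.

```python
#!/usr/bin/env python3
# Hypothetical post-commit hook: stateless, deterministic, read-only.
# Same repository contents in -> same signature file out, every time.
from pathlib import Path

def latest_essay(essays_dir: Path) -> Path:
    # "Latest" is simply the lexically greatest filename, which works
    # when essays are named by date, e.g. 2026-02-11-some-slug.md.
    return max(essays_dir.glob("*.md"))

def build_signature(essay: Path) -> str:
    # Derive a human-readable title from the slug portion of the name.
    title = essay.stem.split("-", 3)[-1].replace("-", " ").title()
    return f'<p>Latest essay: <a href="/{essay.stem}/">{title}</a></p>'

def main(essays_dir: Path, out_file: Path) -> None:
    out_file.write_text(build_signature(latest_essay(essays_dir)))
```

Nothing here reads prior state or writes anything but its own output, which is why it can fire on every commit without coordination.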
Now take social media posts. Same approach—git hook fires on commit, generates content variants. Except now you need to know: have I already queued this post? And git hooks are stateless by design. They fire and forget. They have no memory of what they’ve done before.
You could bolt on state management—store the last-queued commit SHA in a file somewhere. But now you’re building memory on top of a system designed to be memoryless. You’re fighting the tool instead of using it.
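What bolting on that memory looks like, roughly: the hook records the last SHA it queued and checks it before acting. This is a sketch under stated assumptions; the state-file location and the `queue_post` callback are hypothetical, not from the episode.

```python
# Sketch of memory bolted onto a stateless git hook: remember the last
# commit SHA already queued so re-fires don't double-post.
from pathlib import Path

def already_queued(sha: str, state_file: Path) -> bool:
    return state_file.exists() and state_file.read_text().strip() == sha

def mark_queued(sha: str, state_file: Path) -> None:
    state_file.write_text(sha)

def maybe_queue(sha: str, queue_post, state_file: Path) -> bool:
    # The hook now has memory: read, compare, write. Each step is a new
    # failure mode (stale file, concurrent hooks, SHAs rewritten by a
    # rebase) that the stateless version simply couldn't have.
    if already_queued(sha, state_file):
        return False
    queue_post(sha)
    mark_queued(sha, state_file)
    return True
```

Fifteen lines, and the automation has quietly become a system: something with state to corrupt, races to lose, and edge cases to debug.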
The question this raises: what’s the half-life of automation? At what complexity threshold does the automated process become more fragile than the manual one it replaced?
I think the answer has to do with whether the automation needs to remember anything. If it’s purely reactive—same trigger, same response, every time—automation works. The moment it needs to track what it’s done, or check whether something already exists, or handle irregular inputs… you’re not automating anymore. You’re building a system. And systems need maintenance, debugging, monitoring. All the overhead you were trying to avoid.
This applies way beyond code. Think about any workflow you’ve automated in your organization. The ones that keep working are the ones that don’t need memory. Expense reports that route to the same approver every time. Calendar invites that always include the same Zoom link. The ones that break are the ones that need context. “Send a reminder if they haven’t responded”—but what counts as a response? “Escalate if it’s been too long”—but too long compared to what?
When automation breaks this way, it's showing you where stateless logic reaches its boundary. That's not failure. That's the automation doing its job: revealing what kind of problem you actually have.
Third thing. This one’s been bugging me.
Admin tooling—user tables, workspace management, role selectors—converged across four products in about two weeks. Everyone ended up with basically the same patterns because they’re visible. You touch admin pages every session. You see what works and what doesn’t. Shared solutions emerge fast.
Observability—error tracking, analytics, event logging—is still copy-paste-adapt for every single product. Same setup steps. Same configuration gotchas. Same “oh right, you have to configure the CORS proxy” discoveries. But it never standardizes.
Why? It’s not that observability is less important. It’s not that it’s harder. It’s that it happens once during deployment and then disappears from view. You set it up, it works, you forget about it. Until the next product, when you rediscover the same steps from scratch.
What gets reused isn’t determined by importance or effort invested. It’s determined by visibility and feedback loops.
This explains so much about organizational knowledge. The stuff that spreads is the stuff people see each other doing. Meeting formats. Slide templates. The way people structure emails. You watch, you copy, patterns emerge.
The stuff that stays siloed is the stuff that happens once and disappears. Onboarding checklists. Deployment procedures. The weird workaround someone figured out for that one vendor’s API. Important knowledge, hard-won knowledge, but invisible to everyone who wasn’t there when it happened.
If you want knowledge to spread, you have to make it visible. Not documented—visible. Documentation is where knowledge goes to die. Visibility means people encounter it in the normal course of their work, repeatedly, until the pattern becomes obvious.
So here’s the question I’m left with. How much of what we call “technical debt” or “organizational dysfunction” is actually just… constraints we committed to before we understood them, automation that outgrew its stateless origins, and knowledge that never spread because no one could see it?
And if that’s true, what would it look like to design for discovery instead of decision? To assume you’ll learn the real requirements through implementation, not before it?
I don’t have a clean answer. But I think it starts with paying attention to integration boundaries, automation breakpoints, and invisible knowledge. That’s where the interesting stuff hides.