Junior engineers didn't become profitable overnight. The work did.

We've been celebrating that AI made junior engineers profitable. That's not what happened. AI made it economically viable to give them access to work that actually builds judgment, work we always knew they were capable of but could never afford to assign.
The coordination tax on junior work just dropped
I spent an hour last week watching a junior developer use Claude to build a feature I would have assigned to a mid-level engineer six months ago. She finished it in two days. It worked. The code was fine. She understood what she built.
This is not supposed to be possible.
The conventional wisdom says junior developers are expensive. They write buggy code. They ask too many questions. They need supervision. The coordination tax is high. You hire them anyway because you need a pipeline, because someone has to do the grunt work, because eventually they become seniors.
That math just changed.
The coordination problem was always about time
Junior developers have always been capable of more than we let them do. The constraint was never ability. It was the cost of getting them there.
A junior can learn to build a feature. But first they have to understand the codebase. Then they have to figure out the patterns. Then they have to write the code. Then they have to fix it when it breaks. Then they have to do it again. Each step requires input from someone more senior. Each question interrupts someone else’s work. The feature that takes a senior two days takes a junior two weeks, and costs three people’s time.
The coordination tax made junior work expensive. So we rationed it. We gave them narrow tasks. We kept them away from anything complex. We told ourselves this was training, but mostly it was risk management.
AI tools remove most of that tax. A junior with Claude can explore a codebase without asking questions. They can try patterns without waiting for review. They can iterate without supervision. The time from “I don’t know how to do this” to “I built a working version” collapsed from days to hours.
Simon Willison reported from a recent Thoughtworks retreat that “Juniors are more profitable than they have ever been. AI tools get them past the awkward initial net-negative phase faster.” Junior developers now move through the unproductive early phase in weeks instead of months. They are what the retreat participants called “a call option on future productivity.”
The junior I watched last week asked me three questions. Two years ago, that feature would have required twenty.
The skill that matters is knowing what to build
Here is what did not change: someone still has to decide what to build.
Boris Cherny, who created Claude Code at Anthropic, put it plainly: “Someone has to prompt the Claudes, talk to customers, coordinate with other teams, decide what to build next.” Anthropic keeps hiring developers even though they build tools that write code. They know what matters.
The bottleneck in software development was never typing speed. It was understanding what needed to be built, why it mattered, and how it fit with everything else. AI makes the typing faster. It does not make the understanding easier.
Junior developers with AI can now build things quickly. But they still need someone to tell them what things. They still need context about why this feature matters more than that one. They still need help understanding how their work connects to the customer problem or the business goal.
This is where the productivity gains actually show up. Not in the code itself, but in the reduced time between “we should build this” and “it’s done.” The junior builds it faster. The senior spends less time explaining how. Everyone spends more time on the work that actually requires judgment.
A developer at OpenAI told Lenny Rachitsky that roughly 95% of engineers there use Codex, often working with fleets of 10 to 20 parallel AI agents. Code review times dropped from 10 to 15 minutes down to 2 to 3 minutes. The time savings are real. But someone still has to know what code to write, and someone still has to review whether it does the right thing.
The coordination tax dropped. The judgment tax did not.
Mid-level engineers are the actual problem
The Thoughtworks retreat identified something uncomfortable. According to Willison’s reporting, “The real concern is mid-level engineers who came up during the decade-long hiring boom and may not have developed the fundamentals needed to thrive in the new environment.”
This is the group that should worry. Not juniors. Not seniors. The people in the middle.
Junior developers take to AI tools faster than their seniors. Willison notes that juniors "are better at AI tools than senior engineers, having never developed the habits and assumptions that slow adoption." They have no muscle memory to unlearn. They have no attachment to old workflows. They just use what works.
Senior engineers have judgment. They know what matters. They can debug the AI’s mistakes because they have seen those mistakes before. They can evaluate whether the generated code is good because they know what good looks like.
Mid-level engineers often have neither advantage. They learned to code during a period when jobs were plentiful and standards were loose. Many got promoted based on years of service rather than demonstrated skill. They know how to use the old tools but lack the fundamentals to evaluate the new ones. They can write code but struggle to assess whether it is correct.
The retreat participants discussed solutions: apprenticeship models, rotation programs, lifelong learning structures. Then they admitted that “no organization has yet solved the problem of retraining mid-level engineers” through any of those approaches. This population is the bulk of the industry by volume. Retraining them is genuinely difficult.
Organizations will have to choose. Invest heavily in retraining, or accept that a significant portion of their engineering workforce cannot adapt. Neither option is cheap. Both are necessary.
The productivity gains are real but uneven
The numbers tell a complicated story. One widely cited survey found that 93% of developers use AI tools, but productivity gains remain stuck at around 10%. Developers report saving about four hours per week. That is meaningful but not transformative.
The gap between adoption and results makes sense. Coding was never the bottleneck. As one developer noted in discussion of the survey, even doubling coding speed might just mean spending more time in meetings. The actual enjoyable part of the job gets automated away.
Controlled experiments show developers complete well-defined tasks 55% faster with AI assistance. But those experiments exclude integration, review, and deployment. They measure typing speed, not delivery velocity. When you zoom out to the full development cycle, the gains shrink.
Research compiled by Panto AI found that “Many teams report faster coding but little improvement in delivery velocity or business outcomes, showing organizational results are inconsistent despite individual speedups.” The perception gap is real. Developers feel more productive. Organizations see limited improvement in what actually ships.
The problem is quality. AI-generated code can introduce subtle defects and security risks that increase downstream work. Without proper governance, teams optimize for speed at the expense of correctness. They commit code faster than their review capacity can handle. The bottleneck moves from writing to verification.
Organizations that realize genuine gains share common practices. They implement automated testing gates, deploy security scans tuned for AI failure patterns, enforce pull request size limits, and align incentives to code quality rather than output volume. The Panto AI research notes that “Organizations realize productivity gains only when they implement governance controls.”
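One of those controls, the pull request size limit, is simple enough to sketch. The function below is an illustrative gate, not anything from the Panto AI research: it parses `git diff --numstat` output and flags oversized changes. The 400-line threshold is an arbitrary assumption for the example.

```python
# Sketch of a PR size gate, one of the governance controls described above.
# The numstat parsing is standard git output; the 400-line limit is a
# hypothetical policy choice, not a number from the research cited.

def pr_too_large(numstat: str, limit: int = 400) -> bool:
    """Return True if added + deleted lines in `git diff --numstat`
    output exceed the limit. Binary files report '-' and are skipped."""
    total = 0
    for line in numstat.strip().splitlines():
        added, deleted, *_ = line.split("\t")
        if added == "-" or deleted == "-":
            continue  # binary file; no line counts available
        total += int(added) + int(deleted)
    return total > limit
```

A CI job would feed it the diff against the base branch and fail the build when it returns true, forcing the split before review capacity is swamped.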
One developer described maintaining a shared markdown file that documents every mistake and constraint the AI encounters. “Over time, the agent stops repeating errors because expectations are explicit,” he explained. This kind of systematic approach works. But it requires discipline that most teams lack.
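The shared-notes approach is easy to picture. A minimal sketch of such a file, with entirely hypothetical entries (the developer quoted above did not publish his), might look like:

```markdown
# AI working notes: conventions and known mistakes
<!-- Hypothetical example; every entry below is illustrative. -->

## Constraints
- All database access goes through the repository layer; never import the ORM in handlers.
- Feature flags are read at request time, not cached at startup.

## Mistakes the agent has repeated
- 2026-02-10: Generated tests that mocked the module under test. Mock collaborators only.
- 2026-02-14: Used deprecated v1 endpoints. All new calls use v2.
```

The point is not the format. It is that expectations the team holds implicitly get written down where the agent reads them on every run.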
The window is closing
Sherwin Wu from OpenAI believes "the next 12 to 24 months are a rare window where engineers can leap ahead before the role fully transforms." He is right, but the window is shorter than that.
Engineers who learn to work effectively with AI tools now will compound that advantage. They will get faster at prompting, better at evaluating output, more skilled at debugging AI mistakes. They will build intuition about what works and what fails. That intuition takes time to develop.
Engineers who wait will find themselves competing with people who have that intuition. The productivity gap between AI power users and everyone else is widening. It will not close.
This is not about learning to use ChatGPT. Everyone can do that. This is about developing judgment about when AI helps and when it hurts. About knowing which tasks to automate and which to do manually. About building the muscle memory that lets you move fast without breaking things.
The developers who figure this out in the next year will be dramatically more productive than their peers. The developers who do not will find their skills increasingly obsolete.
Organizations face the same choice. The ones that invest now in governance, in training, in systematic approaches to AI-assisted development will pull ahead. The ones that just give everyone Copilot and hope for the best will see minimal gains and maximum risk.
What this means for hiring
The junior developer I watched last week will be mid-level in two years instead of four. She will get there with better fundamentals because she had to understand what the AI was doing in order to direct it. She will be more valuable than the mid-level engineers who learned during the boom.
This changes hiring strategy. Junior developers are now a better investment than they have been in a decade. They are cheaper than mid-level engineers, faster to productivity, and more adaptable to new tools. They are call options on future productivity, and those options just got more valuable.
Mid-level engineers are riskier. You need to assess not just what they can do, but whether they have the fundamentals to adapt. Can they debug code they did not write? Can they evaluate whether AI-generated code is correct? Can they learn new tools quickly? Many cannot.
Senior engineers remain essential. They provide the judgment that AI cannot replicate. They decide what to build. They evaluate quality. They mentor juniors. Their value increased because the coordination tax on junior work dropped. They can now use their judgment across more people.
The shape of the team is changing. Fewer mid-level engineers. More juniors. The same number of seniors, but with greater reach. The pyramid is getting shorter and wider.
This is not a prediction. It is already happening. The teams that recognize it first will have an advantage. The teams that keep hiring like it is 2021 will waste money and time.
The coordination tax on junior work just dropped. The organizations that understand what that means will move faster than everyone else.
Further reading
Quoting Thoughtworks — Simon Willison’s Weblog
Quoting Boris Cherny — Simon Willison’s Weblog
“Engineers are becoming sorcerers” | The future of software development with OpenAI’s Sherwin Wu — Lenny’s Newsletter
Productivity gains from AI coding assistants haven’t budged past 10% — Hacker News
AI Coding Productivity Statistics 2026: Gains, Tradeoffs, and Metrics — Panto AI Blog
GPT-5.3 Codex Boosts Team Productivity with 24/7 Development — LinkedIn