AI hasn’t changed what you need to know to build systems. It has made it impossible to pretend you don’t have time for the work you’ve been avoiding.
The Fundamentals Haven’t Changed — Your Excuses Have
Same compilers. Same servers. Same need to understand systems. What’s different is that the backlog of things you “never had time for” just lost its best defense.
Here’s a tension I’ve been sitting with as I use AI tools more in my daily work: in practical, hands-on terms, this feels revolutionary. The speed at which I can scaffold, refactor, and prototype is genuinely different from a year ago. But when I zoom out — when I look at what’s actually running in production — it’s the same story. Compiled code deployed to servers, handling requests, managing state, failing in the same ways systems have always failed. Both of these things are true simultaneously, and the interesting question is what sits between them.
The Debt Collector Has Arrived
The mainstream narrative about AI coding tools is one of productivity: build faster, ship more, do in an afternoon what used to take a week. And that’s real — but it’s not the most important thing that’s happening.
The more honest framing is that AI tools are a debt collector. Not for financial debt — for professional debt. The kind every team carries and every developer knows about but has learned to live with.
You know what I’m talking about. The library migration that’s been on the roadmap for three quarters and keeps getting bumped by feature work. The test coverage that plateaued at 40% because writing tests for legacy code was tedious and unrewarding. The documentation that exists only in one person’s head, or worse, in a Confluence page from 2021 that describes a system that no longer exists. The CI/CD pipeline improvements everyone agreed were important at the last retro and nobody prioritized at the next sprint planning.
These aren’t mysteries. Every experienced developer can walk through their codebase and point to the places where corners were cut, where “we’ll come back to this” became permanent, where the right thing was known but the time wasn’t there.
The 2024 Stack Overflow Developer Survey made this painfully concrete: technical debt is the top frustration at work for 63% of professional developers. A separate DX survey found that over two-thirds of developers said they lose eight or more hours per week to inefficiencies caused by technical debt, insufficient documentation, and flawed build processes. That’s a full working day, every week, lost to the accumulated cost of things that should have been done and weren’t.
AI tools don’t make these problems easier to understand. Everyone already understands them. What AI tools do is make them harder to ignore. When the cost of writing those missing tests drops from “two developer-weeks of tedious work” to “a few focused days of generation and review,” the decision not to do it stops being a constraint and starts being a choice. The backlog’s best defense — “we don’t have time” — just lost most of its credibility.
Same Foundation, New Interface
Now, I want to be careful here, because there’s a version of this argument that veers into naïveté. It goes: “AI writes the code, developers just think big thoughts.” That’s not what’s happening.
Software is still being built with the same compilers and deployed to the same servers. To build a real system — one that handles failure gracefully, scales under load, and doesn’t become a security liability — you still need to understand the same things you always did. Networking, deployment, observability, security, state management, data modeling. An AI agent can change configuration values, but you need to know which values and why. It can generate a database migration, but you need to understand the locking implications in production.
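That migration point is concrete enough to sketch. Here is a minimal, illustrative Python check — not a real linter, and the rule set is a deliberate simplification — that flags two well-known lock-heavy patterns in PostgreSQL DDL: a plain `CREATE INDEX` (which blocks writes for the duration) and `SET NOT NULL` (which takes an exclusive lock). An AI agent can generate the migration; this is the kind of judgment the reviewer still supplies.

```python
import re

# Hypothetical, simplified rules: PostgreSQL DDL patterns known to take
# blocking locks on busy tables, each paired with the safer alternative.
RISKY_DDL = [
    (re.compile(r"\bCREATE\s+INDEX\b(?!.*\bCONCURRENTLY\b)", re.I | re.S),
     "CREATE INDEX blocks writes while it builds; "
     "prefer CREATE INDEX CONCURRENTLY"),
    (re.compile(r"\bALTER\s+TABLE\b.*\bSET\s+NOT\s+NULL\b", re.I | re.S),
     "SET NOT NULL scans the table under an exclusive lock; "
     "add a CHECK constraint NOT VALID, then VALIDATE it"),
]

def review_migration(sql: str) -> list[str]:
    """Return human-readable warnings for lock-heavy statements."""
    warnings = []
    for statement in sql.split(";"):
        for pattern, advice in RISKY_DDL:
            if pattern.search(statement):
                warnings.append(advice)
    return warnings

migration = "CREATE INDEX idx_orders_user ON orders (user_id);"
for warning in review_migration(migration):
    print("WARN:", warning)
```

A real check would parse the SQL properly and know your PostgreSQL version (some of these behaviors changed in 11 and 12); the point is that knowing *which* statements are risky, and why, is exactly the foundational knowledge the tool doesn’t replace.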
What has changed is the interface to some of that work. Instead of writing every line yourself, you’re increasingly reviewing, guiding, and correcting AI-generated output. This isn’t a smaller skill — it’s a different one, and in some ways it’s harder.
Werner Vogels put it well at AWS re:Invent 2025: when you write code yourself, comprehension comes with the act of creation; when a machine writes it, you have to rebuild that understanding during review. He called this “verification debt” — and the data backs it up. Sonar’s State of Code survey found that developer toil remains steady at about 24% of the work week, regardless of how frequently developers use AI tools. The toil doesn’t shrink. It shifts. Less time writing boilerplate, more time validating suggestions. Less time drafting documentation, more time catching subtle errors in generated code.
AI’s most frequent users — the ones relying on it multiple times a day — actually report more toil from managing technical debt than infrequent users (44% versus 34%). The tool generates code fast. Understanding and maintaining that code still takes the same human judgment it always did.
This is why the “same compilers, same servers” framing matters. The foundational knowledge hasn’t been replaced. What’s been added is a new layer: the ability to evaluate and verify at speed and volume. It’s the same judgment that makes someone good at code review, applied at a different scale. If you were already strong at reading code critically, understanding system behavior, and catching non-obvious failure modes — you’re positioned well. If you were primarily strong at producing code from scratch, the ground has shifted under you.
The Agency Question
Here’s the part of this conversation that usually stays polite but needs to be said directly.
In my original post about this topic, I wrote that “those who want to can ensure every project has world-class best practices.” I stand by that — but I’ve been thinking about what that “want to” really means in practice.
You, the individual developer, might look at AI tools and see an opportunity to bring test coverage from 40% to 85%. To finally tackle the library migration. To write the architectural decision records that capture why the system was built this way, before the people who built it leave the company.
Your organization might look at the same tools and see a way to double sprint velocity. Ship the next three features on the roadmap. Cut the team by 20% and maintain the same output.
Both of these are rational responses. The difference is who captures the value of the tool.
This isn’t a tension that AI created. It’s been there every time a developer said “we should really refactor this” and a product manager said “we need this feature by Thursday.” AI just made it more visible, because the gap between “what we could do” and “what we’re choosing to do” got wider. When the excuse was “we don’t have the bandwidth,” the tension was theoretical. When the bandwidth is suddenly available, the tension becomes a decision.
I don’t have a clean answer for this. I think the developers who thrive in this moment will be the ones who can make the case — to their teams, their managers, their organizations — that investing in quality is investing in velocity. That clearing technical debt isn’t a distraction from shipping; it’s what makes sustainable shipping possible. The data supports this argument. The challenge is making it in a room where the quarterly roadmap is on the screen.
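One way to make that case in the room is back-of-envelope payback math. The numbers below are illustrative placeholders, not data from the surveys cited above: if paying down a debt item costs a fixed number of developer-hours and recovers some fraction of the weekly toil, the break-even point falls within a planning horizon.

```python
# Illustrative numbers only: when does a debt-payoff investment break even?
cost_hours = 3 * 5 * 8        # three developer-weeks of focused work
hours_saved_per_week = 4      # recover half of the ~8 toil hours/week cited above

payback_weeks = cost_hours / hours_saved_per_week
print(f"Investment: {cost_hours}h; pays for itself in {payback_weeks:.0f} weeks")
```

Thirty weeks sounds long until you remember the alternative is paying those four hours every week, forever. Swap in your team’s own estimates; the shape of the argument is what matters.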
The Reckoning Is the Opportunity
I want to end where I started — with the tension between revolutionary and familiar.
AI tools haven’t changed what you need to know to build real systems. They haven’t changed the importance of understanding the stack beneath your abstractions, the failure modes of your dependencies, or the trade-offs in your architecture. The fundamentals are the same.
What’s changed is the cost of not doing the work you’ve always known matters. The migration that was too tedious is now achievable in days. The tests that were too boring to write can be generated and reviewed in a fraction of the time. The documentation that never got prioritized is now extractable at scale.
The developers who will define the next era of this profession aren’t the ones who use AI to write code faster. They’re the ones who use it to finally do all the things they’ve always known should be done — and who can make the case that doing those things is worth the organization’s investment, not just the developer’s ambition.
The fundamentals haven’t changed. Your excuses have. What you do about that is the most interesting professional question of the next few years.
Next Step: Open your team’s backlog or issue tracker and find the oldest technical debt item that everyone agrees should be done but nobody has prioritized. Estimate what it would take with AI-assisted tooling versus the original estimate. Bring that comparison to your next sprint planning. The conversation that follows will tell you a lot about whether your organization is ready to capture the real value of these tools.
Discussion: What’s the biggest piece of technical debt in your codebase that suddenly feels achievable with AI tooling — and, honest question, will your team actually let you prioritize it?