The Comprehension Extinction: AI Isn’t Replacing Engineers. It’s Eliminating the Ones Who Understand.
I built our hiring process to filter out people who don’t understand fundamentals. It’s not complicated: explain how the Node.js event loop works, name design patterns you’ve actually used, describe how an LLM functions.
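The event-loop question has a one-screen litmus test. This is my own illustrative snippet, not a real interview transcript: predict the output order, and explain why.

```javascript
// Classic Node.js event loop ordering: all synchronous code runs first,
// then the microtask queue (resolved promises), then timer callbacks.
const order = [];

order.push('sync start');

setTimeout(() => order.push('timer'), 0);            // timers phase (macrotask)
Promise.resolve().then(() => order.push('promise')); // microtask: drains before timers

order.push('sync end');

// Let the timer fire, then print the recorded order.
setTimeout(() => {
  console.log(order.join(' -> '));
  // -> sync start -> sync end -> promise -> timer
}, 10);
```

An engineer who understands the event loop explains *why* the promise beats the zero-delay timer. A form-filler guesses.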
Five years ago, maybe 30% of candidates failed these questions.
Now it’s closer to 80%.
People with 10 years of experience. Senior titles. GitHub profiles full of commits. And they can’t explain how the tools they use every day actually work.
They’re not engineers. They’re form-fillers. They don’t build systems. They assemble frameworks and pray.
And then Sam Altman says: “Maybe we do need less software engineers.”
The industry heard “less engineers.” I heard “less people who understand anything.”
We’re already there.
The Wrong Conversation
Everyone’s debating: “Can engineers review AI-generated code fast enough?”
Wrong question.
The right question: “Do the engineers reviewing this code actually understand what the fuck is happening?”
Because speed doesn’t matter if nobody comprehends the system.
The Real Problem
AI generates code at mid-level quality. Sometimes good. Often plausible-looking. Always confident.
It produces code that:
Passes tests
Looks reasonable in a diff
Follows patterns it’s seen before
Has zero understanding of your specific architecture, edge cases, or blast radius
To catch what AI misses, you need an engineer who:
Knows the system end-to-end
Understands why things were built the way they were
Can predict second-order effects
Recognizes when “tests pass” means nothing
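A made-up but representative example of “tests pass” meaning nothing: an AI “cleanup” swaps a loose null check for a strict one. The suite only ever passes `null`, so it stays green; production also passes `undefined`.

```javascript
// Original: loose equality deliberately catches both null and undefined.
function label(user) {
  if (user == null) return 'anonymous';
  return user.name;
}

// AI "cleanup": strict equality looks more correct in a diff,
// but now undefined slips past the guard and user.name throws.
function labelRefactored(user) {
  if (user === null) return 'anonymous';
  return user.name;
}

console.log(label(undefined));      // 'anonymous'
console.log(labelRefactored(null)); // 'anonymous' — the only case the tests cover
// labelRefactored(undefined)       // TypeError in production
```

The diff is one character. The tests pass. Only someone who knows why the loose check was there catches it.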
These engineers are called seniors. Principals. Staff. Architects.
They’re expensive.
They’re the first ones getting cut.
The Experiment Accelerates
55,000 jobs cut in 2025 with AI explicitly cited. Then 30,000 more in the first six weeks of 2026.
Amazon cut 16,000 in January. CEO Andy Jassy: “We will need fewer people doing some of the jobs that are being done today.”
Pinterest cut 15%, “reallocating resources to AI-focused roles.” Then fired two engineers who built a tool to track which colleagues got laid off. CEO Bill Ready called them “obstructionist.”
Dow cut 4,500. Block cut 1,100. The pattern repeats weekly.
Cut the expensive people. Keep the AI. Let the remaining team “scale.”
Here’s the contract nobody signed but everyone accepted:
AI generates at machine speed
Humans review at human speed
Humans take blame at production speed
When things break, it’s never “the AI screwed up.” It’s “the engineer should have caught it.”
But catching it requires understanding the system. Understanding requires experience. Experience requires years of actually building things.
You can’t shortcut comprehension with faster generation.
The Pipeline That’s Disappearing
Ask yourself: where do senior engineers come from?
They come from junior engineers who spent years:
Writing code
Making mistakes
Understanding why things break
Building mental models of complex systems
Now picture 2026:
Junior joins company. AI writes most of the code. Junior reviews AI output, clicks approve, moves tickets. Never builds mental model. Never understands the system. Never makes the formative mistakes.
Five years later: they’re “senior” by title. But they’ve never actually built anything. They’ve supervised a machine they don’t understand producing code for a system they don’t understand.
Who reviews the AI then?
This isn’t a capacity problem. It’s comprehension extinction.
We’re eliminating the pipeline that produces engineers who actually understand things.
The Klarna Warning Nobody’s Hearing
Klarna was the AI-efficiency poster child. They cut aggressively, bragged about AI doing the work of 700 customer service agents. Stock went up. LinkedIn celebrated. Every CEO took notes.
Then reality:
CEO Siemiatkowski, 2025: “Cost unfortunately seems to have been a too predominant evaluation factor... what you end up having is lower quality.”
They’re hiring humans again.
But the lesson isn’t landing. Because the incentive structure rewards the cut, not the comprehension.
CFO sees: “Headcount reduction. Savings.”
CFO doesn’t see: “Critical system knowledge walked out the door.”
Until production explodes. Then it’s an “incident.” Not a strategy failure. Never a strategy failure.
The Autonomous Coding Fantasy
The current hype: agentic coding, autonomous agents, AI that “just handles it.”
Codex. Claude Code. Cursor. Copilot Workspace. Everyone’s racing to remove humans from the loop entirely.
The pitch: “AI understands your codebase and makes changes autonomously.”
The reality: AI pattern-matches against your codebase and makes changes confidently.
Confidence isn’t comprehension.
The AI doesn’t know:
Why that weird config exists (it saved you from a production disaster in 2019)
Why that setTimeout(0) exists (race condition fix from 3 years ago)
Why you can't just "refactor" the auth module (it's integrated with 4 external systems nobody documented)
This knowledge lives in humans. Specifically, in senior humans who’ve been around long enough to accumulate it.
Fire them, and the knowledge doesn’t transfer to the AI. It just disappears.
The Question Nobody’s Asking
AI-written code is “past 50% now” at many companies, if you believe the claims. It’s probably true.
But the question isn’t how much code AI writes.
The question is: who understands what the code does?
If the answer is “nobody, but the tests pass”, you don’t have an engineering team. You have a prayer and a deployment pipeline.
The Two Types of Companies Emerging
Type 1: Comprehension-First
AI generates, humans architect and constrain
Senior engineers set boundaries before AI touches anything
Code review means “does this fit our system” not “does this look okay”
Slower generation, faster understanding
When production breaks, someone can actually explain why
Type 2: Generation-First
AI generates, humans rubber-stamp
Seniors cut because “AI handles it”
Code review is “tests pass, ship it”
Faster generation, zero understanding
When production breaks, everyone stares at logs hoping the AI can explain itself
Type 2 is cheaper. Type 2 looks better on quarterly reports. Type 2 is what most companies are choosing.
Type 2 is accumulating comprehension debt at machine speed.
The Debt Comes Due
Comprehension debt doesn’t show up on dashboards.
It shows up as:
The feature nobody can modify because nobody knows how it works
The outage that takes 14 hours to diagnose because no one understands the system
The security breach that exploited a “known” vulnerability nobody actually knew about
The migration that was supposed to take 2 weeks and took 8 months
By then, the executives who made the cuts have moved on. The “savings” were already reported. The stock already bumped.
The remaining engineers inherit a system nobody understands, generated by machines, approved by people who aren’t there anymore.
The Market Is Already Broken
I used to maintain a 1:1 ratio of ML engineers to fullstack developers on projects. Not anymore. We couldn’t hire a single qualified ML engineer for six months. We had to restructure the entire company. Now fullstack developers write most of our RAG implementations because we can’t scale the ML team.
Right now I have 5 open positions. The candidates are garbage. The good engineers aren’t getting fired. My people have been with the company 3, 5, 7 years. Nobody job-hops anymore because there’s nowhere to hop to. And what’s available on the market is questionable at best.
This isn’t an AI problem. This is a comprehension problem that’s been building for years. Frameworks abstracted everything. Stack Overflow gave answers without understanding. “It works” became the only success metric.
AI just accelerated it 10x.
Now these same engineers are supposed to review AI-generated code? They don’t understand the code they wrote themselves. How will they catch what the machine gets wrong?
The Uncomfortable Truth
Almost six months ago, I wrote about the quality collapse. How we normalized shipping broken software, how “move fast and break things” became “move fast and never fix things.”
This is worse.
Back then, at least the people writing bad code understood what they were writing. They made tradeoffs. They knew where the bodies were buried. They could fix it if they had to.
Now we’re generating code faster than anyone can understand, reviewed by engineers who don’t know how their own tools work, approved by teams that lost their senior knowledge when the layoffs hit.
The speed at which we’re heading into the abyss is staggering.
We are fucked. Good luck.