AI Won't Save Us From the Talent Crisis We Created
I’ve spent the last 18 months watching teams integrate AI into their workflows. The pattern is always the same: Senior engineers get a 30% productivity boost. Juniors? They ship faster, but create technical debt that takes months to unwind. The AI doesn’t know what it doesn’t know - and neither do they.
Here’s the uncomfortable truth I’ve learned managing engineering teams: We’re betting the future of our industry on a tool that amplifies expertise but can’t create it. And we’re running out of humans who have that expertise.
The Brutal Reality of AI as a Multiplier
After tracking AI adoption across multiple teams, the pattern is undeniable. When senior engineers with 5+ years of experience use Copilot, Cursor, or Claude, they ship 30% faster. They catch hallucinations instantly. They know when AI suggests an O(n²) solution that will melt production servers.
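To make that concrete, here’s a contrived sketch of the kind of suggestion I mean - a duplicate check an assistant will happily produce in quadratic form, next to the linear version a senior reaches for by reflex (the function names are mine, purely for illustration):

```python
# The kind of code an assistant often suggests: O(n^2).
# Fine on a demo dataset, a production incident at a few million records.
def find_duplicates_quadratic(records):
    duplicates = []
    for i, a in enumerate(records):
        for b in records[i + 1:]:            # compares every pair
            if a == b and a not in duplicates:
                duplicates.append(a)         # and this list lookup is O(n) again
    return duplicates

# What a senior writes instead: a single O(n) pass with sets.
def find_duplicates_linear(records):
    seen, duplicates = set(), set()
    for r in records:
        if r in seen:
            duplicates.add(r)
        seen.add(r)
    return duplicates
```

Both versions return the same duplicates on a small input, which is exactly why the difference slips past someone who can’t read the complexity.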
But juniors using the same tools? They ship faster, too, but also create technical debt that takes months to unwind. They can’t distinguish between good and bad suggestions. They accept deprecated APIs, overlook edge cases, and create beautiful abstractions that disregard actual business logic.
Even with perfect prompts, detailed context, and comprehensive documentation, AI can’t guarantee consistency. Run the same prompt tomorrow, get different architecture decisions. That’s not a development tool - that’s Russian roulette with production systems.
The cruel irony: the engineers who benefit most from AI are those with 5+ years of experience - precisely the ones we’re not creating anymore because we won’t hire juniors.
The Actual State of Engineering Hiring
Look at what’s actually happening in our industry:
The hiring collapse:
New graduates make up just 7% of Big Tech hires (down more than 50% from pre-pandemic levels)
74% of developers can’t find jobs despite “talent shortage” claims
“Entry-level” positions require 2-5 years of experience (Discord admits requiring 2+ years, while most Dallas postings demand 3+ years)
300,000+ experienced engineers laid off since 2022 compete for the same roles
The education paradox:
CS graduates face 6.1% unemployment - worse than Art History majors at 3%
New grads submit 150+ applications for roles requiring experience they can’t get
Bootcamps tout 72-96% placement rates (numbers that count teaching assistantships and part-time work)
The H-1B escape route just closed: Trump’s new $100,000 fee per H-1B visa killed the strategy companies relied on for years. One company laid off 27,000 Americans while hiring 25,000+ H-1B workers. Now they must either pay the $100,000 premium or actually invest in domestic talent. Guess which option they’re choosing? Neither. They’re just complaining louder about “talent shortages.”
The AI paradox nobody talks about: 84% of developers use or plan to use AI tools. Companies believe these tools will solve their talent problem. But here’s the disconnect: If AI could actually replace engineers, wouldn’t demand drop across all levels? Instead, companies desperately need senior talent while refusing to create the pipeline that produces it. They’re betting AI can replace the juniors they won’t hire, not realizing AI only works in the hands of the seniors they can’t find.
Why AI Can’t Scale (Enough)
Let’s talk about physical constraints everyone ignores:
Energy reality: A single ChatGPT query uses 0.3-0.43 watt-hours versus roughly 0.04 for a Google search - about 10x more. Data centers already consume 4.4% of US electricity and are heading toward 12% by 2028.
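Here’s the back-of-envelope math on those per-query figures - the daily query volume is my assumption, picked only to make the scale visible:

```python
# Rough energy math from the figures above.
# ASSUMPTION: 1 billion queries/day is a hypothetical volume, for illustration only.
WH_PER_AI_QUERY = 0.4        # near the midpoint of the 0.3-0.43 Wh range above
WH_PER_SEARCH = 0.04         # the Google search figure above
QUERIES_PER_DAY = 1_000_000_000

ai_mwh_per_day = WH_PER_AI_QUERY * QUERIES_PER_DAY / 1_000_000      # Wh -> MWh
search_mwh_per_day = WH_PER_SEARCH * QUERIES_PER_DAY / 1_000_000

print(f"AI: {ai_mwh_per_day:,.0f} MWh/day vs search: {search_mwh_per_day:,.0f} MWh/day")
# AI: 400 MWh/day vs search: 40 MWh/day - a 10x gap that compounds at scale
```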
Big Tech’s response? Build gigawatt-scale data centers. The $500 billion Stargate project (OpenAI, Oracle, and SoftBank), Amazon’s 2.2 GW Indiana campus, Meta’s 1 GW facility, xAI targeting 3 GW by 2026. For context, 1 GW can power roughly 750,000 homes.
The energy source problem? Trump killed wind and solar tax credits, banned new renewable permits, and requires the Interior Secretary’s personal sign-off on any renewable project. Companies are scrambling for nuclear deals - Microsoft struck a 20-year agreement to restart Three Mile Island (a $1.6 billion revival), and Meta signed for 1.1 GW of nuclear. But new reactors won’t come online until the 2030s.
Scaling AI to replace even 10% of engineers would require energy infrastructure that doesn’t exist and can’t be politically built.
Compute bottleneck: We’re seeing 6-month waitlists for H100 GPUs. TSMC can’t magically 10x production. The infrastructure needed for serious AI work doesn’t scale fast enough.
The Taiwan black swan nobody discusses: 92% of the world’s most advanced chips come from TSMC in Taiwan. One geopolitical crisis, one natural disaster, one supply chain disruption, and the entire AI revolution grinds to a halt. We’re betting everything on a single point of failure in one of the world’s most geopolitically tense regions.
Context limitations: Enterprise systems contain millions of lines of code accumulated over decades. Even the best AI models cannot fully grasp the complexity of legacy systems, undocumented business logic, and years of technical debt. They see fragments, not the whole.
Reliability gap: Financial systems need 99.999% uptime. AI delivers maybe 90% accuracy. That 9.999% gap is where companies die.
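To make that gap concrete, here’s the standard downtime arithmetic - treating the accuracy figure as availability, the way the comparison above does:

```python
# Downtime budget per year for a given availability target.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960 minutes

for availability in (0.99999, 0.999, 0.90):
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.5%} uptime -> {downtime_min:,.1f} minutes of failure/year")

# 99.99900% uptime -> 5.3 minutes of failure/year      (five nines)
# 99.90000% uptime -> 526.0 minutes of failure/year    (~8.8 hours)
# 90.00000% uptime -> 52,596.0 minutes of failure/year (~36.5 days)
```

Five nines is about five minutes of failure a year; 90% is over a month. That’s not a gap you close with a better prompt.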
Three Failure Patterns I’ve Seen Firsthand
From the trenches, here’s what happens when companies try to replace expertise with AI:
The Startup Delusion: “We don’t need senior engineers, we’ll just use AI!” Six months later: Drowning in technical debt, unable to scale, a codebase no one understands - and now desperately hiring seniors at 150% of the market rate.
The Enterprise Fantasy: “AI will help our offshore team perform like seniors!” Twelve months later: Rewriting everything, security breaches, customers fleeing to competitors who invested in real expertise.
The Scale-up Trap: “We’ll maintain velocity with fewer engineers plus AI!” Eighteen months later: Can’t ship complex features, everything breaks, market share evaporating.
I’ve watched three companies in my network try these strategies. All three are now in crisis mode, throwing money at senior engineers to fix the mess.
What AI Actually Does vs. Marketing Claims
After 18 months of real AI integration, here’s the reality:
What AI Does Well:
Boilerplate generation (saves seniors 20-30% of their time)
Documentation drafts (still need heavy editing)
Simple refactoring suggestions (require verification)
Test case generation (as starting point only)
Code autocomplete
What AI Can’t Do:
Understand the accumulated business context
Make architectural decisions based on tribal knowledge
Debug complex distributed system failures
Maintain consistency across large codebases
Replace the mentorship juniors need to become seniors
The gap between what AI promises and what it delivers is massive. It’s a powerful tool for those who already understand what they’re doing. For everyone else, it’s an expensive way to create tomorrow’s legacy code.
The Lost Generation We’re Creating
We’re watching an entire generation of potential engineers get locked out. What happens when today’s rejected juniors give up and go to finance or consulting?
Year 3: “Why can’t we find mid-level engineers?”
Year 5: “Senior engineers cost $400K and leave after an average tenure of 16 months”
Year 10: “We have 50 million lines of legacy code nobody understands”
Big Tech is already hemorrhaging senior talent. Experienced engineers see the writing on the wall: an industry that won’t train juniors, can’t import cheap labor anymore, and believes AI will magically fill the gap. They’re cashing out before the house of cards collapses.
The Uncomfortable Truth Nobody Wants to Hear
We’re committing industrial suicide while pretending we’re innovating.
Companies claim they can’t find talent while rejecting hundreds of qualified juniors. They implement AI tools, thinking they’ll compensate for not training new engineers. They fire experienced developers to cut costs, assuming AI will fill the gap.
Here’s what’s really happening: We’re creating a lost generation of engineers while the current generation burns out and leaves. AI isn’t replacing engineers - it’s highlighting how desperately we need them. Every hallucination, every inconsistency, every production failure proves that expertise can’t be automated.
The physical constraints of energy, compute, geopolitics, and reliability mean that AI mathematically cannot scale to replace human engineers. But by the time companies realize this, we’ll have lost 5-10 years of talent development.
The Choice We Face
In my 10+ years managing engineering teams, I’ve never seen a more preventable crisis. We know what works:
Hire juniors and invest in their growth
Use AI as a tool, not a replacement
Build expertise through mentorship, not prompts
Accept that developing engineers takes time and money
Instead, we’re choosing quarterly earnings over long-term survival. We’re optimizing for today while destroying tomorrow.
The companies that survive the next decade won’t be those with the best AI tools. They’ll be the ones who understood a simple truth: AI amplifies human expertise - it doesn’t create it.
Without humans who deeply understand systems, architecture, and trade-offs, AI is just an expensive random code generator. And we’re rapidly running out of humans who have that understanding.
The question isn’t whether AI will save us from the talent crisis. It won’t. It can’t. The math doesn’t work.
The question is whether we’ll admit this before it’s too late to course-correct.
What’s your experience with AI in production? Are you seeing the same expertise paradox in your teams?
P.S. If you’re a junior engineer getting rejected from “entry-level” positions - it’s not you. The system is broken. Keep building, keep learning, and find the few companies that still understand that investing in talent is investing in survival.



Warhammer 40k coming early. We'll all be tech-priests soon 🙏.
Thanks for another insightful article! I wonder how you think senior engineers should prepare for the near- to medium-term future.
I have ~12 years of professional experience, from data engineering to web development. I personally haven’t experienced much of an increase in productivity from AI coding tools. Outside of small scripts and starting points for things you shouldn’t be implementing yourself anyway (e.g., JSON float parsing), most of my experiments with AI coding tools end in frustration.

When I’m in a mature code base that I am familiar with, I already have a couple of solutions in my head. Typing them out is not the problem. On the contrary, the process itself forces me to interact with the code base and shows me which of the existing patterns make this new implementation difficult. I can then choose a different approach or refactor first. As a side effect, I become intimately familiar with that code base, and that familiarity is where the real productivity gains come from. This is something that most managers (that I’ve met) under-appreciate, causing them to under-value retention and team stability. You are at your most useless when you inherit a legacy code base. With AI-written code, it always feels that way.
During the early stages of your engineering career, you are mostly trying to get the thing to work at all. At that stage, AI can help, but I’m worried that it also robs you of the opportunity to think through hard problems yourself. As a senior, however, you should be choosing from several working solutions and considering the impact on the rest of the system and the team. AI is bad at that. I was hoping it could speed up prototyping of ideas, but that doesn’t seem to work very well in larger code bases where a feature touches many different places. It just creates a huge mess. It never simplifies, always complicates, breaks stuff, and then capitulates. (Or, my personal favorite, claiming to fix a bug by placing a comment that says: "// Here we fix the bug that ...")
My personal takeaway is to focus on strengthening my fundamentals, so that future me is in a better position to deal with the inevitable problems that AI is unable to fix, or that AI itself caused. To the extent that AI tools - mostly the chat kind - can help with that, I will use them. However, I don’t feel that I have much of an edge over less experienced developers “driving” the AI coding agents, so I’m trying to avoid it as much as possible. Unfortunately, that is not a stance I can comfortably publicize in my organization. It feels a bit like I’m undercover in a cult.
You probably know the Principal Skinner meme: “Am I so out of touch? No, it’s the children who are wrong!” Well, nobody wants to be that guy, so I intend to check in on the latest tools every few months. Would you say I’m missing the boat by not committing to AI coding? How would you strike a good balance? Also, should I recommend that juniors rely less on AI tools even if that means delivering less? That might hurt their performance reviews in the current climate.
I strongly agree with you, Denis. I believe there is a way to address this problem: treat AI as a tool and assess it on its real merits. Once business leaders do that, it should become clear that there is a better, more human-aligned path.