The exponential decline of quality is not unique to the software industry. It's felt across all industries, and is simply a consequence of the capital driving each industry focusing on short-term profit and self-enrichment rather than benefiting the customers, the company, or the common good.
Until we learn to stop rewarding greedy sociopaths and psychopaths with positions of power, rather than referring them to mental health care, none of this has any hope of ever getting better.
C-level execs have no interest in providing a decent product for customers, or employment for their employees, etc. They just want to boost the imaginary short-term-profit numbers so they can cash out with their golden parachutes before the shit hits the fan.
The problems are rooted in human greed: not the greed of humanity in general, but of a minority with a disproportionate amount of power and means. At this point, especially in the US, they have been entrenched in power for so long that they have shaped the systemic fabric of society in their favor, which is why they have become increasingly bold and brazen about what they are doing.
The only ethical choice at this point is to leave whatever big corporation you're at and go it alone. Make software the right way. However, you will be competing with larger companies that employ unethical practices, and you will be at a severe disadvantage. Because of that disadvantage your pricing will be above market, and combined with the fact that most ordinary people are seeing their buying power evaporate these days, the odds of success are slim to none. But it's the only option we've got left.
You're describing exactly why I run a small agency instead of optimizing someone else's quarterly numbers. The structural incentives in big tech are broken. But I've seen good engineering happen at scale too, usually where leadership actually shipped product before they got titles. Rare, but not impossible.
Warhammer 40k coming early. We'll all be tech-priests soon 🙏.
Thanks for another insightful article! I wonder how you think senior engineers should prepare for the near- to medium-term future.
I have ~12 years of professional experience, ranging from data engineering to web development. I personally haven't experienced much of an increase in productivity from AI coding tools. Outside of small scripts and starting points for things you shouldn't be implementing yourself anyway (e.g. JSON float parsing), most of my experiments with AI coding tools end in frustration. When I'm in a mature code base that I'm familiar with, I already have a couple of solutions in my head. Typing them out is not the problem. On the contrary, the process itself forces me to interact with the code base and shows me which of the existing patterns make the new implementation difficult. I can then choose a different approach or refactor first. As a side effect I become intimately familiar with that code base, and that familiarity is where the real productivity gains come from. This is something that most managers (that I've met) under-appreciate, causing them to under-value retention and team stability. You are at your most useless when you inherit a legacy code base. With AI-written code, it always feels that way.
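(As an aside on the "JSON float parsing" point: a minimal Python sketch of why this belongs in the standard library rather than hand-rolled code. The sample values are illustrative; the JSON number grammar has edge cases like exponents, negative zero, and large integers that a naive parser is likely to fumble.)

```python
import json

# A few number forms that are all valid JSON but easy to get wrong
# when parsing by hand; json.loads handles each per the spec.
samples = ["1e308", "-0.0", "0.1", "2.5e-3", "123456789012345678901234567890"]

for s in samples:
    value = json.loads(s)
    print(f"{s!r} -> {value!r} ({type(value).__name__})")
```

Note that the last sample comes back as an arbitrary-precision int rather than an overflowed float, which is exactly the kind of decision you don't want to rediscover yourself.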
During the early stages of your engineering career, you are mostly trying to get the thing to work at all. In that stage, AI can help, but I'm worried that it also robs you of the opportunity to think through hard problems yourself. However, as a senior you should be choosing from several working solutions and considering the impact on the rest of the system and the team. AI is bad at that. I was hoping that it could speed up prototyping of ideas, but that doesn't seem to work very well in larger code bases where a feature touches many different places. It just creates a huge mess. It never simplifies, always complicates, breaks stuff, and then capitulates. (Or, my personal favorite, claiming to fix a bug by placing a comment that says: "// Here we fix the bug that ...")
My personal take-away is to focus on strengthening my fundamentals, so that future me is in a better position to deal with the inevitable problems that the AI is unable to fix, or that the AI itself caused. To the extent that AI tools – mostly the chat kind – can help with that, I will use them. However, I don't feel that I have much of an edge over less experienced developers "driving" the AI coding agents, so I'm trying to avoid it as much as possible. Unfortunately, that is not a stance that I can comfortably publicize in my organization. It feels a bit like I'm undercover in a cult.
You probably know the Principal Skinner meme: "Am I so out of touch? No, it's the children who are wrong!" Well, nobody wants to be that guy, so I intend to check on the latest tools every few months. Would you say that I'm missing the boat by not committing to AI coding? How would you strike a good balance? Also, should I recommend that juniors rely less on AI tools even if that means delivering less? That might hurt their performance reviews in the current climate.
This is one of the sharpest takes I’ve read on this.
You’re absolutely right: familiarity is the real productivity gain, and AI strips it away.
What I’m seeing now is that juniors are skipping the “painful familiarity” phase entirely.
They ship faster, but they don’t build intuition, the thing that makes seniors valuable in the first place.
In the short term, that looks efficient.
In the long term, it’s how we lose engineering as a discipline and turn it into prompt assembly.
I strongly agree with you, Denis. I believe there is a way to address this problem. It involves treating AI as a tool and assessing it on its real merits. Upon doing so, it should become clear to business leaders that there is a better path that is more human aligned.
That’s the sane path, but it requires leaders to value time horizons longer than a quarter.
AI isn’t misaligned because of the tech. It’s misaligned because incentives are.
I actually think there is a gap in innovative solutions. In short, programming languages lack the semantic or meaning basis to create the human alignment I refer to. Bridging that gap has inherent attributes that enable human alignment. That’s not a leadership problem as much as it is technology and philosophy challenges. It’s about first principles thinking regarding what both humans and machines operate from. The former is driven by intent and meaning. The latter by meaning translated into instructions.
That’s a great point, intent vs. instruction is exactly where alignment fails.
Machines don’t understand meaning; they only understand hierarchy. Humans design hierarchies based on incentives, not intent.
So when language becomes the interface, we don’t get alignment — we get amplification of whatever drives the prompt.
And right now, that’s profit, not purpose.
Correct. It absolutely matters what the goals are from the outset. In order to create a solution that is transcendent, it cannot be constrained by design decisions that won’t work. It begins with a question. If 99.9% of humans don’t use programming languages and machines don’t either, what would work for both of them? It needs to be a precision approach. Not guessing. There must be transparency and control. In other words, not a language model.
BTW, I think an intent-driven approach has bigger economic potential than code-driven approaches because it unlocks ideas. A new marketplace for ideas without the wasted energy, complexity, and other problems that come from building on top of substrate that behaves like quicksand.
As far as language being the interface goes, it can work by being based on semantics and dialog. It also has to be comprehensive and flexible. These are big topics to consider.
As a software engineer with over 35 years in the industry who uses AI agents heavily, I couldn't agree more.
One of my favorite quotes on the issue is:
“It’s like getting the mushroom in Super Mario Kart — it makes you go faster, but it doesn’t make you a better driver.”
Joseph Carson (Black Hat USA 2025, 1Password Panel)
That’s a perfect analogy.
AI amplifies direction, not competence.
And when the direction is wrong, acceleration only gets you to the crash faster.
"Supervising AI feels like managing 50 literal junior engineers at once — fast, obedient, and prone to hallucinations. You can’t out-code them. You must out-specify them." My quote from https://techtrenches.substack.com/p/supervising-an-ai-engineer-lessons
I'm almost afraid to say that this sort of de-labouring feels intentional. Between 2010 and the time I resigned in 2019 I'd say more than 85% of the headcount at my company was gone. COVID bought some time, but the local office just closed. Now the company is just a cloud licence mill that outsources deployment and development to the lowest bidder.
Feels that way because it is.
It’s cheaper to rent talent than to build it, until there’s no one left who remembers how things actually work.
As a senior engineer, I am also having trouble finding decent (and ethical, as in no AI nonsense, no defense industry, etc.) software jobs these days. From my perspective, it seems they almost prefer junior roles to senior ones, because junior people are less likely to call out management on their short-term-profit, destructive ways. I think in the end, they just want to get rid of all of us. After all, when you don't even need customers (as we've seen with some social media companies inflating their user numbers with AI agents), and you don't care about the company in any way beyond how much you'll cash in when you're ready to sell your stash of stock or options, you certainly don't want employees (a cost center). Trying to play this down as only affecting junior roles is probably a mistake.
You're living the article. Seniors who push back are expensive and inconvenient. Juniors who don't know better are cheap and compliant. Until nobody's left who understands the system.
There's a tragedy-of-the-commons situation here, though. Even if everyone at a given company recognizes that only juniors can grow up to be seniors, they can't stop those seniors from going and working for someone else. So it's in their interest that lots of people should be trained, but not that they should do it themselves.