Anthropic claimed 100% of Claude Code is AI-written. A source leak exposed a 3,167-line function, regex sentiment analysis, and 250K wasted API calls daily
I’ve been in the software industry for 21 years and worked on several software startups. In my experience, the attitude and the culture that this article speaks about aren’t new.
All CEOs, all CTOs, and most project managers I’ve worked with have had this attitude of disregarding technical debt and shipping lower-quality code. It’s no surprise to me that that culture exists within Anthropic and I’m reasonably certain that it’s no different in any other AI company.
What makes any of this new, and much worse for the industry as a whole, and for customers, is that now that culture is automated.
What used to be the occasional “garbage in, garbage out” is now going to be a frequent “toilet flush in, toilet flush out”.
You're not the first industry veteran to tell me this. Maybe I was lucky throughout my career, working with teams that didn't ship garbage. Maybe I expect from the industry what was never really there.
Engineers don't want to ship garbage. Management does.
To be clear, my experience has been that mid-level and senior engineers generally write quality code and don’t default to shipping garbage.
The entry-level/junior engineers learn with experience but start not knowing the difference between good and bad code.
It’s really just the top-level and some mid-level management people who have the attitude that it’s more important to ship *something* than to ship something of *quality*.
I think there’s a direct correlation between how much code one has actually written professionally and the importance given to quality code.
It’s not until you’ve spent a lot of time figuring out some hard bug that you realise how important and time-saving it is to write good-quality, well-tested code from the start.
It's not just non-technical management anymore. Garry Tan ran YC, has a tech background, and spent March bragging about 10K lines of code per week. A developer checked the output. Slop. LOC is the KPI now, even for people who should know better.
That attitude permeates everything in the tech industry now. We used to be very concerned with shipping features that we knew people wanted and worked, both functionally and in terms of usability. Now we just ship “something” so we can tell the shareholders/investors.
Hii bro I'm a 1st year cs major student and I'm from a very low tier college and i only have one question....is it okay to learn coding...because I feel I'm very very behind like I'm learning if/else statement rn....and everytime I sit and try to learn there's always a new notification from some ceo that ai is gonna replace everyone....idk what should I do...I don't have any roadmap...I'll be grateful if I get some guidance from you
Hi Vinit. I’ll be honest. The employment prospects for junior software engineers aren’t good right now, and I don’t think they will improve.
Having said that, I personally think that a degree in CS is still a good thing (any technical degree is a good thing, in my opinion). You could stay in academia or pursue a career in industries that rely heavily on CS graduates (fintech and AI companies come to mind) but you shouldn’t expect to be able to get a typical software engineering job (i.e., a coding job) anymore.
My suggestion is that you make sure to learn a good deal of mathematics (linear algebra, in particular) if you want to have a better chance to get hired by tech companies to do something that is more than just coding. Coding job opportunities for junior engineers, unfortunately, are almost nonexistent now.
I wish you the best of luck.
But at the end of the day I have to learn to code to know what's going on, right....like for the time being my roadmap goes something like this.....learn coding (C language; currently I'm a beginner) --> data structures and algorithms --> choose one field to specialise in (haven't decided yet)
Yes, learning to code is part of the CS curriculum, as are data structures and algorithms, but you shouldn’t limit yourself to the default curriculum. Earlier I suggested linear algebra because it’s the math from which a lot of AI comes.
By the way, I forgot to mention something before. You said that you’re in a low-tier college. That’s not all that important. With enough motivation, you can learn on your own the topics that aren’t taught in your courses. In fact, it’s easier now than ever because of open university courses. Both MIT and Stanford have excellent free online CS courses. I highly recommend that you take advantage of those.
I’ve got two decades of experience in startups too, and I agree that disregarding tech debt is common but not universal. In my experience it’s more common among the startups that failed. The most successful startups I worked for (one now a $10B public company) would prioritize significant tech-debt concerns that engineering could support with data.
At these more successful startups, Martin Fowler’s “Refactoring” was a must-read for engineers who wanted to move up. In fact, using Fowler’s “Design Stamina Hypothesis” to justify refactoring work can be a useful strategy in healthy engineering orgs. Tech debt is a very real, measurable thing that can negatively impact customers and slow down development.
I think the rise of this “tech debt doesn’t matter” mindset has come with the rise of ZIRP and zombie unicorns. The next credit cycle and AI bubble bust will shift the culture again. Hopefully towards craft.
The ZIRP connection is spot on. Cheap money made it easy to outspend bad engineering instead of fixing it.
I've been working in software engineering in Silicon Valley for 30 years. I've founded companies and I've led large teams at mega-cap tech firms too. The ZIRP era created a rot in the industry that has metastasized like a malignant tumor. The VC firms suffer from "group think" and many of the more prominent general partners seem to have untreatable social media brain poisoning.
That being said, ironically the coding agents may be what saves the industry. In the hands of a talented engineer, these tools basically eliminate the moat that big VC funding rounds used to provide. One or two talented engineers can produce high quality products without ever needing a dollar of VC money. My hope is that this ushers in a new era of software companies that build genuinely useful products rather than pursuing the rent-seeking business models that are so prevalent now.
I hope you're right. AI amplifies what you already have. In the hands of talented engineers, it's a different tool entirely.
Agreed 100%, but this caveat is **critical**: “In the hands of a talented engineer…” The gap in code quality and velocity between my JR and SR devs is wider than ever.
So, the flip side is when AI agents are in the hands of a JR founder who's only following the vibe-coding maxxers on Twitter while their investors tell them it's the future. They don't realize how slow they're moving, and how much slower they're getting with all that tech debt.
Like look at YC’s president, Garry Tan, right now. He’s got a CS degree he almost never used, having focused his career prior to YC on design/product (nothing wrong with that), but he unironically thinks it’s impressive that he rolled his own CMS in 300k LOC of Rails in 6 weeks. None of the yes-men around him have the heart to break it to him that a staff-level Rails dev could’ve cranked out a 10x better version of Garry’s list in a week with 1/10 the Claude Code tokens and LOC.
lol, Garry is my favorite meme right now
Completely agree. If anything AI coding agents have made the value of good software design and architecture go through the roof. Far from lessening it, we need it more than ever.
Hii bro I'm a 1st year cs major student and I'm from a very low tier college and i only have one question....is it okay to learn coding...because I feel I'm very very behind like I'm learning if/else statement rn....and everytime I sit and try to learn there's always a new notification from some ceo that ai is gonna replace everyone....idk what should I do...I don't have any roadmap...I'll be grateful if I get some guidance from you
You need to learn both CS fundamentals and AI. I don’t think it's a waste of time. The only issue is that it will be much harder to find a first job. AI is a good tool for learning, so if you are really passionate about and interested in the engineering profession, everything will be fine.
Sure bro thanks a lot

It would be grateful if you can also give me some tips...I'm just a beginner I don't have any idea about anything

botozavreg@gmail.com write me on email

Before, we had cheap-ass corner-cutting and wilful blindness. Now we have cheap-ass corner-cutting and wilful blindness AT SCALE.

The whole article in 2 sentences ;)

Finding shitty typescript code out in the wild has sadly been the norm for a while now, and it predates AI generated code by about a decade.
Increasingly, before I retired two years ago, I watched the priorities go from "make this good code" to "get this out by an arbitrary deadline" (plus also "put it on the newest tech," which was also not always the best fit).
I designed and wrote a system under those constraints and it was my next-to-last major project. I made it by the deadline with a lot of commentary on potential problem points and failure modes.
My final project? A 50% rewrite/redesign when one of those problem points reared its ugly head.
"There's never time to do it right, but there's always time to do it over."
I am a non-professional hobby-coder. For years, I have been trying to follow all the advice for good and clean code. This caused me a lot of headaches and increased complexity. And now I have to learn here that neither the pros, nor the coding agents care about that.
This article also shattered my illusion that Claude Code is the professional harness. I use opencode and pi, but I had doubts about whether they were inferior to the polished product from Anthropic, coming from people with insane salaries.
But it also reinforces my suspicion that the only thing Americans do really well is story-telling. Almost every American product turns out to be over-hyped crap.
Keep writing clean code. The fact that the pros skip it doesn't make it wrong. It makes it rare. And rare is valuable.
Pros care about clean code - at least, most of the pros I know, including myself. The problem is that pros also face a lot of other competing incentives and constraints, and not just to ship profitable code, though that certainly is one.
Even absent profit, coordinating a bunch of people to produce and maintain well-designed code is tough, for the same reason that any large system with many participants gets complicated and hard to change. Similarly, sometimes low-quality codebases can succeed in spite of that quality, at least for a time.
Like with any complicated, enduring human endeavor, you have to trust and argue that taking the time to produce good code is worth it. Just because others are code quality nihilists doesn't mean we have to be.
"Code quality nihilists" is the term I've been looking for.
Authors and artists are sharing a similar sentiment: vast disappointment that getting good at their craft and responding to client micromanagement took up so much of their effort and time, and now that these clients have switched to sloppy AI, they don't care. Just so much human craft, effort, and goodness flushed down the drain by these repulsive Silicon Valley bros and their LinkedIn fungal networks.
Honestly, a lot of clean code advice is mostly meant for code with multiple contributors that needs to be maintained over years... For a personal project the quick way is often better (not always!)
Discipline is not an option, it's a habit. If you only write clean code when others are watching, you don't write clean code.
I wasn't speaking about someone watching or not. My point is: is the added complexity an investment that will pay off, or a waste of time and effort?
You need to know why you're doing what you are doing!
(and yes, a professional may err on the side of writing clean code when it's not needed, to maintain the habit. But for a hobbyist like Dhaenor, it may be precisely the wrong habit to have, depending on the specifics of his projects)
If the advice you're following is increasing the complexity of the code, it's either bad advice or you're using "clean code" in a novel way.
As a lifelong software engineer, I'd say the debt here is only as bad as the regression test suite. If the regression test suite is bad, then refactoring this will be risky. If they have a good regression test suite, then it's inconsequential. I sat down with my hobby code yesterday and, seeing that one of my modules had crept over 1,000 lines, decided it was time to refactor. A 20-minute session with Codex got it refactored. It's only effort because my regression suite for hobby code isn't built into the pipeline, so I have to actually run it.
If coding effort is almost zero cost, then our understanding of technical debt has to change. But QA is still real, and we can only know whether Anthropic can manage the technical risk by looking at their QA assets and internal bug tracker.
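The regression-suite point above can be made concrete with a toy sketch (the `slugify` function and its cases are hypothetical, not from any real codebase): freeze the pre-refactor behavior as data, and any refactor, human or AI, has to keep those cases green before it is accepted.

```python
# Minimal regression guard: check a frozen set of (input -> expected) cases
# before accepting any refactor. Hypothetical function under refactor.

def slugify(title: str) -> str:
    # The function being refactored; its observable behavior must not change.
    return "-".join(title.lower().split())

# Cases captured from the pre-refactor behavior and frozen as data.
REGRESSION_CASES = {
    "Hello World": "hello-world",
    "Claude  Code": "claude-code",
    "  Trim me  ": "trim-me",
}

def run_regression() -> bool:
    # Collect every case where the current code disagrees with the frozen output.
    failures = [(inp, slugify(inp), want)
                for inp, want in REGRESSION_CASES.items()
                if slugify(inp) != want]
    for inp, got, want in failures:
        print(f"REGRESSION: slugify({inp!r}) = {got!r}, expected {want!r}")
    return not failures

if __name__ == "__main__":
    assert run_regression(), "refactor changed behavior; do not merge"
```

Wiring a script like this into a pre-push hook or CI job is what turns "I have to actually run it" into "it runs itself."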
Unlimited compute. Everything is optimised towards the relevant constraint. So Anthropic would never pull a DeepSeek, because of unlimited computing access. Without hunger there is no acumen for elegance, fundamentals and, in the end, resilience. Let the limits be reached and ingenuity and pride will flow. Context is everything indeed …
DeepSeek is a good example. Constraints force creativity. When you can throw unlimited compute at every problem, there's no reason to write elegant code. The 3,167-line function exists because nobody had a reason to make it smaller.
By the way, my mind wandered towards this: http://www.catb.org/jargon/html/story-of-mel.html
Honestly I’ve seen worse written by humans and shipped. Successfully deployed software is often really badly written in my experience. It’s only products with no customers that have the luxury of good code quality.
One of my questions with this is, can AI maintain its terrible code? Because if it can, that's actually a step forward.
AI is pattern matching trained on our own open source code, most of which is terrible. It produces exactly what it learned. Can you make it write clean code? Yes. But you have to actually look at what it produces and not accept the first output. Most people don't.
And yes, AI can maintain its own terrible code. Until something unpredictable happens and a human needs to step in. Good luck reading that 3,167-line function at 2am during an outage.
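A toy illustration of the review pass described above (the function is hypothetical, not from the leak): the first pattern-matched draft works, and the human pass is what makes it maintainable.

```python
# First draft, typical of unreviewed generated code: branchy, repetitive,
# and every new case grows the function.
def retry_delay_v1(status: int) -> float:
    if status == 429:
        return 30.0
    elif status == 500:
        return 5.0
    elif status == 502:
        return 5.0
    elif status == 503:
        return 10.0
    else:
        return 0.0

# After a review pass: policy becomes data; adding a status code is a
# one-line change instead of another branch.
RETRY_DELAYS = {429: 30.0, 500: 5.0, 502: 5.0, 503: 10.0}

def retry_delay_v2(status: int) -> float:
    return RETRY_DELAYS.get(status, 0.0)

# The two versions agree on every input; a regression check like this is
# what makes accepting the rewrite safe.
assert all(retry_delay_v1(s) == retry_delay_v2(s)
           for s in (200, 429, 500, 502, 503, 504))
```

Nothing here is beyond what current models can produce; the difference is whether anyone asks for the second version instead of shipping the first.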
Yeah, I wonder if Denis has ever worked on something designed and implemented by TCS or Wipro. I have; I saw class files that were 30k lines long. Have you ever had to split a Java class because it was too long for javac to compile? That's what these guys did... This wasn't at a startup either, it was a Fortune 10 company.
Yeah, I saw one method body in a Smalltalk system in a major scientific software company that was 10k lines long. Written by a very smart person who was in a great hurry.
Your comment about your grandfather resonated with me. My dad was an electrical engineer. Worked on crossbar switches.
He adopted computers early and bought an early Mac and one of the first printers released for it. The printer driver was very broken and rarely functioned. Dad turned to me and remarked with irritation and astonishment, "I think they shipped this before it was finished!" And I said, "Dad, Dad, let me tell you about the software business."
My eye was caught by the comments about the dismissive way that a bot was reported to have closed bug reports. I don't know whether that was explicitly programmed or learnt behaviour from historical data, but either way it would make a clear statement about the company's view of the unimportance of users.
People used to write better code because they HAD to. They didn't have the memory or processing power to fit bad code. That habit was ingrained, and passed on.
AI doesn't have that history or that concern. So what if something takes 10x the milliseconds that it should?
The only problem would be if you were running your code on systems that charge for the memory used, the processing calls made... oh, wait!
So many valid points of concern and disappointment.
The only thing I differ on is Anthropic being the be-all and end-all of this industry.
As another person said, this kind of short-sighted, highly funded attitude was in effect long before AI became the centre of focus it has become.
Other LLM providers will see this and position themselves accordingly. Open source alternatives have a way to usurp closed models on this front as well. My guess is Qwen and Pangu will continue to gain dominance and the conversation will be less Anthropic dominant in the months ahead
Agree. Less competition means less pressure to get things right. If Qwen and open source alternatives close the gap, Anthropic will have to compete on quality, not just velocity. Right now they can afford not to.
This is why the claim that "we still need senior engineers to write quality code" won't hold up in the long run. If they can do it faster and cheaper, they will ship it without regard to quality.
Management wants to replace absolutely everyone. It was only the engineers who cared about code quality all along.
Back in 1982/83 I wrote a program that controlled a small model railway. Very simple. A bar code underneath each locomotive and car. I cannot recall exactly how much, but the total code was well under 1 KB. Written in assembler. Not one single superfluous instruction. A few years later I found a bug in a C assembler. To do that I had to single-step through every hex code. There was some "superfluous" code in the name of scalability and simplifying code. But not what I considered excessive, maybe 25%?
The steep dive in hardware costs has prepared the ground for accepting bloated code. That in turn has made reviewing much tougher. And when speed to market is everything, whatever was left of quality assurance fell by the wayside.
It shall be interesting to see what happens when the AI correction comes, and come it will.
I've been a senior software developer for decades, and in my opinion, most of those who make decisions in a company's software department commit the mistake of believing that AI is, for wacky pseudo-psychological reasons, somehow really intelligent, in the sense that it has insight into what it's doing. No, it isn't intelligent, nor has it any insight - it's an associative memory that reproduces stuff it has learned that most closely matches some input. AI doesn't "write code". It takes it from others' code it has digested.
The predecessor of this was taking code from, say, StackOverflow without cleaning it up before putting it into production code. The gurus there know what they're doing, don't they? What could possibly go wrong... Now this fragile code is part of the AI memory, and the additional problem now is that nobody knows anymore where the AI-generated code really came from. But AI is intelligent and knows what it's doing, doesn't it? Well, think again...
I've always warned my young AI-infested colleagues about the security risks inherent in the unguarded reuse of AI-generated code. Thanks to the Claude leak, we now see at full scale the vast shortcomings of the "100% AI code" miracle. Will the world learn from that? Nah, probably not... It will probably take a real big bang to make people rethink.
That's literally the thesis of a draft I'm working on. 420 million GitHub repos in the training data, most of them student projects and 3 AM deadline code. The best code in the world sits behind corporate firewalls and never made it into training. AI reproduces what it learned. And what it learned is mediocre.
I don't disagree. I've also been in the industry for a long time and have tried to do things right.
The question I have is: if the models keep improving, which is not guaranteed, can the new model fix the old model's code? Can it "understand" the codebase and clean it up? If yes, then maybe they did the "right" thing by shipping now, while it works, and waiting for a new model to clean it up later. Claude Code has been, for better or worse, a success.
If the models just don't improve that much and cannot clean all that code, then I'll really be worried.
Even current models can write clean, maintainable code. The model is a tool. It needs to be guided. If you outsource everything to it and stop looking at the output, you get exactly the result you deserve.
On the other hand, this might be an opportunity for work down the line: cleaning up a lot of the garbage being produced at companies going all in on AI.
I don’t think Anthropic is alone in this; all the big companies are just as bad (although they still market themselves as elite). I agree with you about pivoting to security, definitely the more interesting area these days; check out ToxSec’s page, he covers some fascinating security topics relating to agents etc.
I’ve been in the software industry for 21 years and worked on several software startups. In my experience, the attitude and the culture that this article speaks about aren’t new.
All CEOs, all CTOs, and most project managers I’ve worked with have had this attitude of disregarding technical debt and shipping lower-quality code. It’s no surprise to me that that culture exists within Anthropic and I’m reasonably certain that it’s no different in any other AI company.
What makes any of this new, and much worse for the industry as a whole, and for customers, is that now that culture is automated.
What used to be the occasional “garbage in, garbage out” is now going to be a frequent “toilet flush in, toilet flush out”.
You're not the first industry veteran who tells me this. Maybe I was lucky throughout my career, working with teams that didn't ship garbage. Maybe I expect from the industry what was never really there.
Engineers don't want to ship garbage. Management does.
To be clear, my experience has been that mid-level and senior engineers generally write quality code and don’t default to shipping garbage.
The entry-level/junior engineers learn with experience but start not knowing the difference between good and bad code.
It’s really just the top-level and some mid-level management people who have the attitude that it’s more important to ship *something* than to ship something of *quality*.
I think there’s a direct correlation between how much code one has actually written professionally and the importance given to quality code.
It’s not until you’ve spent a lot of time figuring out some hard bug that you realise how important and time-saving it is to write good quality well-tested code from the start.
It's not just non-technical management anymore. Garry Tan ran YC, has a tech background, and spent March bragging about 10K lines of code per week. A developer checked the output. Slop. LOC is the KPI now, even for people who should know better
That attitude permeates everything in the tech industry now. We used to be very concerned with shipping features that we knew people wanted and worked, both functionally and in terms of usability. Now we just ship “something” so we can tell the shareholders/investors.
Hii bro I'm a 1st year cs major student and I'm from a very low tier college and i only have one question....is it okay to learn coding...because I feel I'm very very behind like I'm learning if/else statement rn....and everytime I sit and try to learn there's always a new notification from some ceo that ai is gonna replace everyone....idk what should I do...I don't have any roadmap...I'll be grateful if I get some guidance from you
Hi Vinit. I’ll be honest. The employment prospects for junior software engineers aren’t good right now, and I don’t think they will improve.
Having said that, I personally think that a degree in CS is still a good thing (any technical degree is a good thing, in my opinion). You could stay in academia or pursue a career in industries that rely heavily on CS graduates (fintech and AI companies come to mind) but you shouldn’t expect to be able to get a typical software engineering job (ie, a coding job) anymore.
My suggestion is that you make sure to learn a good deal of mathematics (linear algebra, in particular) if you want to have a better chance to get hired by tech companies to do something that is more than just coding. Coding job opportunities for junior engineers, unfortunately, are almost nonexistent now.
I wish you the best of luck.
But at the end of the I had to learn to code to know what's going on right....like for the time being my roadmap something goes like this.....learn coding (c-language currently i'm a beginner)--> data structure and algorithms -->chose one field I've to do specialisation (haven't decided yet)
Yes, learning to code is part of the CS curriculum, as is data structures and algorithms but you shouldn’t limit yourself to the default curriculum. Earlier I suggested linear algebra because it’s the math from which a lot of AI comes from.
By the way, I forgot to mention something before. You said that you’re in a low-tier college. That’s not all that important. With enough motivation, you can learn on your own topics that are not taught in your courses. In fact, it’s easier now more than ever because of open university courses. Both MIT and Stanford have excellent free online CS courses. I highly recommend that you take advantage of those.
I’ve got 2 decades of experience in startups too and I agree that disregarding tech debt is common but not universal. In my opinion it’s more common among the startups that have failed in my experience. The most successful startups I worked for (one now $10B public) would prioritize significant tech debt concerns that engineering could support with data.
At these more successful startups Martin Flowler’s “Refactoring” textbook was a must read for engineers who wanted to move up. In fact using Flowler’s “Design Stamina Hypothesis” to justify refactoring work can be a useful strategy in healthy engineering orgs. Tech debt is a very real, measurable thing that can negatively impact customers and slow down development.
I think the rise of this “tech debt doesn’t matter” mindset has come with the rise of ZIRP and zombie unicorns. The next credit cycle and AI bubble bust will shift the culture again. Hopefully towards craft.
The ZIRP connection is spot on. Cheap money made it easy to outspend bad engineering instead of fixing it.
I've been working in software engineering in Silicon Valley for 30 years. I've founded companies and I've led large teams at mega-cap tech firms too. The ZIRP era created a rot in the industry that has metastasized like a malignant tumor. The VC firms suffer from "group think" and many of the more prominent general partners seem to have untreatable social media brain poisoning.
That being said, ironically the coding agents may be what saves the industry. In the hands of a talented engineer, these tools basically eliminate the moat that big VC funding rounds used to provide. One or two talented engineers can produce high quality products without ever needing a dollar of VC money. My hope is that this ushers in a new era of software companies that build genuinely useful products rather than pursuing the rent-seeking business models that are so prevalent now.
I hope you're right. AI amplifies what you already have. In the hands of talented engineers, it's a different tool entirely.
Agreed 100%, but this caveat is **critical**: “In the hands of a talented engineer…” The gap in code quality and velocity between my JR and SR devs is wider than ever.
So, the flip side is that when AI agents are in the hands of a JR founder thats only following the vibe coding maxxers on Twitter and their investors are saying it’s the future. They dont realize how slow they’re moving and how much slower they’re getting with all that tech debt.
Like look at YC’s president, Garry Tan, right now. He’s got a CS degree he almost never used instead focusing his career prior to YC on design/product (nothing wrong with that), but he unironically thinks it’s impressive that he rolled his own CMS in 300k LOC of Rails in 6 weeks. None of the yes men around him have the heart to break it to him how a staff level Rails dev could’ve cranked out a 10x better version of Garry’s list in a week with 1/10 the Claude code tokens and LOC.
lol, Garry is my favorite meme right now
It would be grateful if you can also give me some tips...I'm just a beginner I don't have any idea about anything
botozavreg@gmail.com write me on email
Completely agree. If anything AI coding agents have made the value of good software design and architecture go through the roof. Far from lessening it, we need it more than ever.
Hii bro I'm a 1st year cs major student and I'm from a very low tier college and i only have one question....is it okay to learn coding...because I feel I'm very very behind like I'm learning if/else statement rn....and everytime I sit and try to learn there's always a new notification from some ceo that ai is gonna replace everyone....idk what should I do...I don't have any roadmap...I'll be grateful if I get some guidance from you
You need to learn both fundamentals cs and AI. I don’t think that it waste of time. The only issue that it will be much harder to find first job. AI is a good tool for learning, so if you are really passionate and interested in engineering profession everything will be fine
Sure bro thanks a lot
Before, we had cheap-ass corner-cutting and wilful blindness. Now we have cheap-ass corner-cutting and wilful blindness AT SCALE.
The whole article in 2 sentences ;)
Finding shitty typescript code out in the wild has sadly been the norm for a while now, and it predates AI generated code by about a decade.
Increasingly, before I retired two years ago, I watched the priorities go from "make this good code" to "get this out by an arbitrary deadline" (plus also "put it on the newest tech," which was also not always the best fit).
I designed and wrote a system under those constraints and it was my next-to-last major project. I made it by the deadline with a lot of commentary on potential problem points and failure modes.
My final project? A 50% rewrite/redesign when one of those problem points reared it's ugly head.
"There's never time to do it right, but there's always time to do it over."
Tech debt pays for itself, literally.
Companies don’t need perfect. They need good enough.
I am a non-professional hobby-coder. For years, I have been trying to follow all the advice for good and clean code. This caused me a lot of headaches and increased complexity. And now I have to learn here that neither the pros, nor the coding agents care about that.
This article also shattered my illusion that Claude Code is the professional harness. I use opencode and pi. But I had doubts If they are inferior to the polished product from Anthropic, coming from people with insane salaries.
But it also reinforces my suspicion that that the only thing that Americans do really well is story-telling. Almost every American product turns out to be over-hyped crap.
Keep writing clean code. The fact that the pros skip it doesn't make it wrong. It makes it rare. And rare is valuable.
Pros care about clean code - at least, most of the pros I know, including myself. The problem is that pros also face a lot of other competing incentives and constraints, and not just to ship profitable code, though that certainly is one.
Even absent profit, coordinating a bunch of people to produce and maintain well-designed code is tough, for the same reason that any large system with many participants get complicated and hard to change. Similarly, sometimes low-quality codebases can succeed in spite of that quality, at least for a time.
Like with any complicated, enduring human endeavor, you have to trust and argue that taking the time to produce good code is worth it. Just because others are code quality nihilists doesn't mean we have to be.
Code quality nihilists' is the term I've been looking for.
authors and artists are sharing a similar sentiment: vast disappointment that getting good at their craft and responding to client micromanagement needs took up so much of their effort and time, and now that these clients have switched to sloppy AI, they dont care. just so much human craft, effort and goodness flushed down by these repulsive silicon valley bros and their linkedin fungal networks.
Honestly, a lot of clean code advice is mostly meant for code with multiple contributors and that needs to be maintained over years... For a personal project the a quick way is often better (not always!)
Discipline is not an option, it's a habit. If you only write clean code when others are watching, you don't write clean code.
I wasn't speaking about someone being watching or not. My point is: is the added complexity an investment that will pay off, or a waste of time and effort?
You need to know why you're doing what you are doing!
(and yes, a professional may err on the side of writing clean code when it's not needed, to maintain the habit for it. But for an hobbyist like Dhaenor, it may be precisely the wrong habit to have, depending on the specifics of his projects)
If the advice you're following is increasing the complexity of the code, it's either bad advice or you're using "clean code" in a novel way.
As a lifelong software engineer, I'd say the debt for this is only as bad as the regression test suite. If the regression test suite is bad, then refactoring this will be risky. If they have a good regression test suite, then it's inconsequential. I sat down with my hobby code yesterday and, seeing that one of my modules had crept over 1,000 lines, decided it was time to refactor. A 20-minute session with Codex got it refactored. It's only effort because my regression suite for hobby code isn't built into the pipeline, so I have to actually run it.
If coding effort is almost zero cost then our understanding of technical debt has to change. But QA is still real and we only know if Anthropic can manage the technical risk by looking at their QA assets and internal bug tracker.
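A minimal sketch of what that kind of safety net can look like: a characterization ("golden") test that records the module's current behavior before handing it to an AI tool for refactoring. The function name `normalize_record` and its behavior are invented for illustration, not taken from the commenter's project.

```python
# Hypothetical sketch: a characterization test that pins down current
# behavior before an AI-assisted refactor. `normalize_record` stands in
# for a function buried somewhere in the 1,000-line module.

def normalize_record(raw: str) -> dict:
    """Current implementation whose behavior we want to preserve."""
    key, _, value = raw.partition("=")
    return {"key": key.strip().lower(), "value": value.strip()}

# Outputs recorded from the implementation as it exists today. A refactor
# that changes any of these is a regression, not a cleanup.
GOLDEN = {
    "Name = Ada": {"key": "name", "value": "Ada"},
    "ROLE=engineer": {"key": "role", "value": "engineer"},
    "empty=": {"key": "empty", "value": ""},
}

def test_refactor_preserves_behavior() -> None:
    for raw, expected in GOLDEN.items():
        assert normalize_record(raw) == expected

test_refactor_preserves_behavior()  # wired into CI, this runs on every push
```

The point isn't the specific assertions; it's that once this runs automatically in the pipeline, "let the model refactor it" stops being a leap of faith.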
Unlimited compute. Everything is optimised towards the relevant constraint, so Anthropic would never pull a DeepSeek, because they have unlimited computing access. Without hunger there is no appetite for elegance, fundamentals and, in the end, resilience. Let the limits be reached and ingenuity and pride will flow. Context is everything indeed…
DeepSeek is a good example. Constraints force creativity. When you can throw unlimited compute at every problem, there's no reason to write elegant code. The 3,167-line function exists because nobody had a reason to make it smaller.
By the way, my mind wandered towards this: http://www.catb.org/jargon/html/story-of-mel.html
Honestly I’ve seen worse written by humans and shipped. Successfully deployed software is often really badly written in my experience. It’s only products with no customers that have the luxury of good code quality.
One of my questions with this is: can AI maintain its terrible code? Because if it can, that's actually a step forward.
AI is pattern matching trained on our own open source code, most of which is terrible. It produces exactly what it learned. Can you make it write clean code? Yes. But you have to actually look at what it produces and not accept the first output. Most people don't.
And yes, AI can maintain its own terrible code. Until something unpredictable happens and a human needs to step in. Good luck reading that 3,167-line function at 2am during an outage.
Yeah, I wonder if Denis has ever worked on something designed and implemented by TCS or Wipro. I have. I saw class files that were 30k lines long. Have you ever had to split a Java class because it was too long for javac to compile? That's what these guys did... This wasn't at a startup either, it was a Fortune 10 company.
For companies shipping 30k-line Java classes that crash javac, AI is genuinely an upgrade. Can't get worse. RIP.
Yeah, I saw one method body in a Smalltalk system in a major scientific software company that was 10k lines long. Written by a very smart person who was in a great hurry.
Your comment about your grandfather resonated with me. My dad was an electrical engineer. Worked on crossbar switches.
He adopted computers early and bought an early Mac and one of the first printers released for it. The printer driver was very broken and rarely functioned. Dad turned to me and remarked with irritation and astonishment, "I think they shipped this before it was finished!" And I said, "Dad, Dad, let me tell you about the software business."
My eye was caught by the comments about the dismissive way a bot was reported to have closed bug reports. I don't know whether that was explicitly programmed or learnt behaviour from historical data, but either way it makes a clear statement about the company's view of the unimportance of its users.
People used to write better code because they HAD to. Bad, bloated code simply wouldn't fit in the memory or processing power available. That habit was ingrained, and passed on.
AI doesn't have that history or that concern. So what if something takes 10x the milliseconds that it should?
The only problem would be if you were running your code on systems that charge for the memory used, the processing calls made... oh, wait!
Fascinating. And terrible for the wasted kilowatts.
So many valid points of concern and disappointment.
The only thing I differ on is Anthropic being the be-all and end-all of this industry.
As another person said, this kind of short-sighted, highly funded attitude was in effect long before AI became the centre of focus it is today.
Other LLM providers will see this and position themselves accordingly. Open source alternatives have a way to usurp closed models on this front as well. My guess is Qwen and Pangu will continue to gain dominance and the conversation will be less Anthropic dominant in the months ahead
Agree. Less competition means less pressure to get things right. If Qwen and open source alternatives close the gap, Anthropic will have to compete on quality, not just velocity. Right now they can afford not to.
Yeah this attitude towards quality is part of why I left the profession for good. Software Engineering is done, put a fork in it.
This is why the claim that "we still need senior engineers to write quality code" won't hold up in the long run. If they can do it faster and cheaper, they will ship it without regard to quality.
Management wants to replace absolutely everyone. It was only the engineers who cared about code quality all along.
Back in 1982/83 I wrote a program that controlled a small model railway. Very simple. Bar code underneath each locomotive and car. I cannot recall exactly how much, but the total code was way less than 1 kbyte. Written in assembler. Not one single superfluous instruction. A few years later I found a bug in a C assembler. To do that I had to single-step through every hex code. There was some "superfluous" code in the name of scalability and simplifying the code. But not what I considered excessive, maybe 25%?
The dive in hardware costs prepared the ground for accepting bloated code. That in turn made reviewing much tougher. And when speed to market is everything, whatever was left of quality assurance fell by the wayside.
It shall be interesting to see what happens when the AI correction comes, and come it will.
I've been a senior software developer for decades, and in my opinion, most of those who make decisions in a company's software department fall into the trap of believing that AI is, for wacky pseudo-psychological reasons, somehow really intelligent, in the sense that it has insight into what it's doing. No, it isn't intelligent, nor does it have any insight - it's an associative memory that reproduces stuff it has learned that most closely matches some input. AI doesn't "write code". It takes it from others' code it has digested.
The predecessor of this was taking code from, say, StackOverflow without cleaning it up before putting it into production code. The gurus there know what they're doing, don't they? What could possibly go wrong... Now this fragile code is part of the AI memory, and the additional problem is now that nobody knows anymore where the AI generated code really came from. But AI is intelligent and knows what it's doing, doesn't it? Well, think again...
I've always warned my young AI-infested colleagues about the security risks inherent in the unguarded reuse of AI-generated code. Thanks to the Claude leak, we now see at full scale the vast shortcomings of the "100% AI code" miracle. Will the world learn from that? Nah, probably not... It will probably take a real big bang to make people rethink.
That's literally the thesis of part of my draft. 420 million GitHub repos in the training data, most of them student projects and 3 AM deadline code. The best code in the world sits behind corporate firewalls and never made it into training. AI reproduces what it learned. And what it learned is mediocre.
I don't disagree. I've also been in the industry for a long time and have tried to do things right.
The question I have is: if the models keep improving, which is not guaranteed, can the new model fix the old model's code? Can it "understand" the codebase and clean it up? If yes, then maybe they did the "right" thing by shipping now since it works, and waiting for a new model to clean it up later. Claude Code has been, for better or worse, a success.
If the models just don't improve that much and cannot clean all that code, then I'll really be worried.
Even current models can write clean, maintainable code. The model is a tool. It needs to be guided. If you outsource everything to it and stop looking at the output, you get exactly the result you deserve.
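One cheap way to "actually look at the output" is to automate part of the looking. Below is a hedged sketch (my own illustration, not anything any provider actually runs) that uses Python's `ast` module to flag oversized functions in generated code before a human reviews it; the 50-line threshold is an arbitrary assumption.

```python
# Hypothetical sketch: a tiny quality gate for AI-generated code that
# flags functions longer than a threshold, so a 3,167-line function
# never slips through unremarked.
import ast

def oversized_functions(source: str, max_lines: int = 50) -> list[str]:
    """Return names of functions whose spans exceed max_lines."""
    flagged = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                flagged.append(node.name)
    return flagged

# A short function passes the gate...
sample = "def tiny():\n    return 1\n"
print(oversized_functions(sample, max_lines=5))  # → []

# ...while a 61-line function gets flagged for review.
big = "def big():\n" + "\n".join(f"    x{i} = {i}" for i in range(60))
print(oversized_functions(big))  # → ['big']
```

It's a blunt instrument, but run on every generated diff it forces exactly the look at the output that most people skip.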
On the other hand, it might be an opportunity for work down the line: cleaning up the mountain of garbage being produced in companies going all in on AI.
I don’t think Anthropic is alone in this, all the big companies are as bad (although they still market themselves as elite). I agree with you about pivoting to security, definitely the more interesting area these days, check out ToxSec’s page, he covers some fascinating security topics relating to agents etc.
Thanks for the ToxSec rec, will check it out.