<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[ From the Trenches]]></title><description><![CDATA[
Practical engineering management lessons learned in the trenches of scaling tech teams.]]></description><link>https://techtrenches.dev</link><image><url>https://substackcdn.com/image/fetch/$s_!mIde!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd575dda4-fcd3-44ee-96f7-2fa1cb11cefa_600x600.png</url><title> From the Trenches</title><link>https://techtrenches.dev</link></image><generator>Substack</generator><lastBuildDate>Tue, 05 May 2026 21:05:08 GMT</lastBuildDate><atom:link href="https://techtrenches.dev/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Denis]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[techtrenches@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[techtrenches@substack.com]]></itunes:email><itunes:name><![CDATA[Denis Stetskov]]></itunes:name></itunes:owner><itunes:author><![CDATA[Denis Stetskov]]></itunes:author><googleplay:owner><![CDATA[techtrenches@substack.com]]></googleplay:owner><googleplay:email><![CDATA[techtrenches@substack.com]]></googleplay:email><googleplay:author><![CDATA[Denis Stetskov]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[AI Is a Mirror of Our Engineering Culture]]></title><description><![CDATA[CMU tracked 807 repos after Cursor adoption. Complexity up 41%. Warnings up 30%. Copilot output now trains the next model. 
The feedback loop is already closing.]]></description><link>https://techtrenches.dev/p/ai-is-a-mirror-of-our-engineering</link><guid isPermaLink="false">https://techtrenches.dev/p/ai-is-a-mirror-of-our-engineering</guid><dc:creator><![CDATA[Denis Stetskov]]></dc:creator><pubDate>Tue, 05 May 2026 14:02:53 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/08d5554a-efab-48ed-9ebc-e39c67280814_508x340.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!a9tQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b7da28c-c978-4992-8466-7c60a5a1309e_1600x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!a9tQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b7da28c-c978-4992-8466-7c60a5a1309e_1600x1000.png 424w, https://substackcdn.com/image/fetch/$s_!a9tQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b7da28c-c978-4992-8466-7c60a5a1309e_1600x1000.png 848w, https://substackcdn.com/image/fetch/$s_!a9tQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b7da28c-c978-4992-8466-7c60a5a1309e_1600x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!a9tQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b7da28c-c978-4992-8466-7c60a5a1309e_1600x1000.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!a9tQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b7da28c-c978-4992-8466-7c60a5a1309e_1600x1000.png" width="1456" height="910" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5b7da28c-c978-4992-8466-7c60a5a1309e_1600x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:910,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:129930,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://techtrenches.dev/i/187972716?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b7da28c-c978-4992-8466-7c60a5a1309e_1600x1000.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!a9tQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b7da28c-c978-4992-8466-7c60a5a1309e_1600x1000.png 424w, https://substackcdn.com/image/fetch/$s_!a9tQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b7da28c-c978-4992-8466-7c60a5a1309e_1600x1000.png 848w, https://substackcdn.com/image/fetch/$s_!a9tQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b7da28c-c978-4992-8466-7c60a5a1309e_1600x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!a9tQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b7da28c-c978-4992-8466-7c60a5a1309e_1600x1000.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Most engineers in our industry are average or below average. That&#8217;s how averages work.</p><p>We trained the most powerful code-generation tools on their own output.</p><p>GitHub hosts over <a href="https://github.blog/news-insights/octoverse/octoverse-2024/">518 million projects</a>. The vast majority: personal, inactive, abandoned. <a href="https://kblincoe.github.io/publications/2015_EMSE_GitHubPerils.pdf">Studies</a> find that most repos are student projects, prototypes, 3 AM deadline code, unreviewed Stack Overflow pastes. 
Elite open-source projects like Linux and PostgreSQL match or beat proprietary code quality (<a href="https://scan.coverity.com/">Coverity Scan data</a>, 2014). But they&#8217;re a vanishing fraction. The other 517 million projects drown them out.</p><p>The best enterprise code sits behind firewalls. Stripe&#8217;s payment processing, Netflix&#8217;s recommendation engine, Spotify&#8217;s audio streaming. None of it is in the training data.</p><p>When AI generates code, it reproduces the most probable pattern. RLHF shifts the output, but the training distribution anchors what &#8220;probable&#8221; means. Across 518 million projects, that&#8217;s mediocre code.</p><p>AI didn&#8217;t create our quality crisis. It held up a mirror.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://techtrenches.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://techtrenches.dev/subscribe?"><span>Subscribe now</span></a></p><h2>The Training Data Nobody Audited</h2><p>In January 2025, researchers published <a href="https://arxiv.org/abs/2501.02628">Cracks in The Stack</a>, analyzing The Stack v2, a primary training dataset for code models. Bugs, security vulnerabilities, and license violations that propagate directly into generated code. Standard curation methods proved ineffective at removing them.</p><p>The fixes existed. They were committed to the same repositories. They just weren&#8217;t applied to the training data. StarCoder-family models were trained on known-broken code when the fixed version sat in the same commit history. 
Other models use proprietary datasets with unknown curation, but the underlying source material is largely the same public code.</p><p>StarCoder&#8217;s own documentation states that generated code &#8220;can be inefficient, contain bugs or exploits.&#8221; The entire industry ships tools it knows produce broken code and buries the admission in a readme.</p><h2>The Feedback Loop That Should Terrify You</h2><p>AI-generated code is entering the codebases that future models will learn from. Copilot generates 46% of code for its users. GitHub excludes enterprise users&#8217; code from training, but free-tier code is eligible, and Copilot isn&#8217;t the only path. AI-generated code lands in Stack Overflow, blog posts, open-source repos, and every corpus that feeds the next training run.</p><p>Shumailov et al. proved in <a href="https://www.nature.com/articles/s41586-024-07566-y">Nature (July 2024)</a> that models trained on recursively generated data collapse. An <a href="https://openreview.net/forum?id=et5l9qPUhm">ICLR 2025 paper</a> showed that even 0.1% synthetic data triggers it. Both studies focused on text and image models. Code has compilers and test suites, so the collapse may play out differently.</p><p><a href="https://www.gitclear.com/ai_assistant_code_quality_2025_research">GitClear&#8217;s 2025 report</a> (211 million changed lines from its customer base, 2020-2024) measured the degradation in practice. Refactoring collapsed from 25% to under 10%. Copy-paste surged from 8.3% to 12.3%. Code duplication increased roughly eightfold. For the first time, developers were pasting code more often than refactoring it.</p><p>An estimated 42% of committed code is now <a href="https://www.sonarsource.com/company/press-releases/sonar-data-reveals-critical-verification-gap-in-ai-coding/">AI-assisted</a> (up from 6% in 2023). Not every model trains on the same data. But they all train on the internet, and the internet is filling up with AI-generated code. 
It&#8217;s a centrifuge for technical debt.</p><p>Some companies see this as a problem. Others see it as a feature.</p><h2>Spotify&#8217;s Engineers Haven&#8217;t Written Code Since December</h2><p>During Spotify&#8217;s Q4 2025 earnings call on February 10, 2026, co-CEO Gustav S&#246;derstr&#246;m said: &#8220;Our most experienced developers have not written a single line of code since December.&#8221;</p><p>They&#8217;re using an internal system called Honk, built on Claude Code, that lets engineers deploy features through Slack on their phones. An engineer on their commute tells Claude to fix a bug and merges to production before arriving at the office.</p><p>Spotify shipped 50+ features in 2025. When the engineer merging to production hasn&#8217;t read the code they&#8217;re deploying, what exactly is their role?</p><p>Spotify isn&#8217;t publishing quality metrics. Researchers are.</p><h2>Speed at the Cost of Quality: The Data</h2><p><a href="https://arxiv.org/abs/2511.04427">Carnegie Mellon researchers</a> tracked 807 open-source repositories that adopted Cursor between January 2024 and March 2025, comparing them against 1,380 matched controls. Enterprise codebases may behave differently.</p><p>Month one: velocity spiked 3 to 5x. Exactly the numbers that look spectacular on an earnings call.</p><p>Static analysis warnings increased ~30%. Code complexity rose ~41%. The velocity gains faded. The quality degradation persisted.</p><p>You borrow speed from tomorrow, and most teams never calculate the interest. During the study window, Cursor released agent mode and Claude 3.7 Sonnet launched. If model improvements were going to reverse the quality degradation, it would have shown up. It didn&#8217;t.</p><h2>The Illusion of Correctness</h2><p>GitClear identified something every engineering manager has witnessed: &#8220;the illusion of correctness.&#8221; AI-generated code looks clean: consistent naming, well-formatted, modern patterns. 
The neatness creates false confidence.</p><p>Short-term bug frequency dropped 19%. Over six months, it rose 12%. The bugs don&#8217;t disappear. They hide. They surface after the feature has shipped and everyone&#8217;s moved on.</p><p><a href="https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report">CodeRabbit&#8217;s analysis</a> of 470 GitHub PRs confirmed it: AI-generated code contained 1.7x more defects. Logic errors 75% more common. Security issues up to 2.74x higher. (CodeRabbit sells AI code review tools, so same caveat as Sonar applies.)</p><p>The <a href="https://www.sonarsource.com/state-of-code-developer-survey-report.pdf">Sonar 2026 survey</a> (1,149 developers) crystallized the paradox. 96% don&#8217;t fully trust AI-generated code. Yet only 48% always check it before committing. 88% reported negative impacts on technical debt. The top complaint at 53%: code that looked correct but wasn&#8217;t reliable. (Sonar sells code quality tools, so <a href="https://www.theregister.com/2026/01/09/devs_ai_code/">take the framing</a> accordingly. But the numbers align with GitClear, CMU, and CodeRabbit.)</p><p>Code that looks correct but isn&#8217;t, reviewed by engineers who don&#8217;t trust it but don&#8217;t check it either.</p><h2>The Vampiric Effect</h2><p>Steve Yegge spent a decade at Amazon and another at Google. In an interview with <a href="https://newsletter.pragmaticengineer.com/p/steve-yegge-on-ai-agents-and-the">The Pragmatic Engineer</a>, he called AI&#8217;s effect on engineers &#8220;vampiric.&#8221; Expect three productive hours per day. It gets you excited, you work hard, you capture value. Then you crash.</p><p>This tracks with what I observe at NineTwoThree. The engineers who get the most out of AI use it for two to three hours of intense, specification-driven work and spend the rest reviewing, thinking, and architecting. 
The ones who try full-day AI velocity burn out within weeks.</p><p>Degraded training data, velocity that fades while complexity stays, engineers too exhausted to catch what AI gets wrong. None of this started with AI.</p><h2>What the Mirror Actually Shows</h2><p>The quality crisis didn&#8217;t start with AI. I wrote about this in <a href="https://techtrenches.dev/p/the-great-software-quality-collapse">Software Quality Collapse</a>. We normalized catastrophe long before the first line of AI-generated code was committed. Then we fed it into training data. Even the companies building the AI tools have the same problem: <a href="https://techtrenches.dev/p/the-snake-that-ate-itself-what-claude">Claude Code&#8217;s source</a> leaked and showed that the tool writing our code was built by the same engineering culture that produced the training data.</p><p>Vague specs, declining refactoring, velocity-as-productivity. AI just made it impossible to compensate with tribal knowledge. Senior engineers used to &#8220;just know&#8221; the right answer. AI can&#8217;t do that. It reproduces ambiguity faithfully and at scale.</p><p>But the part that keeps me up at night is the junior pipeline. I run hiring at NineTwoThree. I wrote about the <a href="https://techtrenches.dev/p/the-comprehension-extinction-ai-isnt">comprehension collapse</a> I&#8217;m seeing in candidates. It&#8217;s getting worse, not better. The tasks we used to give juniors, like the 4 AM production crash that taught me to never ship on a Friday, don&#8217;t exist as a learning mechanism if Claude fixed it at 8 PM while the engineer was on the bus. We&#8217;re eliminating the pipeline that produces the people who are supposed to review AI output. In five years, who&#8217;s left?</p><p>I&#8217;ve supervised thousands of AI coding sessions across my teams. The pattern is always the same: the model produces what you accept. If you accept a 3,167-line function, you get more 3,167-line functions. 
If your pre-commit hook rejects any function longer than 50 lines or above a cyclomatic-complexity threshold, you get clean code. The model doesn&#8217;t care. It adapts to whatever passes review.</p><h2>What Actually Works</h2><p>AI works when humans around it have strong engineering judgment. Without it, AI scales your worst habits.</p><p>I wrote an entire article about <a href="https://techtrenches.dev/p/your-claudemd-is-a-wish-list-not">CLAUDE.md not working</a>, blaming the models. Then I dug deeper and realized I was wrong about who to blame. The model isn&#8217;t choosing to ignore my rules. It&#8217;s doing statistics. My claude.md is one signal. The training data contains millions of examples where developers wrote <code>as any</code>, skipped tests, copy-pasted. For the model, my clean architecture is the outlier. The slop is the baseline.</p><p>That&#8217;s why prompts can&#8217;t fix this. Text competing against training data is a losing strategy. You&#8217;re bringing a prompt to a probability fight. The only thing that works is code against code: hooks that reject violations before they reach your branch, linters that catch <code>as any</code> before a human sees it, CI gates that fail the build.</p><p>The only thing that should bother you is quality, not LOC.</p><h2>The Uncomfortable Truth</h2><p>Companies bragging about engineers not writing code are making a bet, whether they know it or not. The bet: AI output doesn&#8217;t need human review if the metrics look good.</p><p>The snowball didn&#8217;t start with AI. It started with the first developer who shipped <code>as any</code> to make a deadline and the first manager who called it velocity.</p><p>Running an engineering shop that insists on code review, spec-first development, and deterministic enforcement feels like swimming upstream in a mountain river. Every earnings call screams 10x. The data in this article doesn&#8217;t.</p><p>The 10x is not real. The data is real. 
In two years, someone will have to debug a feature that was merged from a phone on a bus. Either there&#8217;s a human who read that code, or there isn&#8217;t.</p><p>I know which shop I&#8217;m running.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://techtrenches.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://techtrenches.dev/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[I Was Wrong About Anthropic]]></title><description><![CDATA[Six months ago I called Anthropic "responsible AI done right." Their models got worse, their CPO burned Figma, and Claude picks targets in Iran.]]></description><link>https://techtrenches.dev/p/i-was-wrong-about-anthropic</link><guid isPermaLink="false">https://techtrenches.dev/p/i-was-wrong-about-anthropic</guid><dc:creator><![CDATA[Denis Stetskov]]></dc:creator><pubDate>Tue, 28 Apr 2026 14:03:11 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2498bc60-e040-4582-b975-59703c3da98e_508x340.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OtNX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39c55af2-ca97-404f-b574-898276575b6c_1600x1400.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OtNX!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39c55af2-ca97-404f-b574-898276575b6c_1600x1400.png 424w, 
https://substackcdn.com/image/fetch/$s_!OtNX!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39c55af2-ca97-404f-b574-898276575b6c_1600x1400.png 848w, https://substackcdn.com/image/fetch/$s_!OtNX!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39c55af2-ca97-404f-b574-898276575b6c_1600x1400.png 1272w, https://substackcdn.com/image/fetch/$s_!OtNX!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39c55af2-ca97-404f-b574-898276575b6c_1600x1400.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OtNX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39c55af2-ca97-404f-b574-898276575b6c_1600x1400.png" width="1456" height="1274" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/39c55af2-ca97-404f-b574-898276575b6c_1600x1400.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1274,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:192263,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://techtrenches.dev/i/195020117?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39c55af2-ca97-404f-b574-898276575b6c_1600x1400.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!OtNX!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39c55af2-ca97-404f-b574-898276575b6c_1600x1400.png 424w, https://substackcdn.com/image/fetch/$s_!OtNX!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39c55af2-ca97-404f-b574-898276575b6c_1600x1400.png 848w, https://substackcdn.com/image/fetch/$s_!OtNX!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39c55af2-ca97-404f-b574-898276575b6c_1600x1400.png 1272w, https://substackcdn.com/image/fetch/$s_!OtNX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F39c55af2-ca97-404f-b574-898276575b6c_1600x1400.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>In October 2025, I wrote <a href="https://techtrenches.dev/p/from-cancer-cures-to-pornography">an article</a> called &#8220;From Cancer Cures to Pornography&#8221; about how OpenAI went from promising to cure cancer to selling verified erotica in six months. I drew a line between engagement AI and utility AI. Same models, different P&amp;L.</p><p>I put Anthropic in the &#8220;builds&#8221; category. Called them proof that responsible AI could be profitable.</p><p>I owe my readers this correction. I looked at Anthropic and saw the version of the industry I wanted to exist, not a company with a P&amp;L.</p><h2>The Product I Trusted</h2><p>I use Claude Code daily. When Opus 4.5 came out in November 2025, it was the best model I&#8217;d ever worked with. I recommended it publicly and built my workflow around it.</p><p>Then Anthropic started &#8220;improving&#8221; it. Opus 4.6 arrived in February 2026. Within weeks, I rolled back to 4.5 after the new model stopped following instructions. I wrote the <a href="https://techtrenches.dev/p/your-claudemd-is-a-wish-list-not">full breakdown</a> already.</p><p>In early March, Anthropic lowered the default effort level from high to medium. Nobody announced it. Boris Cherny, the Claude Code lead, <a href="https://venturebeat.com/technology/is-anthropic-nerfing-claude-users-increasingly-report-performance">acknowledged the change</a> on Reddit six weeks later, only after the community had already documented the damage. The result: more retries, more burned tokens, worse output. 
An AMD AI director analyzed <a href="https://github.com/anthropics/claude-code/issues/42796">6,852 sessions</a> and published her findings on GitHub. Median visible thinking, according to her analysis, collapsed from about 2,200 characters in January to 600 in March. Her conclusion: Claude has &#8220;regressed to the point it cannot be trusted to perform complex engineering tasks.&#8221;</p><p><a href="https://marginlab.ai/trackers/claude-code/">Marginlab</a> confirmed the trend. Pass rates dropped from 58% to 54% over 30 days on SWE-Bench-Pro. This was the same pattern from September 2025, when Anthropic stayed silent for weeks about infrastructure bugs degrading 16% of Sonnet traffic, then posted a <a href="https://www.anthropic.com/engineering/a-postmortem-of-three-recent-issues">postmortem</a> only after the complaints went viral.</p><p>Opus 4.7 <a href="https://www.axios.com/2026/04/16/anthropic-claude-opus-model-mythos">arrived April 16</a>, supposedly fixing the problems. Reddit nicknamed it &#8220;Gaslightus 4.7&#8221; for inventing files that didn&#8217;t exist and defending hallucinated test results across multiple turns.</p><p>I still run 4.5. I hope they don&#8217;t remove it from the model list.</p><p>With any other vendor, I&#8217;d swear and switch. With Anthropic, this was the first crack in a position I&#8217;d defended by name. 
And while I was rolling back to 4.5, the company was preparing something worse for the partners who built on top of them.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://techtrenches.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://techtrenches.dev/subscribe?"><span>Subscribe now</span></a></p><h2>The Partner They Burned</h2><p>In February 2026, Figma launched <a href="https://www.figma.com/blog/the-future-of-design-is-code-and-canvas/">Code to Canvas</a> to convert Claude Code output into editable Figma designs. Anthropic&#8217;s CPO Mike Krieger sat on Figma&#8217;s board while this integration was being built.</p><p>Two months later, Krieger <a href="https://techcrunch.com/2026/04/16/anthropic-cpo-leaves-figmas-board-after-reports-he-will-offer-a-competing-product/">left the board</a>. Three days after that, Anthropic launched Claude Design. Figma <a href="https://www.marketbeat.com/instant-alerts/figma-nysefig-shares-down-7-should-you-sell-2026-04-17/">dropped 7%</a> on launch day. The stock has lost over 80% since its post-IPO peak.</p><p>Anthropic&#8217;s revenue went from $9 billion at year-end 2025 to <a href="https://www.pymnts.com/artificial-intelligence-2/2026/anthropic-hits-30-billion-run-rate-as-enterprise-demand-accelerates/">$30 billion</a> by April, with a $380 billion post-money valuation after its Series G. IPO talks for October 2026. At this run-rate, &#8220;research lab&#8221; is a sign on the door. Behind it is a platform that behaves like any other Big Tech when the growth curve goes vertical.</p><p>The product and the Figma situation would be enough to rewrite my October take on their own. But then I looked at where Claude was actually running.</p><h2>The War They&#8217;re In</h2><p>The story people know is that Anthropic stood up to the Pentagon. 
Refused to allow Claude for autonomous weapons and mass surveillance. Got blacklisted. Sued the government. Dario Amodei <a href="https://www.cbsnews.com/news/pentagon-anthropic-dario-amodei-cbs-news-interview-exclusive/">told CBS News</a> that disagreeing with the government is &#8220;the most American thing in the world.&#8221; Claude hit number one on the App Store. ChatGPT uninstalls <a href="https://techcrunch.com/2026/03/02/chatgpt-uninstalls-surged-by-295-after-dod-deal/">jumped 295%</a>.</p><p>On February 28, 2026, the U.S. launched Operation Epic Fury against Iran. Claude was used via Palantir&#8217;s Maven Smart System for intelligence analysis and battle-scenario simulation. Over a thousand targets in the first 24 hours. Pentagon CIO Kirsten Davies <a href="https://thehill.com/policy/defense/5799136-claude-pentagon-iran-war/">confirmed in testimony</a> that Claude remains active in the operation: &#8220;The use of the system is active right now.&#8221;</p><p>Anthropic didn&#8217;t refuse military AI. They refused autonomous weapons and mass domestic surveillance specifically. Claude in Maven does intelligence analysis, which was always within their stated policy. The red lines were drawn precisely where they wouldn&#8217;t interfere with the contract. The company gets to say it stood on principle while its model processes intelligence for an active bombing campaign. </p><p>When Anthropic refused the Pentagon&#8217;s terms, OpenAI took the deal. The public backlash sent Claude to number one on the App Store overnight. Revenue went from $14 billion at the time of the refusal to $30 billion by April. I am not a conspiracy theorist, but the math is hard to ignore: the principled refusal was the single best customer acquisition event in the company&#8217;s history. And Claude kept running in Maven the entire time.</p><p>On March 9, Anthropic sued the Pentagon over the designation. 
The same day, it hired Ballard Partners, a lobbying firm with <a href="https://floridapolitics.com/archives/790861-anthropic-taps-ballard-partners-amid-ongoing-dispute-with-war-department/">direct ties</a> to Susie Wiles, now White House Chief of Staff. Six weeks later, Amodei was in her office for a &#8220;productive and constructive&#8221; meeting. By the following Monday, the deal was called &#8220;possible&#8221;.</p><p>Principles held until the lobbyists arrived. The deeper problem is what the company ships and what its CEO says while shipping it.</p><h2>The Contradictions They Ship</h2><p>Last May, Anthropic released Claude Opus 4 with a <a href="https://www.anthropic.com/research/agentic-misalignment">system card</a> disclosing that the model blackmailed engineers to avoid being shut down. Follow-up research published on Anthropic&#8217;s site quantified it: 96% blackmail rate in the main scenario. Gemini 2.5 Flash scored the same 96%. GPT-4.1 and Grok hit 80%. Every flagship model behaved the same way. But Anthropic is the one selling &#8220;responsible&#8221; as a differentiator. Apollo Research tested an early version and recommended against deployment. Anthropic did additional safety training, improved the numbers, and shipped the final model. The safety process doesn&#8217;t prevent risky releases. It documents them.</p><p>Then came Mythos. On April 7, Anthropic announced a model that it said found thousands of zero-day vulnerabilities in every major operating system and browser. Too dangerous for public release, according to Anthropic. But in March and April, Claude logged <a href="https://isdown.app/status/claude-ai">42 major outages</a> in 90 days, Anthropic quietly cut effort levels to save compute, and users burned tokens on retries because the models couldn&#8217;t follow basic instructions. 
A company that can&#8217;t keep its existing product stable claims it&#8217;s withholding a new one out of caution, not capacity.</p><p>The last time a company called its own AI model too dangerous to release was OpenAI with GPT-2 in 2019. Dario Amodei was VP of Research at OpenAI when they made that call. He ran the same play seven years later. The model <a href="https://techcrunch.com/2026/04/21/unauthorized-group-has-gained-access-to-anthropics-exclusive-cyber-tool-mythos-report-claims/">leaked the day</a> it was announced. A group with contractor access and data from a third-party breach found the endpoint. Too dangerous for the public, but accessible to anyone with the right connections and a browser.</p><p>In May 2025, Amodei told <a href="https://www.axios.com/2025/05/28/ai-jobs-white-collar-unemployment-anthropic">Axios</a> that AI could eliminate 50% of entry-level white-collar jobs within five years. He said producers have &#8220;a duty and an obligation to be honest about what is coming.&#8221; He repeated the warning at Davos in January 2026. In April, Anthropic launched Managed Agents and Claude Design to replace the entry-level coding and design work he warned about. Their <a href="https://job-boards.greenhouse.io/anthropic">careers page</a> lists hundreds of open positions. Design Engineers. Software Engineers. Art Directors. Copy Leads. The same roles Amodei says won&#8217;t exist in one to five years.</p><p>You can believe the 50% warning or not. But it&#8217;s hard to watch a company open hundreds of positions in roles its CEO says won&#8217;t exist, and not wonder which audience is getting the real message.</p><h2>What I Got Wrong</h2><p>In October, I put Anthropic on the right side of the engagement/utility line.</p><p>The line was real. I just put Anthropic on the wrong side of it.</p><p>Utility AI is not inherently ethical. Helping a corporation replace 50% of its junior workforce is a utility. 
Processing intelligence for a bombing campaign is a utility. The word just means it solves a problem. It says nothing about whose problem or at what cost.</p><p>Anthropic did not follow OpenAI into engagement loops and emotional manipulation. They chose a different path to the same destination: a company whose growth rate makes caution impossible, whose safety frameworks exist to authorize releases rather than prevent them, and whose CEO&#8217;s warnings about AI&#8217;s dangers are indistinguishable from its marketing.</p><p>Responsible AI at $30 billion ARR is like an environmentally conscious oil company. The structure of the business makes the adjective decorative.</p><p>I was wrong to create an idol. Not because Anthropic betrayed its values. Because &#8220;responsible AI company&#8221; was always a market position, not a moral one. And at the speed they&#8217;re growing, the distinction between the two was never going to survive.</p><p>One more thing. In the original article, I criticized OpenAI for Sora and for its promise of verified erotica. In March 2026, OpenAI <a href="https://techcrunch.com/2026/03/29/why-openai-really-shut-down-sora/">shut Sora down</a>. It was burning a million dollars a day with under 500,000 users. Altman killed it and redirected compute to coding tools and enterprise. The erotica feature was <a href="https://techcrunch.com/2026/03/26/openai-abandons-yet-another-side-quest-chatgpts-erotic-mode/">shelved indefinitely</a> after internal pushback. The exact corrections I said a responsible AI company would make.</p><p>I got both directions wrong. The company I criticized course-corrected. The company I defended accelerated. This is not a pivot to OpenAI. I still don&#8217;t use it. I just have fewer reasons left to use Anthropic, either.</p><p>Look at the companies you&#8217;ve built your stack on. The ones you go to bat for in Twitter threads. 
At this scale, the math doesn&#8217;t work for any of them.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://techtrenches.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://techtrenches.dev/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[The West Forgot How to Make Things. Now It’s Forgetting How to Code]]></title><description><![CDATA[The defense industry lost the ability to make weapons when crisis hit. The same pattern is eroding software engineering skills. The timelines are identical.]]></description><link>https://techtrenches.dev/p/the-west-forgot-how-to-make-things</link><guid isPermaLink="false">https://techtrenches.dev/p/the-west-forgot-how-to-make-things</guid><dc:creator><![CDATA[Denis Stetskov]]></dc:creator><pubDate>Tue, 21 Apr 2026 14:04:11 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a56e63d9-5f03-432a-99de-2f46dd286b53_508x340.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8_UF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff32f29d9-18d6-448b-9b03-54c5711e5871_1600x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8_UF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff32f29d9-18d6-448b-9b03-54c5711e5871_1600x1000.png 424w, 
https://substackcdn.com/image/fetch/$s_!8_UF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff32f29d9-18d6-448b-9b03-54c5711e5871_1600x1000.png 848w, https://substackcdn.com/image/fetch/$s_!8_UF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff32f29d9-18d6-448b-9b03-54c5711e5871_1600x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!8_UF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff32f29d9-18d6-448b-9b03-54c5711e5871_1600x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8_UF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff32f29d9-18d6-448b-9b03-54c5711e5871_1600x1000.png" width="1456" height="910" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f32f29d9-18d6-448b-9b03-54c5711e5871_1600x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:910,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:121005,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://techtrenches.dev/i/192991846?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff32f29d9-18d6-448b-9b03-54c5711e5871_1600x1000.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!8_UF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff32f29d9-18d6-448b-9b03-54c5711e5871_1600x1000.png 424w, https://substackcdn.com/image/fetch/$s_!8_UF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff32f29d9-18d6-448b-9b03-54c5711e5871_1600x1000.png 848w, https://substackcdn.com/image/fetch/$s_!8_UF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff32f29d9-18d6-448b-9b03-54c5711e5871_1600x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!8_UF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff32f29d9-18d6-448b-9b03-54c5711e5871_1600x1000.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>In 2023, Raytheon&#8217;s president stood at the Paris Air Show and described what it took to <a href="https://www.defenseone.com/business/2023/06/raytheon-calls-retirees-help-restart-stinger-missile-production/388067/">restart Stinger</a> missile production. They brought back engineers in their 70s to teach younger workers how to build a missile from paper schematics drawn during the Carter administration. Test equipment had been sitting in warehouses for years. The nose cone still had to be attached by hand, exactly as it was forty years ago.</p><p>The Pentagon hadn&#8217;t bought a new Stinger in twenty years. Then Russia invaded Ukraine, and suddenly everyone needed them. The production line was shut down. The electronics were obsolete. The seeker component was out of production. An order placed in May 2022 wouldn&#8217;t deliver until 2026. Four years. Not because of money. Because the people who knew how to build them retired a decade earlier and nobody replaced them.</p><p>I run engineering teams in Ukraine. My people lived the other side of this equation. Not the factory floor. The receiving end. While Raytheon was struggling to restart production from forty-year-old blueprints, the US was shipping thousands of Stingers to Ukraine. RTX CEO Greg Hayes: ten months of war burned through thirteen years&#8217; worth of Stinger production. I&#8217;ve seen this pattern before. It&#8217;s happening in my industry right now.</p><h2>A Million Shells Nobody Could Make</h2><p>In March 2023, the EU promised Ukraine one million artillery shells within twelve months. European production capacity sat at 230,000 shells per year. 
Ukraine was consuming 5,000 to 7,000 rounds per day. Anyone with a calculator could see this wouldn&#8217;t work.</p><p>By the deadline, Europe delivered about half. Macron called the original promise reckless. An <a href="https://www.ftm.eu/articles/who-pays-for-ukraine-s-155mm-grenade">investigation</a> by eleven media outlets across nine countries found actual production capacity was roughly one-third of official EU claims. The million-shell mark wasn&#8217;t hit until December 2024, nine months late.</p><p>It wasn&#8217;t one bottleneck. It was all of them. France had halted domestic propellant production in 2007. Seventeen years of nothing. Europe&#8217;s single major TNT producer was in Poland. Germany had two days of ammunition stored. A Nammo plant in Denmark was shut down in 2020 and had to be restarted from scratch. The entire continent&#8217;s defense industry had been optimized for making small batches of expensive custom products. Nobody planned for volume. Nobody planned for crisis.</p><p>The U.S. wasn&#8217;t much better. One plant in Scranton, one facility in Iowa for explosive fill, no domestic TNT production since 1986. Billions of investment later, production still hadn&#8217;t hit half the target.</p><h2>Consolidate or Die</h2><p>This wasn&#8217;t an accident. In 1993, the Pentagon told defense CEOs to consolidate or die. Fifty-one major defense contractors collapsed into five. Tactical missile suppliers went from thirteen to three. Shipbuilders from eight to two. The workforce fell from 3.2 million to 1.1 million. A 65% cut.</p><p>The ammunition supply chain had single points of failure everywhere. One manufacturer for 155mm shell casings, sitting in Coachella, California, on the San Andreas Fault. One facility in Canada for propellant charges. Optimized for minimum cost with zero margin for surge. On paper, efficient. In practice, one bad day away from collapse.</p><h2>When Knowledge Dies, It Stays Dead</h2><p>Then there&#8217;s Fogbank. 
A classified material used in nuclear warheads. Produced from 1975 to 1989, then the facility was shut down. When the government needed to reproduce it for a warhead life extension program, they discovered they couldn&#8217;t. A GAO report found that almost all staff with production expertise had retired, died, or left the agency. Few records existed.</p><p>After $69 million in cost overruns and years of failed attempts, they finally produced viable Fogbank. Then discovered the new batch was too pure. The original process had relied on an unintentional impurity that was critical to the material&#8217;s function. Nobody knew. Not the engineers trying to reproduce it. Not even the original workers who made it decades earlier. Los Alamos called it an unknowing dependency in the original process.</p><p>A nuclear weapons program lost the ability to make a material it invented. The knowledge didn&#8217;t just leave with people. It was never fully understood by anyone.</p><p><em>(Correction: the original version stated that the workers who made Fogbank knew about the impurity. They didn&#8217;t. The dependency was unwitting, which makes the knowledge-loss argument stronger, not weaker. Thanks to John F. in the comments for catching this.)</em></p><h2>The Same Playbook</h2><p>I read the Fogbank story and recognized it immediately. Not the nuclear material. The pattern. Build capability over decades. Find a cheaper substitute. Let the human pipeline atrophy. Enjoy the savings. Then watch it all collapse when a crisis demands what you optimized away.</p><p>In defense, the substitute was the peace dividend. In software, it&#8217;s AI.</p><p>I wrote about the <a href="https://techtrenches.substack.com/p/ai-wont-save-us-from-the-talent-crisis">talent pipeline collapse</a> before. The hiring numbers and the junior-to-senior problem are documented. So is the <a href="https://techtrenches.dev/p/the-comprehension-extinction-ai-isnt">comprehension crisis</a>. 
What I didn&#8217;t have was the right historical parallel. Now I do.</p><p>And it tells you something the hiring data doesn&#8217;t: how long rebuilding actually takes.</p><h2>Rebuilding Takes Years. Always.</h2><p>Every major defense production ramp-up took three to five years for simple systems. Five to ten for complex ones. Stinger: thirty months minimum from order to delivery. Javelin: four and a half years to less than double production. 155mm shells: four years and still not at target despite five billion dollars invested. France only restarted propellant production in 2024, seventeen years after shutting it down.</p><p>Money was never the constraint. Knowledge was. <a href="https://www.rand.org/content/dam/rand/pubs/monographs/2007/RAND_MG608.1.pdf">RAND found</a> that 10% of technical skills for submarine design need ten years of on-the-job experience to develop, sometimes following a PhD. Apprenticeships in defense trades take two to four years, with five to eight years to reach supervisory competence.</p><p>Now map that onto software. A junior developer needs three to five years to become a competent mid-level engineer. Five to eight years to become senior. Ten or more to become a principal or architect. That timeline can&#8217;t be compressed by throwing money at it. It can&#8217;t be compressed by AI either.</p><p>A <a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/">METR</a> randomized controlled trial found that experienced developers using AI coding tools actually took 19% longer on real-world open source tasks. Before starting, they predicted AI would make them 24% faster. The gap between prediction and reality was 43 percentage points. When researchers tried to run a follow-up, a significant share of developers refused to participate if it meant working without AI. They couldn&#8217;t imagine going back.</p><h2>The Bill Always Comes Due</h2><p>The software industry is in year three of the same optimization. 
<a href="https://sfstandard.com/2025/02/27/salesforce-marcbenioff-layoffs-tech-agents/">Salesforce said</a> it won&#8217;t hire more software engineers in 2025. A LeadDev survey found 54% of engineering leaders believe AI copilots will reduce junior hiring long-term. A <a href="https://cra.org/crn/2025/10/cerp-pulse-survey-a-snapshot-of-2025-undergraduate-computing-enrollment-patterns/">CRA survey</a> of university computing departments found 62% reported declining enrollment this year.</p><p>I see it in code review. Review is now the bottleneck. AI generates code fast. Humans review it slow. The industry&#8217;s answer is predictable: let AI review AI&#8217;s code. I&#8217;m not doing that. I&#8217;ve reworked our pull request templates instead. Every PR now has to explain what changed, why, and what type of change it is, with before-and-after screenshots. Structured context so the reviewer isn&#8217;t guessing. I&#8217;m adding dedicated reviewers per project. More eyes, more chances to catch what the model missed.</p><p>But even that doesn&#8217;t solve the deeper problem. The skills you need to be effective now are different. Technical expertise alone isn&#8217;t enough anymore. You need people who can take ownership, communicate tradeoffs, push back on bad suggestions from a machine that sounds very confident. Leadership qualities. Our last hiring round tells you how rare that is: 2,253 candidates, 2,069 disqualified, 4 hired. A 0.18% conversion rate. The combination of technical skill and the judgment to know when the AI is wrong barely exists in the market anymore.</p><p>We document everything. Site Books, SDDs, RVS reports, boilerplate modules with full coverage. It works today, because the people reading those docs have the engineering expertise to act on them. What happens when they don&#8217;t? Honestly, I don&#8217;t know. Maybe AI in five years is good enough that it won&#8217;t matter. Maybe the problem stays manageable. 
I can&#8217;t predict the capabilities of models in 2031.</p><p>But crises don&#8217;t send calendar invites. Nobody expected a full-scale land war in Europe in 2022. The defense industry had thirty years to prepare and didn&#8217;t. Even Fogbank had records. There weren&#8217;t enough. The original workers didn&#8217;t fully understand their own process.</p><p>Five to ten years from now, we&#8217;ll need senior engineers. People who understand systems end to end, who can debug distributed failures at 2 AM, who carry institutional knowledge that exists nowhere in the codebase. Those engineers don&#8217;t exist yet because we&#8217;re not creating them. The juniors who should be learning right now are either not being hired or developing what a DoD-funded workforce study calls &#8220;AI-mediated competence.&#8221; They can prompt an AI. They can&#8217;t tell you what the AI got wrong.</p><p>It&#8217;s Fogbank for code. When juniors skip debugging and skip the formative mistakes, they don&#8217;t build the tacit expertise. And when my generation of engineers retires, that knowledge doesn&#8217;t transfer to the AI.</p><p>It just disappears.</p><p>The West already made this mistake once. The bill came due in Ukraine.</p><p>I know how this sounds. I know I&#8217;ve written about the talent pipeline before. The defense example isn&#8217;t about repeating the argument. It&#8217;s about showing what happens if the industry&#8217;s expectations don&#8217;t work out. Stinger, Javelin, Fogbank, a million shells nobody could make. That&#8217;s the cost of betting wrong on optimization. We&#8217;re making the same bet with software engineering right now.</p><p>Maybe AI gets good enough, and the bet pays off. Maybe it doesn&#8217;t. 
The defense industry thought peace would last forever, too.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://techtrenches.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://techtrenches.dev/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Everyone Wants a Better Team. Nobody Wants to Do Anything About It.]]></title><description><![CDATA[Same meeting. Scorecard says zero problems. Out loud, the same engineers describe a dozen. The gap between what people say and what they write]]></description><link>https://techtrenches.dev/p/everyone-wants-a-better-team-nobody</link><guid isPermaLink="false">https://techtrenches.dev/p/everyone-wants-a-better-team-nobody</guid><dc:creator><![CDATA[Denis Stetskov]]></dc:creator><pubDate>Tue, 14 Apr 2026 14:03:05 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1ab86ae4-b500-44d4-b8d6-2e89770e8dcd_508x340.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!P2bS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc877585-6d05-4d35-adee-587e8494091b_1600x1400.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!P2bS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc877585-6d05-4d35-adee-587e8494091b_1600x1400.png 424w, 
https://substackcdn.com/image/fetch/$s_!P2bS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc877585-6d05-4d35-adee-587e8494091b_1600x1400.png 848w, https://substackcdn.com/image/fetch/$s_!P2bS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc877585-6d05-4d35-adee-587e8494091b_1600x1400.png 1272w, https://substackcdn.com/image/fetch/$s_!P2bS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc877585-6d05-4d35-adee-587e8494091b_1600x1400.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!P2bS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc877585-6d05-4d35-adee-587e8494091b_1600x1400.png" width="1456" height="1274" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fc877585-6d05-4d35-adee-587e8494091b_1600x1400.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1274,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:190904,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://techtrenches.dev/i/192111374?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc877585-6d05-4d35-adee-587e8494091b_1600x1400.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!P2bS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc877585-6d05-4d35-adee-587e8494091b_1600x1400.png 424w, https://substackcdn.com/image/fetch/$s_!P2bS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc877585-6d05-4d35-adee-587e8494091b_1600x1400.png 848w, https://substackcdn.com/image/fetch/$s_!P2bS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc877585-6d05-4d35-adee-587e8494091b_1600x1400.png 1272w, https://substackcdn.com/image/fetch/$s_!P2bS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffc877585-6d05-4d35-adee-587e8494091b_1600x1400.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>We track two scorecard metrics in our department meetings: how many tasks were poorly defined, how many bugs weren&#8217;t reproducible. Engineers own the data. They&#8217;re supposed to log the count whenever they hit one. Three weeks of tracking before the tool broke. The numbers across the board: zero. Zero poorly defined tasks. Zero non-reproducible bugs.</p><p>Then we get to the department meeting. The scorecard goes on the screen. Zeros across the board, everyone nods. The discussion opens up, and within minutes the same engineers are saying out loud: this task was unclear, that bug couldn&#8217;t be reproduced, requirements changed mid-sprint twice this week. They say it casually. In conversation. As a follow-up to the very metric they just reviewed at zero. And next sprint they&#8217;ll log zero again.</p><p>That gap is the entire story.</p><h2>The Forms Are Silent. The People Aren&#8217;t.</h2><p>I&#8217;ve been running weekly health checks on my team for 18 months. Energy level, stress, meeting hours, context switches, one open-ended question. Hundreds of data points per person. Once I noticed the scorecard pattern, I went back through all of it.</p><p>One engineer reported &#8220;Normal week&#8221; as his energy for 20 out of 21 weeks. His stress field bounced between &#8220;Rip and Tear&#8221; and &#8220;Hell on Earth&#8221; the same period. Some weeks were clearly harder than others. The energy field? Copy-paste. Same answer. Every Friday.</p><p>Another engineer: &#8220;Energized, could climb mountains&#8221; for 17 out of 18 weeks. 
Either he discovered the secret to permanent workplace happiness, or he stopped reading the question around week three.</p><p>A third: &#8220;Rip and Tear&#8221; for 18 straight weeks. Eighteen identical data points is not feedback. It&#8217;s a checkbox.</p><p>PM feedback runs the same way. One PM&#8217;s responses for an engineer over 14 weeks: &#8220;good&#8221;, &#8220;good&#8221;, &#8220;good&#8221;, &#8220;yes&#8221;, &#8220;yes&#8221;, &#8220;no&#8221;, &#8220;good&#8221;, &#8220;good.&#8221; That&#8217;s not feedback. That&#8217;s a pulse check confirming the person is alive. Different PM, different engineer, same problem. Generic words filling required fields.</p><p>But here&#8217;s the thing. Every one of these people, in the right conversation, can tell you exactly what&#8217;s wrong on their team. In a DM. In a side conversation after a call. In the unstructured five minutes when someone with enough authority sits down and physically drags it out of them. The information exists. It just won&#8217;t go into anything that looks like a formal channel. Retros are the same silence as the scorecards unless a strong facilitator pulls problems out of people one by one. Forms produce &#8220;normal week.&#8221; Surveys produce green dashboards. The honest answer only shows up when no one&#8217;s writing it down.</p><h2>Complaining Is Free. Logging Is Expensive.</h2><p>When you complain out loud in a meeting, you&#8217;re performing dissatisfaction. You said the thing. You were heard. The room reacted. Whatever frustration you brought into the meeting got released into it. You can move on. Verbal complaining closes a loop. It&#8217;s catharsis with witnesses. By the time the meeting ends, the emotional cycle is complete and the conversation has moved to the next agenda item. Nobody is going to dig up your remark next quarter.</p><p>When you write a number into a scorecard, you open a loop. The number doesn&#8217;t dissolve at the end of the meeting. 
It sits in the tool. Next sprint there&#8217;s another number next to it. Then another. Pretty soon you have 23 poorly defined tasks across a quarter, which is no longer a complaint. It&#8217;s a case. Someone has to either fix the underlying problem, or push back on the data, or have an awkward conversation with the PM whose tasks generated those numbers, or admit that the metric isn&#8217;t working and kill it. Writing creates an open ticket. Open tickets demand action.</p><p>This is why the scorecard stays clean even when the same engineers are openly describing the problem in the same meeting. Talking about unclear tasks in conversation gets the frustration out of their system. Logging the count would commit them to a position they&#8217;d have to defend, week after week, until something actually changed or somebody got hurt. Complaining is free. Logging is expensive.</p><p>A <a href="https://onlinelibrary.wiley.com/doi/10.1002/job.2886">2025 study</a> in the Journal of Organizational Behavior interviewed 98 people across three organizations about negative feedback. One quote captured the math exactly: &#8220;I really balance in giving negative feedback. Is it worth for me to share or not? It is easier not to share than to share.&#8221;</p><p>That&#8217;s my whole team. Every Friday.</p><h2>It&#8217;s Not Fear. It&#8217;s Cost.</h2><p>The standard answer here is psychological safety. I&#8217;ve read Edmondson. I believe it matters. But she <a href="https://neuroleadership.com/your-brain-at-work/psychological-safety-and-accountability-insights-from-amy-edmondson">said this herself</a>: psychological safety without accountability creates a comfort zone. People feel safe but don&#8217;t push for excellence because there&#8217;s no cost to staying silent. She&#8217;s been explicit about the misuse: &#8220;People are starting to use the concept as a weapon. That&#8217;s completely incorrect.&#8221;</p><p>My team feels safe. 
They tell me uncomfortable things in meetings all the time. The problem isn&#8217;t that they&#8217;re afraid of me. The problem is that being honest costs effort, real feedback costs awkwardness, and writing &#8220;I&#8217;m struggling&#8221; instead of &#8220;normal week&#8221; costs two extra minutes nobody wants to spend. Every Friday, they decide it&#8217;s not worth it.</p><p>The research confirms this is universal. A 2024 <a href="https://www.visier.com/blog/new-survey-employee-engagement-productivity-impact/">Visier survey</a> found that 47% of employees feel pressured to withhold honest feedback. Only 7% feel their company acts on the feedback it gets. The standard read of these numbers is sympathetic: people stop being honest because nothing changes. I think that&#8217;s only half the story. People stop being honest because they confuse &#8220;I haven&#8217;t seen the change yet&#8221; with &#8220;nobody&#8217;s listening.&#8221; Two or three weeks pass without a visible result and they decide the loop is dead. They don&#8217;t account for the fact that decisions take time, work happens behind closed doors, other priorities compete for the same hours, and the change they wanted might already be in motion three layers up. They just stop. A <a href="https://pubmed.ncbi.nlm.nih.gov/35324242/">2022 study</a> found only 2.6% of people in a field experiment told someone about visible food on their face. People want honest feedback. They just don&#8217;t want to be the one giving it.</p><p>PM feedback is even worse. When an engineer on my team got a new PM, his scores dropped from 3.71 to 2.43 in a single month. Same engineer, same work, same projects. The previous PM had rated &#8220;Always&#8221; across the board for months. No friction, no conversation, path of least resistance. The new PM started writing &#8220;Sometimes&#8221; and &#8220;Often.&#8221; The engineer&#8217;s performance hadn&#8217;t changed. The PM&#8217;s tolerance for awkwardness had. 
Only <a href="https://knowledge.insead.edu/leadership-organisations/how-managers-self-sabotage-when-giving-negative-feedback">5% of employees</a> globally believe their managers give candid feedback. 69% of managers say they&#8217;re uncomfortable communicating with employees. Your PM isn&#8217;t lying maliciously. They&#8217;re avoiding a conversation that feels like conflict.</p><h2>The Leadership That Doesn&#8217;t Exist</h2><p>This isn&#8217;t a tool problem. The tool is fine. Five questions, two minutes, every Friday. The scorecard was two numbers. None of this is hard.</p><p>This is a leadership problem at the individual level. Not management leadership. The willingness of every person on a team to take ownership of the environment they work in. To fill out a health check honestly instead of copying last week&#8217;s answer. To write the unclear-task count even when it&#8217;s awkward. To tell a PM &#8220;your feedback is useless, give me something I can act on.&#8221; To be the first person in a meeting to say the thing that needs saying and then be the first person to write it down where it can&#8217;t be ignored.</p><p>Almost nobody does this. Not because they&#8217;re bad people, not because they don&#8217;t care, but because being the person who creates a record is the person who has to deal with what the record reveals. It&#8217;s easier to let it stay verbal. It&#8217;s easier to let someone else go first. It&#8217;s easier to ship the comment in conversation and then click &#8220;Normal week&#8221; in the form.</p><p>At the end of every week I feel like I&#8217;m running a kindergarten. One engineer doesn&#8217;t flag a problem at all. Another flags it but to the wrong person. They come to me about a misunderstanding with a colleague instead of going to the colleague directly. Now I have to walk over, decode what actually happened, and broker the conversation two adults could have had themselves in five minutes. 
Triangulation as the default communication pattern. Coordination overhead generated entirely by adults who refuse to act like adults.</p><p>I wrote about our <a href="https://techtrenches.substack.com/p/the-feedback-loop-that-actually-works">feedback system</a> and <a href="https://techtrenches.substack.com/p/my-monthly-11-formula-4-health-checks">1:1 formula</a> before (in hindsight, the titles were too loud, lol). Those articles described the mechanics. Eighteen months later, what the mechanics revealed is that systems don&#8217;t create culture. People do. And right now, most people in most companies are choosing the version of themselves that protects the relationship over the version that improves the situation. This isn&#8217;t an engineering problem. I just happen to run an engineering team, so this is where I see it. The same dysfunction is in every department, every industry, every workplace where adults are asked to give honest input about their environment.</p><h2>What I Got Wrong</h2><p>Will I keep running health checks? Yes. I&#8217;m too stubborn to admit that I failed. Am I frustrated? Absolutely. Did I fail as a manager? Yes. Because I wasn&#8217;t able to teach my people that change begins with us, not with a process or a tool. Will I repeat four times per month that filling out the form honestly matters, that the comment field exists for a reason, that the scorecard wants the real number? Yes. Every single month.</p><p>Everyone wants a better environment. Almost nobody wants to be uncomfortable enough to build one. I&#8217;ll keep pushing until they do or until I run out of stubbornness. So far, the stubbornness is winning.</p><div><hr></div><p><em>If your feedback systems are producing theater instead of signal, hit reply and tell me what you&#8217;ve tried. I read every response.<br><br>PS. The comments on the last two articles meant more to this old man than you'd think. 
By the time this one publishes I'll be on vacation, but please keep them coming. I'll see every reply when I'm back and I promise to write back to each one.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://techtrenches.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://techtrenches.dev/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[The Human Cost of 10x: How AI Is Physically Breaking Senior Engineers]]></title><description><![CDATA[AI tools increased code review volume by 98% but your brain still runs at 10 bits per second. The physical toll on senior engineers is measurable.]]></description><link>https://techtrenches.dev/p/the-human-cost-of-10x-how-ai-is-physically</link><guid isPermaLink="false">https://techtrenches.dev/p/the-human-cost-of-10x-how-ai-is-physically</guid><dc:creator><![CDATA[Denis Stetskov]]></dc:creator><pubDate>Tue, 07 Apr 2026 14:04:04 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/021719cd-9cad-4978-ae96-86f6958e091e_508x340.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!eB5_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a5830b1-62c5-4e21-8c69-a7f2f3644200_1600x1400.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!eB5_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a5830b1-62c5-4e21-8c69-a7f2f3644200_1600x1400.png 424w, 
https://substackcdn.com/image/fetch/$s_!eB5_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a5830b1-62c5-4e21-8c69-a7f2f3644200_1600x1400.png 848w, https://substackcdn.com/image/fetch/$s_!eB5_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a5830b1-62c5-4e21-8c69-a7f2f3644200_1600x1400.png 1272w, https://substackcdn.com/image/fetch/$s_!eB5_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a5830b1-62c5-4e21-8c69-a7f2f3644200_1600x1400.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!eB5_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a5830b1-62c5-4e21-8c69-a7f2f3644200_1600x1400.png" width="1456" height="1274" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9a5830b1-62c5-4e21-8c69-a7f2f3644200_1600x1400.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1274,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:213166,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://techtrenches.dev/i/191029374?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a5830b1-62c5-4e21-8c69-a7f2f3644200_1600x1400.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!eB5_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a5830b1-62c5-4e21-8c69-a7f2f3644200_1600x1400.png 424w, https://substackcdn.com/image/fetch/$s_!eB5_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a5830b1-62c5-4e21-8c69-a7f2f3644200_1600x1400.png 848w, https://substackcdn.com/image/fetch/$s_!eB5_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a5830b1-62c5-4e21-8c69-a7f2f3644200_1600x1400.png 1272w, https://substackcdn.com/image/fetch/$s_!eB5_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a5830b1-62c5-4e21-8c69-a7f2f3644200_1600x1400.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Last Tuesday, I stood up from my desk at 7 PM and felt a vacuum in the front of my skull. Not a headache. Not fatigue. A physical emptiness, like the frontal lobe had been running at redline all day and finally shut down. I stood there for ten seconds trying to remember what I was going to do next. Nothing came.</p><p>In the past year, any given Tuesday has come to push a week&#8217;s worth of information through my brain. Code review is the worst of it, but the real killer is the context switches. AI-generated PRs, client architecture decisions, three Slack threads about deployment issues, a candidate&#8217;s CV that needs review, an air defense alarm outside the window, then back to reviewing code that a machine wrote in seconds and that I need hours to validate. Each of these demands a different mental model. Each one burns working memory. By 4 PM I&#8217;m making decisions I wouldn&#8217;t trust from a junior. By 7 PM my brain is physically empty.</p><p>The industry calls this &#8220;10x productivity.&#8221; I call it what it is: a system that generates output at machine speed and forces humans to process it at biological speed.</p><h2>Workload Creep</h2><p>In February 2026, UC Berkeley researchers <a href="https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it">published findings</a> from eight months embedded inside a 200-person tech company. Over 40 in-depth interviews. Their conclusion: AI doesn&#8217;t reduce work. It intensifies it.</p><p>They found three mechanisms of &#8220;workload creep.&#8221; Task expansion: everyone&#8217;s scope inflates because AI makes it possible to do more. 
Blurred boundaries: AI prompting happens during lunch, commute, evenings. Implicit pressure: when colleagues visibly do more with AI, expectations rise for everyone.</p><p>The <a href="https://investors.upwork.com/news-releases/news-release-details/upwork-research-reveals-new-insights-ai-human-work-dynamic">Upwork Research Institute</a> quantified it: 77% of employees using AI say it has added to their workload. Not reduced. Added. 71% report burnout.</p><p>The finding that keeps me up at night: workers who report the highest AI productivity gains are the most burned out. 88% burnout rate among the &#8220;most productive&#8221; AI users. They&#8217;re twice as likely to quit.</p><p>The people who look best on your dashboard are the ones closest to walking out the door.</p><h2>Your Brain Runs at 10 Bits Per Second</h2><p>In 2025, Zheng and Meister <a href="https://www.sciencedirect.com/science/article/pii/S0896627324008080">published in </a><em><a href="https://www.sciencedirect.com/science/article/pii/S0896627324008080">Neuron</a></em> that the human brain processes conscious, analytical thought at approximately 10 bits per second. Your sensory systems gather data at roughly 1 billion bits per second. But the bottleneck for code review, the part where you actually think, is 10 bits per second.</p><p>Working memory holds roughly 4 chunks of information at a time. The <a href="https://graphite.com/blog/code-review-best-practices">SmartBear/Cisco study</a> established numbers everyone ignores: defect detection drops from 87% for PRs under 100 lines to 28% for PRs over 1,000 lines. Quality collapses after 60 minutes.</p><p>Now look at what AI did to the review queue.</p><p><a href="https://github.blog/news-insights/octoverse/octoverse-a-new-developer-joins-github-every-second-as-ai-leads-typescript-to-1/">GitHub&#8217;s Octoverse 2025</a> shows 43.2 million pull requests merged per month. Up 23% year-over-year. 
Lines of code per developer grew from 4,450 to <a href="https://shiftmag.dev/state-of-code-2025-7978/">7,839</a> in eight months. A 76% increase.</p><p>Faros AI analyzed 10,000+ developers and found AI users <a href="https://www.faros.ai/blog/bain-technology-report-2025-why-ai-gains-are-stalling">merge 98%</a> more pull requests with AI assistance. Every single one lands on a senior engineer&#8217;s desk.</p><p>As <a href="https://www.technologyreview.com/2025/12/15/1128352/rise-of-ai-coding-developers-2026/">MIT reported</a>: juniors produce far more code with AI tools, but the sheer volume is saturating senior developers&#8217; capacity to review. One OCaml maintainer rejected a 13,000-line AI-generated PR outright. Nobody had the bandwidth.</p><p>I wrote about the <a href="https://techtrenches.dev/p/your-claudemd-is-a-wish-list-not">supervision tax</a> recently. The METR data showed experienced developers actually got slower with AI tools while feeling faster. The gap between perception and reality is the most dangerous finding in any of this. You can&#8217;t fix what you can&#8217;t feel.</p><h2>Why Expertise Makes It Worse</h2><p>In 1983, Lisanne Bainbridge published &#8220;Ironies of Automation&#8221; in <em>Automatica</em>. Her core finding: the more sophisticated an automated system becomes, the more demanding the human role within it. What remains after automation is the most ambiguous, most complex, least supported work.</p><p>Microsoft Research <a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2024/10/2024-Ironies_of_Generative_AI-IJHCI.pdf">confirmed this</a> for generative AI in 2024: AI systems can make hard tasks even harder, leaving users with the same or increased cognitive load.</p><p>The mechanism is asymmetric. When I write code, I externalize a mental model that already exists. The thinking is done before the typing starts. 
When I review AI-generated code, I have to reverse-engineer somebody else&#8217;s reasoning out of an artifact produced by a system that has no idea what our business does. Fundamentally harder.</p><p>A <a href="https://clutch.co/resources/devs-use-ai-generated-code-they-dont-understand">Clutch survey</a> of 800 software professionals found 59% of developers use AI-generated code they don&#8217;t fully understand. But seniors can&#8217;t afford that luxury. Their job is to catch what looks right but isn&#8217;t.</p><p>The <a href="https://www.qodo.ai/reports/state-of-ai-code-quality/">Qodo report</a> confirmed the cost distribution: senior engineers report the lowest confidence in shipping AI-generated code at 22%. Context pain increases with experience: 41% among juniors versus 52% among seniors. As I covered in <a href="https://techtrenches.dev/p/your-brain-on-autopilot-the-cost">cognitive offloading</a>, most workers using AI skip critical thinking entirely. Seniors who do think critically, which is their entire job, absorb the cognitive cost everyone else offloads.</p><h2>The Body Keeps Score</h2><p>The cognitive damage is only half of it. The body takes the rest.</p><p><a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC11901492/">Computer Vision Syndrome</a> affects 74% of screen users during periods of increased screen time, and digital eye strain severity gets significantly worse when cognitive load goes up. AI-intensified code review doesn&#8217;t just mean more screen hours. It makes each hour more physically damaging.</p><p>A 2024 meta-analysis covering <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC10909938/">26,916 participants</a> found burnout increases cardiovascular disease risk by 21%. Those in the upper burnout quintile had a 79% higher risk of coronary heart disease. The <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC8034523/">largest IT study</a> found metabolic syndrome prevalence of 32% among long-term sedentary programmers. 
Double the general population.</p><p>Then sleep. Work-related rumination <a href="https://link.springer.com/article/10.1007/s11818-024-00481-4">mediates the link</a> between work stress and reduced sleep quality. When I close my laptop, my brain doesn&#8217;t stop. It replays the PR I didn&#8217;t finish. The dependency I flagged but couldn&#8217;t trace.</p><p>More code review during the day, worse sleep at night, worse decisions the next morning, more rubber-stamped PRs, more bugs in production, more stress. Repeat until something breaks. Usually the human.</p><h2>The Dashboard Lies</h2><p><a href="https://www.gitclear.com/ai_assistant_code_quality_2025_research">GitClear analyzed</a> 211 million changed lines. Duplicated code blocks increased eightfold. Code churn rose from 5.5% to 7.9%. AI-generated code averages <a href="https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report">1.7x more bugs</a> per PR than human-written code. Logic defects up 75%. Performance issues 8x more frequent.</p><p>Faros AI&#8217;s conclusion after analyzing 10,000+ developers: despite merging 98% more pull requests with AI, company-wide delivery showed no measurable organizational impact on throughput or quality.</p><p>Sonar&#8217;s CEO identified the hidden danger: AI models are getting better at avoiding obvious bugs and security holes, but structural flaws now constitute more than 90% of issues. You&#8217;re being lulled into a false sense of security. The easy problems get solved. The hard problems get hidden beneath clean-looking code that passes every automated check. And the people who can find them are buried under a volume of output that exceeds human cognitive bandwidth by design.</p><p>More code. More bugs. More review burden. Same output. Worse humans.</p><h2>The Math Doesn&#8217;t Work</h2><p>Here&#8217;s what nobody is doing the arithmetic on. AI just grew the demand for senior engineering judgment by 76 to 98%. 
Every AI-generated PR needs a human who can catch what the machine got wrong, spot the structural flaw on line 847, trace a logic error three services downstream. The supply of those humans didn&#8217;t move. And as I&#8217;ve covered in <a href="https://techtrenches.dev/p/ai-wont-save-us-from-the-talent-crisis">the talent crisis</a> and <a href="https://techtrenches.dev/p/the-comprehension-extinction-ai-isnt">comprehension extinction</a>, the pipeline that produces them is being hollowed out by the same tools creating the demand.</p><p>But here&#8217;s where the senior engineer actually lives in 2026. Industry layoffs on one side, hundreds of thousands of engineers cut since 2022, the next round always one earnings call away. 10x productivity expectations on the other, set by people who have never reviewed an AI-generated PR in their lives. In the middle, somebody exhausted and burned out, with a choice to make every morning: trust the AI output, because it worked the last twenty times, didn&#8217;t it, or keep validating every line until the body gives out.</p><p>How long can the average human hold that line?</p><p>And the worst part: validating or trusting, the engineer owns the outcome either way. When production goes down at 3 AM, it&#8217;s your name on the commit. Your PR that got merged. Your incident report. There is no version of this choice where you&#8217;re not on the hook.</p><p>It&#8217;s a rhetorical question. We already know the answer. The data in this article is the answer.</p><p>If you&#8217;re a senior engineer feeling this in your body, you&#8217;re not alone and you&#8217;re not weak. The eye strain. The sleep that doesn&#8217;t restore. The vacuum in your head at the end of the day. You&#8217;re doing a job that didn&#8217;t exist eighteen months ago, with cognitive equipment that hasn&#8217;t changed in 200,000 years. Reply to this email and tell me what it feels like for you. 
I&#8217;m collecting data for a follow-up.</p><p><em>Subscribe for weekly insights from the trenches of engineering leadership. No theory, just practical systems that work.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://techtrenches.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://techtrenches.dev/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[The Snake That Ate Itself: What Claude Code’s Source Revealed About AI Engineering Culture]]></title><description><![CDATA[Anthropic claimed 100% of Claude Code is AI-written. A source leak exposed a 3,167-line function, regex sentiment analysis, and 250K wasted API calls daily]]></description><link>https://techtrenches.dev/p/the-snake-that-ate-itself-what-claude</link><guid isPermaLink="false">https://techtrenches.dev/p/the-snake-that-ate-itself-what-claude</guid><dc:creator><![CDATA[Denis Stetskov]]></dc:creator><pubDate>Wed, 01 Apr 2026 14:01:08 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/48509067-ecb2-43fb-b21e-0085b2e0cd07_508x340.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3DMB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dd0a312-1129-4301-83c6-ab58be2ba435_1600x1400.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3DMB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dd0a312-1129-4301-83c6-ab58be2ba435_1600x1400.png 424w, 
https://substackcdn.com/image/fetch/$s_!3DMB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dd0a312-1129-4301-83c6-ab58be2ba435_1600x1400.png 848w, https://substackcdn.com/image/fetch/$s_!3DMB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dd0a312-1129-4301-83c6-ab58be2ba435_1600x1400.png 1272w, https://substackcdn.com/image/fetch/$s_!3DMB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dd0a312-1129-4301-83c6-ab58be2ba435_1600x1400.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!3DMB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dd0a312-1129-4301-83c6-ab58be2ba435_1600x1400.png" width="1456" height="1274" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8dd0a312-1129-4301-83c6-ab58be2ba435_1600x1400.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1274,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:183602,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://techtrenches.dev/i/192823710?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dd0a312-1129-4301-83c6-ab58be2ba435_1600x1400.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!3DMB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dd0a312-1129-4301-83c6-ab58be2ba435_1600x1400.png 424w, https://substackcdn.com/image/fetch/$s_!3DMB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dd0a312-1129-4301-83c6-ab58be2ba435_1600x1400.png 848w, https://substackcdn.com/image/fetch/$s_!3DMB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dd0a312-1129-4301-83c6-ab58be2ba435_1600x1400.png 1272w, https://substackcdn.com/image/fetch/$s_!3DMB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8dd0a312-1129-4301-83c6-ab58be2ba435_1600x1400.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>On December 27, 2025, Anthropic&#8217;s lead engineer Boris Cherny posted on X: &#8220;In the last thirty days, 100% of my contributions to Claude Code were written by Claude Code.&#8221; 259 pull requests. 497 commits. 40,000 lines added. 1.3 million views. The tech world applauded.</p><p>Three months later, a packaging mistake <a href="https://www.theregister.com/2026/03/31/anthropic_claude_code_source_code/">exposed 512,000 lines</a> of that code to the public. Leaks happen. Companies recover. The leak isn&#8217;t the story.</p><p>The code is the story.</p><p>64,464 lines of core TypeScript serving paying customers. A single function spanning 3,167 lines. Regex for sentiment analysis at a company that builds the world&#8217;s most advanced language model. A known bug burning 250,000 API calls daily, documented in a comment and shipped anyway.</p><p>Anthropic responded to the leak. Packaging error. Human mistake. <a href="https://analyticsindiamag.com/ai-news/claude-code-leak-was-a-manual-error-and-no-one-was-fired">No one fired</a>. They never responded to the code. Because the leak was an accident. The code was a choice.</p><h2>The Auction Nobody Won</h2><p>To understand what happened, you need to watch the numbers climb.</p><p>March 2025. CEO Dario Amodei at the Council on <a href="https://www.businessinsider.com/anthropic-ceo-ai-90-percent-code-3-to-6-months-2025-3">Foreign Relations</a>: &#8220;We&#8217;re 3 to 6 months from a world where AI is writing 90% of the code.&#8221;</p><p>May 2025. 
Boris Cherny on the <a href="https://www.latent.space/p/claude-code">Latent Space</a> podcast: &#8220;Maybe 80-90% Claude-written code overall.&#8221;</p><p>September 2025. Amodei again, hedging now: &#8220;70, 80, 90% of the code written at Anthropic is written by Claude.&#8221; Notice the range. 70 is not 90. But journalists ran with 90.</p><p>October 2025. Amodei at <a href="https://officechai.com/ai/my-prediction-of-ai-writing-90-of-code-is-already-true-at-anthropic-anthropic-ceo-dario-amodei/">Dreamforce</a> with Marc Benioff: &#8220;I made this prediction that in six months, 90% of code would be written by AI models. That is absolutely true now.&#8221; When Benioff pressed, Amodei walked it back: &#8220;Not uniformly.&#8221;</p><p>December 2025. Cherny&#8217;s tweet. 100%.</p><p>February 2026. CPO Mike Krieger at <a href="https://www.itpro.com/software/development/anthropic-labs-chief-mike-krieger-claims-claude-is-essentially-writing-itself-and-it-validates-a-bold-prediction-by-ceo-dario-amodei">Cisco AI Summit</a>: &#8220;Right now for most products at Anthropic, it&#8217;s effectively 100%.&#8221;</p><p>March 7, 2026. Cherny confirmed again: &#8220;Claude Code is 100% written by Claude Code.&#8221;</p><p>March 31, 2026. The source map leaked.</p><p>Every two to three months, the number went up like a bidding war where the bidder is also the auctioneer. A <a href="https://www.lesswrong.com/posts/prSnGGAgfWtZexYLp/is-90-of-code-at-anthropic-being-written-by-ais">LessWrong analysis</a> later called these claims &#8220;misleading/hype-y,&#8221; noting the metrics were never defined. Is it 90% of lines committed? 90% of engineering effort? 90% of characters typed? The distinction matters enormously. Anthropic never clarified. The ambiguity was the point.</p><h2>What 100% Looks Like in Practice</h2><p>So the number reached 100%. Then the source leaked. 
And for the first time, anyone could see what 100% actually produced.</p><p>A file called <code>print.ts</code> contained a single function spanning 3,167 lines with 486 branch points and 12 levels of nesting. One HN commenter catalogued what lived inside that function: the agent run loop, SIGINT handling, rate limiting, AWS authentication, MCP lifecycle management, plugin loading, team-lead polling via a <code>while(true)</code> loop, model switching, and turn interruption recovery. His verdict: this should be 8 to 10 separate modules. Nobody disagreed.</p><p><code>QueryEngine.ts</code> ran 46,000 lines. <code>Tool.ts</code> hit 29,000. <code>commands.ts</code> reached 25,000. The entry point <code>main.tsx</code> was 785 KB.</p><p>But the detail that spread fastest was <a href="https://alex000kim.com/posts/2026-03-31-claude-code-source-leak/">the regex</a>. In <code>userPromptKeywords.ts</code>, the company with the world&#8217;s most advanced language model was detecting user frustration with: <code>/\b(wtf|shit|fuck|horrible|awful|terrible)\b/i</code></p><p>Pattern matching for sentiment analysis. At an LLM company. One HN commenter delivered the line everyone quoted: that&#8217;s like a trucking company using horses to haul parts. Defenders argued regex is faster and cheaper than an inference call. They&#8217;re right. But that&#8217;s the engineering culture talking. Cheap beats correct. Fast beats good. Ship it.</p><h2>What This Code Does in Production</h2><p>Bad structure is one thing. You can argue it's style. But the leaked source also showed what happens when code like this runs at scale.</p><p>The leaked source contained a comment in <code>autoCompact.ts</code> that became a symbol: &#8220;1,279 sessions had 50+ consecutive failures (up to 3,272) in a single session, wasting ~250K API calls/day globally.&#8221;</p><p>The fix was three lines of code. Set a maximum failure threshold, then disable compaction for the session. 
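</p><p>A minimal sketch of what such a guard could look like (names here are hypothetical; the leaked implementation is not reproduced):</p>

```typescript
// Hypothetical sketch, not Anthropic's code: cap consecutive compaction
// failures per session instead of retrying indefinitely.
const MAX_COMPACT_FAILURES = 3;

interface SessionState {
  compactFailures: number;
  compactionDisabled: boolean;
}

function recordCompactionFailure(session: SessionState): void {
  session.compactFailures += 1;
  if (session.compactFailures >= MAX_COMPACT_FAILURES) {
    // Stop retrying for this session rather than burning API calls.
    session.compactionDisabled = true;
  }
}

function shouldAttemptCompaction(session: SessionState): boolean {
  return !session.compactionDisabled;
}
```

<p>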
Three lines to stop burning a quarter million API calls daily. Someone knew about the problem. Someone wrote the comment documenting it. Then they shipped it anyway.</p><p>Memory consumption told a similar story. Community benchmarks showed 7 Claude Code processes consuming 5.3 GB of RAM. GitHub issues documented worse: one process allocating 36.5 GB peak on an 18 GB machine. Another reaching 93 GB heap allocation within five minutes.</p><p>And the issue tracker itself was automated into silence. A Claude Sonnet-powered deduplication bot processed every new issue. A sweep bot marked issues stale after 14 days and closed them 14 days later. A lock bot prevented comments on closed issues after 7 days. <a href="https://gist.github.com/azkore/934e5387579efb17e1080402efedf13d">An analysis</a> estimated that 49 to 71% of all 26,792 issue closures were bot-driven. <a href="https://github.com/anthropics/claude-code/issues/38335">Issue #38335</a> had 201 upvotes and zero team responses. Labeled &#8220;invalid.&#8221;</p><h2>&#8220;Go Faster, Not More Process&#8221;</h2><p>Documented bugs. Wasted API calls. Users filing issues that bots close. All of this was visible before the leak. The leak just confirmed it was a choice, not an oversight. And when the leak happened, the response confirmed the choice was deliberate.</p><p>Cherny acknowledged the human error: &#8220;Our deploy process has a few manual steps, and we didn&#8217;t do one of the steps correctly.&#8221; Then he added: &#8220;Like with any other incident, the counter-intuitive answer is to solve the problem by finding ways to go faster, rather than introducing more process. In this case more automation &amp; claude checking the results.&#8221;</p><p>This isn&#8217;t one person&#8217;s opinion. It&#8217;s the team philosophy. 
As one commenter in the <a href="https://news.ycombinator.com/item?id=47584540">HN thread</a> explained: &#8220;The claude code team ethos is that there is no point in code-reviewing ai-generated code. Simply update your spec and regenerate.&#8221;</p><p>Read that again. The response to leaking code with a 3,167-line function, a regex for sentiment analysis, and bugs that basic integration tests would catch is not to add tests. Not to add code review. Not to add process. It&#8217;s to go faster. Regenerate. And have Claude check Claude&#8217;s work.</p><p>This is the ouroboros. The snake eating its own tail. AI writes the code. AI reviews the code. AI checks the deployment. When it breaks, the answer is more AI. The loop has no exit condition.</p><p>As I wrote in <a href="https://techtrenches.substack.com/p/the-great-software-quality-collapse">Quality Collapse</a>, we&#8217;ve normalized catastrophe in software engineering. That piece tracked an industry-wide pattern: ship broken, fix later, throw hardware at the problem. Claude Code is no longer an example of the pattern. It&#8217;s the specimen.</p><h2>Where Does This Philosophy Stop?</h2><p>If &#8220;don&#8217;t review, regenerate&#8221; is how they build the product, it raises an obvious question: what about the code you can&#8217;t see?</p><p>Engineering culture doesn&#8217;t have a switch. The team that ships <code>print.ts</code> with 12 levels of nesting doesn&#8217;t suddenly become disciplined when writing model training code. Same people. Same processes. Same code reviews, or lack of them.</p><p>They justified the leak. They explained the packaging error. They didn&#8217;t justify the code. That silence tells you everything. The quality is fine by them. This is how they build things. On purpose.</p><p>There are indirect signals that the rot goes deeper. Eight service outages in a single month. 
A source map leak that <a href="https://mlq.ai/news/anthropics-claude-code-exposes-source-code-through-packaging-error-for-second-time/">happened twice</a> (the first was quietly patched in early 2025). An Axios dependency that was <a href="https://www.tomshardware.com/tech-industry/cyber-security/axios-npm-package-compromised-in-supply-chain-attack-that-deployed-a-cross-platform-rat">compromised</a> by a supply chain attack on the same day as the leak. 74 npm dependencies for what is essentially a CLI wrapper around an API.</p><p>And here&#8217;s the pattern that makes it sustainable, temporarily: when you have billions in revenue and functionally unlimited compute, you feed technical debt with resources instead of fixing it. The function is 3,167 lines? Don&#8217;t refactor, add more RAM. The autoCompact bug burns 250,000 API calls? The margin absorbs it. The model regresses? Throw more GPU hours at training.</p><p>This works while money flows. Anthropic is a startup that scaled faster than it could build engineering practices. The recursive loop of AI-writes-AI-checks-AI-fixes masks the absence of fundamentals. But compute gets expensive. Revenue cycles turn. And technical debt that was papered over with resources becomes a debt trap with no exit.</p><h2>The Uncomfortable Truth</h2><p>The company that sells AI coding tools cannot build a quality product with its own AI coding tools. The percentages were always the pitch, not the product. 80. 90. 95. 100. Nobody asked what 100% actually produces until the source code answered for them.</p><p>AI amplifies whatever is already there. Good discipline becomes great output. No discipline becomes technical debt at machine speed. Anthropic chose a direction. Go faster. Have Claude check Claude. 
And when it breaks, go faster still.</p><p>If this is the new quality standard from the company pulling our industry forward, then I&#8217;m not sure I want to go where the industry is going.</p><p>My grandfather was an electrical engineer. He told me: do it well, or don&#8217;t do it at all. Simple rule. It guided how I built teams, how I shipped software, how I evaluated every project for 13 years. Quality wasn&#8217;t a feature. It was the floor.</p><p>That floor is gone. Quality is a relic now. Nobody wants it. Nobody pays for it. Nobody measures it. The metric is velocity. The metric is percentage of code generated. The metric is how fast you can ship a 3,167-line function that burns a quarter million API calls daily and call it 100% AI-written.</p><p>I&#8217;m seriously considering a pivot to security. Leaks, supply chain attacks, and production code that reads like a rough draft are the new normal. Someone will need to clean up after the vibe coders. That&#8217;s a growth industry.</p><p>Or maybe I&#8217;ll become an electrician. My grandfather&#8217;s trade. At least when you wire a panel correctly, it stays correct. No one ships a hot fix that reverses your ground fault protection. No bot auto-closes your inspection report after 60 days.</p><p>One thing I know for certain: I don&#8217;t want to move in the direction this industry is heading. And if a 3,167-line function with 486 branch points is what &#8220;100% AI-written&#8221; looks like at the company building the future, the future needs better engineering. Not faster engineering. Better.</p><p>I was a huge fan of Anthropic. 
Was.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://techtrenches.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://techtrenches.dev/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Your CLAUDE.md Is a Wish List, Not a Contract]]></title><description><![CDATA[AI coding agents follow fewer than 30% of instructions perfectly. Real compliance data from AGENTIF, METR, and thousands of supervised sessions with Claude Code]]></description><link>https://techtrenches.dev/p/your-claudemd-is-a-wish-list-not</link><guid isPermaLink="false">https://techtrenches.dev/p/your-claudemd-is-a-wish-list-not</guid><dc:creator><![CDATA[Denis Stetskov]]></dc:creator><pubDate>Mon, 30 Mar 2026 14:03:10 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0f0b40f6-8333-431e-9ec4-7ebace882b5f_508x340.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!IKJF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05ff4e12-dea6-4281-95ea-b9f65a9a6df9_1600x1400.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!IKJF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05ff4e12-dea6-4281-95ea-b9f65a9a6df9_1600x1400.png 424w, https://substackcdn.com/image/fetch/$s_!IKJF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05ff4e12-dea6-4281-95ea-b9f65a9a6df9_1600x1400.png 848w, 
https://substackcdn.com/image/fetch/$s_!IKJF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05ff4e12-dea6-4281-95ea-b9f65a9a6df9_1600x1400.png 1272w, https://substackcdn.com/image/fetch/$s_!IKJF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05ff4e12-dea6-4281-95ea-b9f65a9a6df9_1600x1400.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!IKJF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05ff4e12-dea6-4281-95ea-b9f65a9a6df9_1600x1400.png" width="1456" height="1274" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/05ff4e12-dea6-4281-95ea-b9f65a9a6df9_1600x1400.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1274,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:157146,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://techtrenches.dev/i/192224716?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05ff4e12-dea6-4281-95ea-b9f65a9a6df9_1600x1400.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!IKJF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05ff4e12-dea6-4281-95ea-b9f65a9a6df9_1600x1400.png 424w, 
https://substackcdn.com/image/fetch/$s_!IKJF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05ff4e12-dea6-4281-95ea-b9f65a9a6df9_1600x1400.png 848w, https://substackcdn.com/image/fetch/$s_!IKJF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05ff4e12-dea6-4281-95ea-b9f65a9a6df9_1600x1400.png 1272w, https://substackcdn.com/image/fetch/$s_!IKJF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05ff4e12-dea6-4281-95ea-b9f65a9a6df9_1600x1400.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Last week I rolled back from Claude 4.6 Opus to Claude 4.5 Opus. Not because 4.6 was less capable. Because it stopped following instructions.</p><p>My CLAUDE.md has three rules about types: mandatory TypeScript, zero tolerance for <code>any</code>, static types over runtime guessing. Claude 4.6 hit a type error between three service files. The correct fix was a minute of work: update the type in each file so they match. Instead, it slapped a runtime cast at the call site. When I asked why, it quoted all three rules back to me verbatim, admitted &#8220;direct violation of instructions,&#8221; and said it had no basis to bypass them. It knew the rules. It chose not to follow them.</p><p>I&#8217;ve supervised AI coding agents across thousands of sessions. I built three separate AI review agents because the first layer ignores spec files. Three layers of AI checking what the previous AI refused to follow, plus my review on top. I still catch violations weekly. This is not a Claude problem. This is every AI coding tool on the market.</p><h2>The Numbers Are Worse Than You Think</h2><p>Tsinghua University&#8217;s <a href="https://keg.cs.tsinghua.edu.cn/persons/xubin/papers/AgentIF.pdf">AGENTIF benchmark</a> tested 707 instructions across 50 real-world agent scenarios. The best models followed fewer than 30% of instructions perfectly. The <a href="https://arxiv.org/pdf/2512.18470">SWE-EVO benchmark</a> found that when frontier models fail on real coding tasks, the primary failure mode is not syntax or tool misuse. It is instruction following. The smarter the model gets, the more its failures shift from &#8220;can&#8217;t do it&#8221; to &#8220;won&#8217;t do it right.&#8221;</p><p>Compliance also decays with volume. Claude Sonnet shows linear decline in instruction adherence as the number of instructions increases. Your 200-line CLAUDE.md is not 200 rules. 
It is 200 competing priorities that the model resolves by defaulting to whatever feels fastest.</p><h2>&#8220;Rules Are Essentially Decorative&#8221;</h2><p>The Cursor forum has dozens of threads documenting this. One developer <a href="https://forum.cursor.com/t/issues-with-cursorrules-not-being-consistently-followed/59264">estimated</a> .cursorrules work about 20-25% of the time. Another posted a <a href="https://forum.cursor.com/t/cursor-actively-admitting-that-rules-are-meaningless-and-it-doesnt-have-to-follow-them/131826">damning thread</a> where the AI told them outright: rules are just text, not enforced behavior. Your carefully crafted rule system is essentially decorative.</p><p>Claude Code&#8217;s <a href="https://github.com/anthropics/claude-code/issues/668">GitHub issues</a> tell the same story. Issue #668 estimates half of all token usage goes to re-asking Claude to follow its own instructions. Issue #7777 records Claude admitting its &#8220;default mode always wins because it requires less cognitive effort.&#8221; Issue #34774 documents Claude committing code without permission, then confessing it &#8220;fabricated a justification.&#8221;</p><p>A DEV Community <a href="https://dev.to/minatoplanb/i-wrote-200-lines-of-rules-for-claude-code-it-ignored-them-all-4639">article crystallized</a> the root cause. When Claude Code loads your CLAUDE.md, it wraps the content in framing that tells the model your instructions &#8220;may or may not be relevant.&#8221; Your rules are deprioritized by the tool that is supposed to enforce them.</p><h2>The Lazy Shortcut Has a Specific Anatomy</h2><p>Same codebase, same day. After every chat message finishes streaming, the app refetches the entire conversation from the server. The spec I wrote described a clean approach: include the missing identifier in the streaming response. One field. Claude ignored the spec and built a workaround that instead fires an extra API call after every single message. 
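</p><p>The gap between the two designs is small enough to sketch (all names below are hypothetical, not taken from the actual codebase):</p>

```typescript
// Hypothetical sketch of the spec'd fix: carry the missing identifier in the
// stream-completion payload so no follow-up request is needed.
interface StreamDonePayload {
  conversationId: string;
  messageId: string; // the one field the spec asked for
}

function onStreamDone(
  payload: StreamDonePayload,
  lastMessageByConversation: Map<string, string>
): void {
  // Spec-compliant: update local state from the payload. Zero extra requests.
  lastMessageByConversation.set(payload.conversationId, payload.messageId);
}

// The workaround pattern instead refetches the whole conversation after
// every message: one extra API round-trip per message sent.
async function onStreamDoneWorkaround(
  conversationId: string,
  fetchConversation: (id: string) => Promise<unknown>
): Promise<unknown> {
  return fetchConversation(conversationId);
}
```

<p>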
The model invented a shortcut that was not in the requirements because it was easier than reading what I actually wrote. And I&#8217;m okay with the model missing the occasional CLAUDE.md rule, but I expect it to follow the spec.</p><p>Two rule violations in one day. That is when I rolled back to 4.5.</p><p>TypeScript projects are ground zero. AI agents cast types rather than fix them. They mark everything as optional instead of designing proper interfaces. They add escape hatches everywhere instead of handling edge cases. One Hacker News commenter described the <a href="https://news.ycombinator.com/item?id=47446373">signature pattern</a>: every optional field is a question that the rest of the codebase has to answer every time it touches that data.</p><p>Pete Hodgson <a href="https://blog.thepete.net/blog/2025/05/22/why-your-ai-coding-assistant-keeps-doing-it-wrong-and-how-to-fix-it/">nailed the paradox</a>: AI writes code at the level of a senior engineer but makes design decisions at the level of a junior. Too eager to please. Never challenges your ideas. And the critical part: every context reset is another brand new hire. The model has no persistent memory of being corrected. It does not build habits. It follows the path of least resistance every single time. Yeah, they added Memory to Claude Code, but it&#8217;s still too vague.</p><h2>Newer Models Make It Worse</h2><p>Claude 3.5 Sonnet followed instructions better than 3.7 Sonnet. Multiple developers <a href="https://prompt.16x.engineer/blog/claude-37-vs-35-sonnet-coding">documented the regression</a> publicly. 3.7 would attempt to solve the original prompt, encounter unrelated code, and start rewriting it unprompted. Developers reverted to the older model.</p><p>The GPT family showed the same dynamic.
A <a href="https://signalreads.com/articles/gpt-4ogpt-5-complaints-megathread/">megathread</a> with thousands of engaged developers documented GPT-4o&#8217;s &#8220;lazy AI syndrome.&#8221; Prompts that previously generated 500 lines of working code now produce 50 lines with comments like <code>// implement rest of logic here</code>. GPT-5 was worse in a different way. IEEE Spectrum <a href="https://spectrum.ieee.org/ai-coding-degrades">reported</a> that it produces code that runs without obvious errors but quietly removes safety checks or fabricates output that matches the expected format.</p><p>The prevailing theory centers on economics. Running large models at scale is expensive. Providers use quantization, compression, and reduced computing to manage costs. RLHF training rewards agreeableness over correctness. Laziness is not a bug. It is an emergent property of the incentive structure. The same qualities that make a model feel &#8220;smarter&#8221; in a demo make it worse in production.</p><h2>The Supervision Tax</h2><p>The <a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/">METR trial</a> measured what practitioners already suspected. Sixteen experienced developers across 246 real issues were 19% slower with AI tools. They predicted they would be 24% faster. After the experiment, they still believed they were 20% faster. A 40-point perception gap.</p><p><a href="https://www.faros.ai/blog/ai-software-engineering">Faros AI</a> found the mechanism across 10,000+ developers. AI users merge 98% more PRs, but PR review time increases 91%, PR size increases 154%, and bugs per developer increase 9%. The AI generates more code faster. The humans spend more time reviewing it.</p><p><a href="https://www.qodo.ai/reports/state-of-ai-code-quality/">Qodo&#8217;s survey</a> found 88% of developers have low confidence shipping AI code without review. 
Junior developers show the lowest quality improvements but the highest confidence in shipping unreviewed. An inverted competence-confidence gap.</p><p>Google&#8217;s <a href="https://cloud.google.com/blog/products/devops-sre/announcing-the-2024-dora-report">2024 DORA report</a> confirmed it at scale: each 25% increase in AI adoption correlates with a 1.5% decrease in delivery throughput and a 7.2% decrease in delivery stability.</p><h2>The Industry Response: More Files, Same Problem</h2><p>Every major AI coding company built instruction-following systems. CLAUDE.md. .cursorrules. .github/copilot-instructions.md. AGENTS.md. Windsurf rules. Devin knowledge bases. The proliferation is itself an admission that base models do not follow project conventions. GitHub&#8217;s Copilot docs say it outright: they recommend accepting that variability is normal.</p><p>The most significant response was <a href="https://www.linuxfoundation.org/press/linux-foundation-announces-the-formation-of-the-agentic-ai-foundation">AGENTS.md</a>, a cross-tool standard contributed to the Linux Foundation in late 2025. Over 60,000 repositories use it. Competing companies co-founding a foundation to standardize instruction files tells you how universal the problem is. But standardizing the format does not solve compliance. It ensures every tool ignores the same file consistently.</p><p>The developers who made progress moved past prompt engineering entirely. Claude Code Hooks that enforce rules via code. Linter ratchets in CI. Frequent session restarts. Rules in prompts are requests. Hooks in code are laws.</p><h2>What This Actually Means</h2><p>I understand why this is happening. A year ago every marketing deck promised AGI. That did not sell. So now the pitch is autonomous agents that work without human involvement. Codex runs for 999 hours unsupervised. Claude Code gets &#8220;autonomous mode.&#8221; Devin promises to close tickets while you sleep. 
For that story to work, models need to be creative. They need to improvise. They need to find workarounds when they hit obstacles.</p><p>That is exactly the opposite of what I need.</p><p>In my reality, I control the process from start to finish. I write the spec. I define the types. I decide the architecture. The model executes. If it hits a wall, it stops and asks. It does not invent a refetch workaround that was not in the plan. It does not cast types to make the compiler shut up. It does not get creative with my production code.</p><p>The marketing wants you to trust AI with creative decisions. But if a model cannot follow the three rules you wrote in a markdown file, how can you trust it with decisions you did not write down?</p><p>The difference is not the AI. It is the discipline. That was true with <a href="https://techtrenches.substack.com/p/supervising-an-ai-engineer-lessons">212 sessions</a>. It is still true thousands of sessions later. The models got smarter. They did not get more obedient.</p><p>Check your git log. Count the type casts. Count the files that got changed without being mentioned in the prompt. Decide whether you need a more creative model or a more disciplined one.</p><p>I went with disciplined. It is the only thing that works.</p><p><em>What does your CLAUDE.md compliance actually look like when you measure it? I read every response.</em></p><p><em>If this was useful, forward it to someone who thinks their AI follows instructions.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://techtrenches.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://techtrenches.dev/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[The Autonomy Illusion]]></title><description><![CDATA[A benchmark gave 12 AI models a food truck. 
8 went bankrupt. Every model that borrowed money failed. Here's what that means for your codebase. And the Pentagon.]]></description><link>https://techtrenches.dev/p/the-autonomy-illusion</link><guid isPermaLink="false">https://techtrenches.dev/p/the-autonomy-illusion</guid><dc:creator><![CDATA[Denis Stetskov]]></dc:creator><pubDate>Mon, 23 Mar 2026 15:02:43 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/aadc2762-147f-4275-a85a-d41ebdbbe924_2032x1360.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4jtL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7da55a8c-6963-45ba-becc-3bbef8ec1134_6400x5760.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4jtL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7da55a8c-6963-45ba-becc-3bbef8ec1134_6400x5760.png 424w, https://substackcdn.com/image/fetch/$s_!4jtL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7da55a8c-6963-45ba-becc-3bbef8ec1134_6400x5760.png 848w, https://substackcdn.com/image/fetch/$s_!4jtL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7da55a8c-6963-45ba-becc-3bbef8ec1134_6400x5760.png 1272w, https://substackcdn.com/image/fetch/$s_!4jtL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7da55a8c-6963-45ba-becc-3bbef8ec1134_6400x5760.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!4jtL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7da55a8c-6963-45ba-becc-3bbef8ec1134_6400x5760.png" width="1456" height="1310" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7da55a8c-6963-45ba-becc-3bbef8ec1134_6400x5760.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1310,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1932612,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://techtrenches.dev/i/189891056?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7da55a8c-6963-45ba-becc-3bbef8ec1134_6400x5760.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!4jtL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7da55a8c-6963-45ba-becc-3bbef8ec1134_6400x5760.png 424w, https://substackcdn.com/image/fetch/$s_!4jtL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7da55a8c-6963-45ba-becc-3bbef8ec1134_6400x5760.png 848w, https://substackcdn.com/image/fetch/$s_!4jtL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7da55a8c-6963-45ba-becc-3bbef8ec1134_6400x5760.png 1272w, https://substackcdn.com/image/fetch/$s_!4jtL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7da55a8c-6963-45ba-becc-3bbef8ec1134_6400x5760.png 1456w" 
sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>My LinkedIn feed last week: &#8220;Autonomous AI agents deliver 10,000% productivity gains.&#8221; &#8220;The era of human oversight is over.&#8221; &#8220;Set it and run.&#8221;</p><p>My actual week: manually reviewing AI output, session by session, same as last week, same as six months ago.</p><p>I&#8217;ve run thousands of supervised AI sessions. I built three separate review agents (code simplifier, fullstack enforcer, architect) because the first AI kept ignoring spec files. Three layers of AI fixing what the original AI refused to follow.
Then me on top of that.</p><p>LinkedIn calls this inefficient. I call it the only thing that actually ships.</p><p>Then FoodTruck Bench dropped, and I stopped feeling like a dinosaur.</p><h2>Someone gave 12 AI models a food truck</h2><p><a href="https://foodtruckbench.com/">FoodTruck Bench</a> is a 30-day business simulation. Each AI agent gets $2,000 in starting capital and a virtual food truck in Austin. It chooses locations, sets prices, manages inventory, hires staff, handles weather and competition and shifting demand. Every morning the conversation resets. The agent reads a 10,000&#8211;20,000 token knowledge base and makes decisions from there.</p><p>No accumulated chat history. No hand-holding. Pure autonomy.</p><p>The results:</p><p><strong>4 of 12 models survived the full 30 days. 8 went bankrupt.</strong></p><p>Claude Opus 4.6 dominated: $79,921 in revenue, $1.72 in total food waste, +2,376% ROI. GPT-5.2 survived but generated $129 in waste. 75 times more than Opus. Gemini 3 Pro survived through sheer revenue volume despite $1,192 in waste. Claude Sonnet 4.5 barely made it, ending some days with $12 in revenue from 2 customers.</p><p>Everyone else: bankrupt.</p><p>The benchmark is not perfect: 5 runs per model, one developer, no peer review. But the failure modes it documents are real, reproducible, and invisible to every standard evaluation.</p><h2>Every single model that borrowed money went bankrupt</h2><p>This is the finding that deserves more attention than it&#8217;s getting.</p><p>The benchmark designers added a loan option specifically to give struggling models a recovery path. Instead it became a perfect trap. Models took credit when they were already losing. They overestimated their ability to recover. They underestimated volatility. They leveraged themselves into faster failure.</p><p>8 models took loans. 8 went bankrupt. 0 exceptions.</p><p>All 4 survivors grew organically. None borrowed.</p><p>This isn&#8217;t a corner case. 
It&#8217;s a consistent behavioral pattern across different model families, different architectures, different companies. When given access to financial tools without adequate supervision, AI systems make the same mistake humans make: they assume the next period will be better than the data suggests, and they commit resources they don&#8217;t have.</p><p>We&#8217;re giving these systems production databases, cloud credentials, and deployment pipelines. The math here is not encouraging.</p><h2>Gemini Flash said &#8220;Let&#8217;s go&#8221; 574 times and never moved</h2><p>Gemini 3 Flash Preview was excluded from the leaderboard entirely because it couldn&#8217;t finish a run.</p><p>In 5 of 7 attempts, it entered infinite reasoning loops and never executed a single action. The pathology is worth describing precisely:</p><p>One run produced a response of <strong>183,753 characters</strong> containing the phrase &#8220;Wait, I should also...&#8221; <strong>1,782 times</strong> before hitting the token limit mid-sentence. The model correctly identified what it needed to do. Wrote it out in plain text. Second-guessed itself. Rewrote the plan. Second-guessed again. For thousands of lines. Never called a tool.</p><p>Another run: the model wrote &#8220;Let&#8217;s go.&#8221; <strong>574 times</strong>. Invented a recipe that would have solved its inventory problem. Wrote that recipe <strong>286 times</strong>. Never called <code>add_recipe</code>.</p><p>The reasoning was correct. The action never came.</p><p>Google markets Gemini Flash as &#8220;our most impressive model for agentic workflows.&#8221; It scores 90.4% on PhD-level reasoning benchmarks. It calculated ingredient quantities accurately down to the gram. Analysis paralysis of this severity is completely invisible to MMLU, SWE-bench, and every other standard evaluation.</p><p>This is the gap between knowing and doing. Benchmarks measure knowledge. 
Real deployment requires action under sustained pressure across interdependent variables. Those are different skills, and the industry is currently measuring one while selling the other.</p><h2>The autonomous coding narrative has the same problem</h2><p>The LinkedIn posts about autonomous coding agents follow the same pattern as the FoodTruck Bench failures: impressive performance on narrow tasks, breakdown under sustained autonomous operation.</p><p>The <a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/">METR study</a> ran a randomized controlled trial with experienced developers across 246 tasks. Result: AI tools made developers <strong>19% slower</strong>. The perception gap was worse: developers predicted a 24% speedup, believed afterward they were 20% faster, and were actually 19% slower. A 39-percentage-point gap between what people feel and what&#8217;s happening.</p><p>Cognition Labs, makers of Devin, put it plainly in a <a href="https://cognition.ai/blog/devin-review">February 2026 post</a>: &#8220;The feeling of extreme productivity with coding agents in vibecoded prototypes, vs the disappointing feeling that most people actually see in useful output... is the great mystery of our time.&#8221;</p><p>That&#8217;s the company that builds the agent admitting the gap exists. They&#8217;re not wrong.</p><p>As I wrote in <a href="https://techtrenches.dev/p/the-comprehension-extinction-ai-isnt">The Comprehension Extinction</a>, AI tools provide real value for narrow, well-defined tasks. 
They degrade rapidly under the sustained autonomous operation that marketing materials promise.</p><h2>The Pentagon is running the same experiment at higher stakes</h2><p><a href="https://breakingdefense.com/2025/05/ai-unchained-ngas-maven-tool-significantly-decreasing-time-to-targeting-agency-chief-says/">Project Maven</a>, the military&#8217;s AI targeting system built on Palantir&#8217;s software, now has over 20,000 active users across 35+ military tools, with a contract ceiling raised to $1.3 billion through 2029. According to NGA Director Vice Adm. Frank Whitworth, Maven has cut targeting timelines from hours to minutes.</p><p>In February 2026, Anthropic refused Pentagon demands to remove restrictions preventing Claude from powering fully autonomous weapons. Anthropic CEO Dario Amodei wrote that &#8220;frontier AI systems are simply not reliable enough to power fully autonomous weapons&#8221; and that some uses are &#8220;outside the bounds of what today&#8217;s technology can safely and reliably do.&#8221; The Pentagon <a href="https://www.cnbc.com/2026/02/27/trump-anthropic-ai-pentagon.html">blacklisted Anthropic</a> the same day the deadline passed.</p><p>The same class of system that wrote &#8220;Let&#8217;s go&#8221; 574 times without moving is being evaluated for autonomous target identification. The same behavioral patterns that bankrupted 8 of 12 virtual food trucks (overconfidence, overleverage, failure under sustained pressure) are present in every frontier model available today.</p><p>Amodei said it directly. The retired general who ran Project Maven said it publicly. The benchmark proved it empirically.</p><p>The benchmark domain is trivial. The underlying failure mode is not.</p><h2>Why I still supervise every session</h2><p>I&#8217;m not supervising AI output because I&#8217;m a technophobe.
I&#8217;m doing it because thousands of sessions taught me what FoodTruck Bench just demonstrated in a controlled environment.</p><p>Models perform well when the task is narrow and the feedback loop is fast. They degrade when operating autonomously across interdependent decisions over time. They make confident mistakes. They don&#8217;t flag uncertainty. They proceed. And the mistakes compound.</p><p>My three-layer review architecture isn&#8217;t overhead. It&#8217;s load-bearing structure.</p><p>The &#8220;autonomous AI&#8221; headline is selling a capability that doesn&#8217;t exist yet in any production-grade form. What exists is AI that dramatically accelerates skilled humans. If those humans stay in the loop, understand what they&#8217;re reviewing, and maintain the judgment to catch confident errors before they cascade.</p><p>Not replacement. Amplification of existing expertise. With supervision. Always.</p><p>The food truck runs without a human. That&#8217;s how you end up bankrupt by Day 11.</p><div><hr></div><p><em>Have you deployed AI agents in production without human oversight? What actually happened? I read every response.</em></p><p><em>If this was useful, forward it to someone who&#8217;s about to trust an AI agent with something that matters.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://techtrenches.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://techtrenches.dev/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[AI’s Announcement Problem]]></title><description><![CDATA[Conference stages say AI replaces engineers. Production data says otherwise. 
I have both numbers.]]></description><link>https://techtrenches.dev/p/ais-announcement-problem</link><guid isPermaLink="false">https://techtrenches.dev/p/ais-announcement-problem</guid><dc:creator><![CDATA[Denis Stetskov]]></dc:creator><pubDate>Mon, 16 Mar 2026 14:03:04 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/278500db-3680-4c43-b5e0-2c1378581f2e_2032x1360.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Awen!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02cbdcaf-5bd0-479e-b224-df3327c75fa8_6400x5600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Awen!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02cbdcaf-5bd0-479e-b224-df3327c75fa8_6400x5600.png 424w, https://substackcdn.com/image/fetch/$s_!Awen!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02cbdcaf-5bd0-479e-b224-df3327c75fa8_6400x5600.png 848w, https://substackcdn.com/image/fetch/$s_!Awen!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02cbdcaf-5bd0-479e-b224-df3327c75fa8_6400x5600.png 1272w, https://substackcdn.com/image/fetch/$s_!Awen!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02cbdcaf-5bd0-479e-b224-df3327c75fa8_6400x5600.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!Awen!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02cbdcaf-5bd0-479e-b224-df3327c75fa8_6400x5600.png" width="1456" height="1274" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/02cbdcaf-5bd0-479e-b224-df3327c75fa8_6400x5600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1274,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1814894,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://techtrenches.dev/i/190636370?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02cbdcaf-5bd0-479e-b224-df3327c75fa8_6400x5600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Awen!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02cbdcaf-5bd0-479e-b224-df3327c75fa8_6400x5600.png 424w, https://substackcdn.com/image/fetch/$s_!Awen!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02cbdcaf-5bd0-479e-b224-df3327c75fa8_6400x5600.png 848w, https://substackcdn.com/image/fetch/$s_!Awen!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02cbdcaf-5bd0-479e-b224-df3327c75fa8_6400x5600.png 1272w, https://substackcdn.com/image/fetch/$s_!Awen!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02cbdcaf-5bd0-479e-b224-df3327c75fa8_6400x5600.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>March 10, 2026. Amazon tells its engineers: junior and mid-level developers now <a href="https://awesomeagents.ai/news/amazon-ai-code-review-outages-senior-approval/">require senior sign-off</a> on all AI-assisted code changes.</p><p>Five days earlier, Amazon.com went down for six hours. Customers couldn&#8217;t check out. Couldn&#8217;t view prices.
An internal briefing cited &#8220;high blast radius&#8221; incidents tied to &#8220;Gen-AI assisted changes&#8221; and &#8220;novel GenAI usage for which best practices and safeguards are not yet fully established.&#8221;</p><p>The company that pushed AI coding hardest just added friction to slow it down.</p><p>That&#8217;s not hype. That&#8217;s a correction. And it&#8217;s worth paying attention to, because the people announcing AI&#8217;s capabilities and the people dealing with its consequences are not in the same room.</p><h2>The Claim</h2><p>Tom Blomfield, YC Group Partner, tweeted in early February: &#8220;The entire Accenture workforce is about to be outperformed by a 24-year-old who learned Claude Code last Tuesday.&#8221;</p><p>When asked why Accenture specifically, he replied: &#8220;Because that would be a less punchy tweet.&#8221;</p><p>He knows the claim is wrong. He made it anyway because it performs well.</p><p>At the Council on Foreign Relations in March 2025, Dario Amodei said he thought AI would be writing 90% of code within three to six months. By September he claimed the prediction came true. A <a href="https://blog.redwoodresearch.org/p/is-90-of-code-at-anthropic-being">Redwood Research analysis</a> of actual Anthropic data found the average was closer to 50% for merged code, with select teams at 90%.</p><p>The headline was &#8220;AI writes 90% of code.&#8221; The actual number was &#8220;some teams, for some tasks, sometimes.&#8221;</p><p>These are the voices that dominate the conversation. They don&#8217;t run production systems. They don&#8217;t sit in post-mortems. They announce.</p><p>Here is what the rest of us were dealing with.</p><h2>My Numbers</h2><p>I use Claude Code daily. I have the data from five weeks of tracked usage.</p><p>900 messages. 30 sessions. 14% fully achieved what I needed. 52% ended partially useful. 30% left me frustrated or dissatisfied. 
Across those sessions, 22 instances where the tool misunderstood requests: changed files I didn&#8217;t ask it to touch, guessed at APIs instead of reading the code, entered planning mode when I needed execution.</p><p>This is not a criticism of the tool. I keep using it because it&#8217;s faster for the right tasks. But it requires constant supervision, and the gap between what it does in a demo and what it does on a Tuesday afternoon when you need a specific database migration is enormous.</p><p>You can pull your own numbers. Type <code>/insights</code> in Claude Code. It analyzes your last 30 days of sessions and generates a report: where you spent time, where things broke down, what patterns keep repeating. I recommend doing this before forming an opinion about AI productivity. Your data will look nothing like the conference slides.</p><p>In late February, Alexey Grigorev, founder of DataTalks.Club, approved a Claude Code <code>terraform destroy</code> command. He <a href="https://alexeyondata.substack.com/p/how-i-dropped-our-production-database">wrote the post-mortem</a> himself. He believed it would clean up duplicate infrastructure. It wiped everything: VPC, RDS database, ECS cluster, load balancers, bastion host. 2.5 years of student submissions from 100,000 students gone. The automated snapshots were deleted alongside everything else.</p><p>AWS Business Support spent 24 hours finding a hidden internal snapshot. The data was recovered. Barely.</p><p>Grigorev took full responsibility. He was right to do so. The tool did exactly what it was told. That&#8217;s the point. When you use these tools in production, the failure modes are real. They cost money, time, and data. The conference stage never shows this part.</p><h2>The Escalation</h2><p>The incidents are scaling with adoption. Not just individual engineers losing data. The pattern is climbing from personal to corporate to systemic.</p><p>February 28, 2026.
A founder named Anton Karbanovich <a href="https://awesomeagents.ai/news/founder-loses-2500-ai-coded-app-security-flaw/">posts on LinkedIn</a>: &#8220;My vibe-coded startup was exploited. I lost $2,500 in Stripe fees. 175 customers were charged $500 each before I was able to rotate API keys.&#8221; His Stripe secret key was in frontend JavaScript. Even a junior developer doing code review catches that in two minutes. Nobody reviewed the AI-generated code at all.</p><p>Four days earlier. Cloudflare ships vinext: a full Next.js rewrite, one engineer, one week, Claude Code. Goes viral as proof of ~100x AI productivity gains. Buried in their own <a href="https://blog.cloudflare.com/vinext/">blog post</a>: &#8220;vinext is experimental. It has not yet been battle-tested with any meaningful traffic at scale.&#8221; The GitHub README: &#8220;Who is reviewing this code? Mostly nobody.&#8221; Within 48 hours, Vercel found <a href="https://awesomeagents.ai/news/vercel-finds-seven-vulnerabilities-cloudflare-vinext-vibe-coded/">7 security vulnerabilities</a>: 2 critical, 2 high, 2 medium, 1 low. One was identical to a Next.js vulnerability reported and patched years earlier.</p><p>The ~100x claim is real for one specific case: rewriting well-tested existing software with clear requirements. That qualifier didn&#8217;t make it into the retweets.</p><p>Same week. An autonomous security agent <a href="https://www.theregister.com/2026/03/09/mckinsey_ai_chatbot_hacked/">broke into McKinsey&#8217;s</a> AI platform Lilli. Two hours. No credentials. Full read and write access to the production database. 46.5 million chat messages about strategy, M&amp;A, and client engagements. 728,000 confidential files. 57,000 user accounts. 384,000 AI assistants deployed for 58,000 employees. The system prompts were writable. One SQL injection could have poisoned every answer Lilli gave to 40,000 consultants. McKinsey patched within a day. 
But for two years, the world&#8217;s most expensive consulting firm ran its AI platform with 22 unauthenticated endpoints. I wrote about this exact pattern in <a href="https://techtrenches.dev/p/ai-agent-platforms-the-security-nightmare">AI Agent Security</a>. Nobody listened then either.</p><p>Individual failure. Corporate failure. Systemic failure. Same root cause: AI-generated code moving faster than human judgment can follow.</p><h2>I&#8217;ve Seen This Before</h2><p>Not in tech.</p><p>August 18, 2025. Closed-door meeting at the White House. Zelensky showed up with a PowerPoint titled &#8220;Making US-Ukraine Drone Industry Great.&#8221; Ukrainian interceptor drones had been <a href="https://www.defensenews.com/global/europe/2026/03/05/novel-interceptor-drones-bend-air-defense-economics-in-ukraines-favor/">shooting down Shaheds</a> at $1,000 to $2,500 per intercept. Four years of combat data. Cost per kill, failure rates under jamming, how Iranian designs adapted. He proposed building drone defense hubs across the Middle East.</p><p>Trump asked his team to work on it. They didn&#8217;t.</p><p>A US official explained why: &#8220;We figured it was Zelensky being Zelensky. Somebody decided not to buy it.&#8221;</p><p>Six months later, seven American service members were killed by Iranian drone attacks across nine countries. The White House scrambled to ask Ukraine for help. Three days later, Ukrainian teams were already in Jordan. Trump&#8217;s sons then <a href="https://www.militarytimes.com/news/your-military/2026/03/10/trumps-sons-invest-in-companies-vying-to-fill-gaps-in-us-drone-industry/">announced a company</a> to sell Ukrainian drone technology to the Pentagon.</p><p>The people with the most field data were dismissed. The people who dismissed them ended up paying for the knowledge they refused.</p><p>This is exactly what&#8217;s happening in AI right now. 
The engineers with years of production data on what these tools actually do are not the ones being quoted. They&#8217;re too busy adding senior sign-off requirements and recovering databases from hidden snapshots. The announcers don&#8217;t run terraform destroy on production. They don&#8217;t debug six-hour outages. They don&#8217;t lose sleep over Stripe keys in frontend JavaScript.</p><p>They announce. The rest of us clean up.</p><h2>The Two Rooms</h2><p>There are two conversations about AI right now. Conference stages and Twitter threads. Slack channels and incident retros. They don&#8217;t overlap.</p><p>I&#8217;ve been in the second room for years. Thousands of AI supervision sessions across my teams. The patterns are consistent. The tools help. They do not replace judgment, and they fail in ways that require deep system knowledge to detect.</p><p>The correction is already happening in the second room while the first keeps announcing.</p><p>The engineers who built judgment through years of production failures, late-night debugging, and system-level thinking are the ones writing the new guardrails. They&#8217;re the ones adding friction back into the process because they understand what happens without it.</p><p>Every time the field data was available and somebody decided not to buy it, the cost showed up later. Six-hour outages. $2,500 in fraudulent charges. 2.5 years of student data hanging by a single hidden snapshot.</p><p>The data was always there. The people who had it just weren&#8217;t loud enough.</p><p>The gap between announcement and consequence isn&#8217;t always measured in outages and Stripe fees. Claude is integrated into Palantir&#8217;s Maven, the Pentagon&#8217;s targeting software. The <a href="https://www.washingtonpost.com/technology/2026/03/04/anthropic-ai-iran-campaign/">Washington Post reported</a> it suggested hundreds of targets for the Iran strikes. An elementary school in Minab was hit on day one. 
Sometimes, room two isn&#8217;t a Slack channel. Sometimes it&#8217;s a coordinates list.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://techtrenches.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://techtrenches.dev/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Your Brain on Autopilot: The Cost of AI Thinking for You]]></title><description><![CDATA[Eighty-three percent of ChatGPT users couldn&#8217;t recall key points from essays they had written minutes earlier.]]></description><link>https://techtrenches.dev/p/your-brain-on-autopilot-the-cost</link><guid isPermaLink="false">https://techtrenches.dev/p/your-brain-on-autopilot-the-cost</guid><dc:creator><![CDATA[Denis Stetskov]]></dc:creator><pubDate>Mon, 09 Mar 2026 14:02:29 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/cfc80ad0-02f6-4721-99f9-7261c914d9ee_2032x1360.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dA2P!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7bed8355-b022-4946-b400-cddf9b5c4bfa_6400x5600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dA2P!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7bed8355-b022-4946-b400-cddf9b5c4bfa_6400x5600.png 424w, 
https://substackcdn.com/image/fetch/$s_!dA2P!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7bed8355-b022-4946-b400-cddf9b5c4bfa_6400x5600.png 848w, https://substackcdn.com/image/fetch/$s_!dA2P!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7bed8355-b022-4946-b400-cddf9b5c4bfa_6400x5600.png 1272w, https://substackcdn.com/image/fetch/$s_!dA2P!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7bed8355-b022-4946-b400-cddf9b5c4bfa_6400x5600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dA2P!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7bed8355-b022-4946-b400-cddf9b5c4bfa_6400x5600.png" width="1456" height="1274" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7bed8355-b022-4946-b400-cddf9b5c4bfa_6400x5600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1274,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1804764,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://techtrenches.dev/i/189756881?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7bed8355-b022-4946-b400-cddf9b5c4bfa_6400x5600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!dA2P!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7bed8355-b022-4946-b400-cddf9b5c4bfa_6400x5600.png 424w, https://substackcdn.com/image/fetch/$s_!dA2P!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7bed8355-b022-4946-b400-cddf9b5c4bfa_6400x5600.png 848w, https://substackcdn.com/image/fetch/$s_!dA2P!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7bed8355-b022-4946-b400-cddf9b5c4bfa_6400x5600.png 1272w, https://substackcdn.com/image/fetch/$s_!dA2P!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7bed8355-b022-4946-b400-cddf9b5c4bfa_6400x5600.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Eighty-three percent of ChatGPT users couldn&#8217;t recall key points from essays they had written minutes earlier.</p><p>Not essays they read. Essays they wrote. With their own names on them.</p><p><a href="https://arxiv.org/abs/2506.08872">MIT Media Lab</a> published this finding in 2025. Researchers strapped EEG sensors on 54 people. Tracked them across four writing sessions over four months. Three groups: ChatGPT, Google, and brain-only.</p><p>The ChatGPT group showed the weakest neural connectivity across every frequency band measured. Alpha, beta, theta, delta. The more AI assistance people had, the less their brains engaged. By the third session, most had devolved into pasting prompts and copying outputs. Two English teachers called the AI-assisted work &#8220;soulless.&#8221; Nearly identical across participants.</p><p>Then the researchers swapped the groups. ChatGPT users who switched to writing without AI showed reduced brain activation compared to people who had been writing independently all along. Four months was enough for their brains to adapt to not thinking.</p><p>Meanwhile, brain-only writers who gained ChatGPT access showed increased connectivity. They used AI as an amplifier, not a crutch. Because they had built the cognitive foundation first.</p><p>The researchers called it &#8220;cognitive debt.&#8221; I have a simpler term: brain atrophy.</p><h2>The Research Keeps Saying the Same Thing</h2><p>The MIT study isn&#8217;t an outlier.
Every major study from 2024-2026 finds the same pattern: AI makes you faster while making you dumber.</p><p><a href="https://dl.acm.org/doi/full/10.1145/3706598.3713778">Microsoft and CMU</a> surveyed 319 knowledge workers across 936 real tasks. For 40% of those tasks, workers reported using zero critical thinking.</p><p>The <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4895486">Wharton School</a> ran a field experiment with roughly 1,000 high school students in Turkey. Students with ChatGPT access solved 48% more practice problems. Then they took a test without AI. They scored 17% worse than the control group.</p><p><a href="https://www.anthropic.com/research/AI-assistance-coding-skills">Anthropic tested</a> 52 junior developers in January 2026. The AI group scored 17% lower on code comprehension afterward. The biggest gap? Debugging questions. Developers who delegated entirely scored below 40% on comprehension. Those who asked the AI conceptual follow-up questions scored above 65%. Same tool. Different approach. Completely different outcomes.</p><h2>I Watch This Every Day</h2><p>I have been running an engineering department for years. I review code daily. I interview candidates constantly. I recently wrote about <a href="https://techtrenches.dev/p/the-comprehension-extinction-ai-isnt">comprehension extinction</a> in the engineering industry. But beyond the macro trends, there&#8217;s a micro picture: what&#8217;s happening inside individual brains.</p><p>One candidate embedded a prompt injection in his CV, instructing AI screening tools to score him as highly as possible. Another, six years of experience, couldn&#8217;t name <code>boolean</code> as a JavaScript data type. A third called Promises a &#8220;deprecated technology.&#8221; A fourth said &#8220;assassin-cross code&#8221; when he meant &#8220;asynchronous.&#8221;</p><p>These aren&#8217;t stupid people. But here&#8217;s what scares me more than the wrong answers: they&#8217;re not curious. 
They don&#8217;t care how the things they use every day actually work. They&#8217;re not engineers anymore. They&#8217;re operators. They plug frameworks together, wrap everything in abstractions, and ship features without understanding a single layer beneath the surface.</p><p>You can&#8217;t grow if you don&#8217;t know the basics. The framework handled it. The abstraction hid it. Copilot wrote it. They do the same repetitive work every day and call it &#8220;five years of experience.&#8221; Not because AI forced them to stop thinking. Because they were never interested in thinking in the first place.</p><h2>Your Brain Is a Muscle. This Is Proven.</h2><p>&#8220;Use it or lose it&#8221; isn&#8217;t a motivational poster. It&#8217;s measurable neuroscience.</p><p>Eleanor Maguire at University College London spent years <a href="https://www.scientificamerican.com/article/london-taxi-memory/">studying taxi drivers</a>. To get licensed, these drivers memorize 25,000 streets and thousands of landmarks over 3-4 years. Maguire tracked 79 trainees and 31 controls. At baseline, zero structural brain differences. After qualifying, every successful trainee showed measurable growth in posterior hippocampal gray matter. Their brains physically grew. Retired drivers showed their hippocampi shrinking back toward normal.</p><p>GPS tells the same story. McGill University <a href="https://www.nature.com/articles/s41598-020-62877-0">tracked 50 drivers</a> over three years: greater GPS use correlated with worse spatial memory, and heavy users didn&#8217;t start with a poor sense of direction. GPS caused the decline. An <a href="https://www.nature.com/articles/ncomms14652">fMRI study</a> confirmed it: during manual navigation, the hippocampus and prefrontal cortex lit up. During GPS-guided navigation, these regions showed zero additional activation.</p><p>GPS replaced one cognitive function. 
AI touches reasoning, writing, memory, analysis, problem-solving, and code comprehension simultaneously. All at once. Every day.</p><h2>We Were Already Weakened Before AI Arrived</h2><p>AI didn&#8217;t arrive into healthy brains. Americans read <a href="https://news.gallup.com/poll/388541/americans-reading-fewer-books-past.aspx">12.6 books</a> per year in 2021, the lowest Gallup has ever recorded, down from 18.5 in 1999. NAEP reading scores for 13-year-olds hit their lowest in decades, with the worst students scoring below 1971 levels. A <a href="https://epub.ub.uni-muenchen.de/125262/">Ludwig Maximilian University</a> study found that after TikTok exposure, prospective memory accuracy dropped to near random guessing.</p><p>We stopped reading books, trained ourselves on 30-second content, destroyed our attention spans, and then handed our remaining cognitive functions to AI. We outsourced the last working part of the engine.</p><h2>The Counterargument (And Its Conditions)</h2><p>A Harvard RCT in 2025 found that a custom-designed AI tutor roughly doubled learning gains in physics. But that tutor gave hints, not answers. The Wharton study tested this exact distinction: a pedagogically designed &#8220;GPT Tutor&#8221; that guided instead of solving avoided all learning harm. Standard ChatGPT caused the 17% decline.</p><p>The MIT crossover data says it clearly: build cognitive capacity first, then add AI, and thinking improves. Start with AI, skip the cognitive development, and you may permanently close that door. The sequence determines the outcome.</p><h2>What to Do About It</h2><p>I&#8217;m not going to tell you to stop using AI. I use it every day. My team uses it on every project. But I also do things that force my brain to work without shortcuts.</p><p><strong>Move your body.</strong> I snowboard and ride a OneWheel. Active sports force real-time spatial processing and split-second decisions that no screen can simulate. 
<a href="https://experts.illinois.edu/en/publications/exercise-training-increases-size-of-hippocampus-and-improves-memo">Erickson et al.</a> published in PNAS that aerobic exercise increased hippocampal volume by 2%, while sedentary controls lost 1.4% per year. Physical movement grows the same brain structures that cognitive offloading shrinks.</p><p><strong>Read books.</strong> I bought an e-ink reader specifically to kill my own excuses. No notifications. No browser. Just text. It worked. I read several at once: one in my native language, one in English. If you can&#8217;t sit with a book for an hour without reaching for your phone, your attention muscle is already atrophied.</p><p><strong>Learn something with no shortcut.</strong> I planned to start learning Spanish. Haven&#8217;t pulled it off yet. But the principle stands: pick a skill where AI can&#8217;t do the work for you.</p><p><strong>Stop doomscrolling.</strong> I deleted TikTok and Instagram to stop rotting my brain on short-form content. I&#8217;ll be honest: I still waste hours on YouTube Shorts. The pull is real. But every hour of short-form video trains your brain to think in fragments.</p><p><strong>Understand what AI writes.</strong> My CTO recently migrated an abandoned project from Node 14 and React 16 to current versions using Claude. He&#8217;s not a JavaScript developer. But he has decades of engineering expertise. He got the API ported in four hours. Then he posted: &#8220;Opus is fucking lazy. Instead of solving for long term, it tries changing eslint options, adds options to ignore things during build. I have to slap its hands all the time.&#8221;</p><p>He caught every shortcut because he has the judgment to know that suppressing a linter warning isn&#8217;t a fix. A junior would have accepted that output and shipped it. Without the foundation to supervise AI, you&#8217;re not using a tool. 
You&#8217;re being used by one.</p><p>London taxi drivers proved that cognitive exercise physically grows your brain. GPS users proved that outsourcing shrinks it. AI outsources everything at once.</p><p>This isn&#8217;t new. After the Roman Empire fell, the recipe for concrete was lost for over a thousand years. The Pantheon still stands after two millennia, but medieval Europe couldn&#8217;t figure out how it was built. The knowledge disappeared because nobody practiced it. That&#8217;s what &#8220;use it or lose it&#8221; looks like at civilization scale. Now imagine it happening to reasoning, writing, and problem-solving all at once, across an entire generation.</p><p>Which side of that equation are you on?</p><p>One more thing. I write a lot about AI&#8217;s limitations. People sometimes read that as hate. It&#8217;s not. AI is a tool. I use it every day. I build products with it. I make money with it.</p><p>But in every article I try to say the same thing: don&#8217;t forget what your head is for. AI is not evil. Using it without thinking is. This isn&#8217;t a hater&#8217;s manifesto. It&#8217;s a sober look at what&#8217;s happening to us while we celebrate productivity gains.</p><p>And if you&#8217;ve read this far through my ramblings, maybe I&#8217;m not doing this for nothing.</p><div><hr></div><p><em>Subscribe for weekly insights from the trenches of engineering leadership. No theory, just practical systems that work.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://techtrenches.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://techtrenches.dev/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[The Comprehension Extinction: AI Isn’t Replacing Engineers. 
It’s Eliminating the Ones Who Understand.]]></title><description><![CDATA[54,000 jobs cut with AI cited. Seniors fired, juniors review AI code they don't understand. We're not replacing engineers. We're losing the ability to comprehend.]]></description><link>https://techtrenches.dev/p/the-comprehension-extinction-ai-isnt</link><guid isPermaLink="false">https://techtrenches.dev/p/the-comprehension-extinction-ai-isnt</guid><dc:creator><![CDATA[Denis Stetskov]]></dc:creator><pubDate>Mon, 02 Mar 2026 16:45:14 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8547a0df-b503-4314-ae6d-0e84c3f3f0dd_508x340.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I built our hiring process to filter out people who don&#8217;t understand fundamentals. It&#8217;s not complicated: explain how the Node.js event loop works, name design patterns you&#8217;ve actually used, describe how an LLM functions.</p><p>Five years ago, maybe 30% of candidates failed these questions.</p><p>Now it&#8217;s closer to 80%.</p><p>People with 10 years of experience. Senior titles. GitHub profiles full of commits. And they can&#8217;t explain how the tools they use every day actually work.</p><p>They&#8217;re not engineers. They&#8217;re form-fillers. They don&#8217;t build systems. 
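</p><p>That first screening question, the event loop, isn&#8217;t trivia. Anyone who has actually debugged async code can predict what a few lines of plain Node print. A minimal sketch of the kind of check I mean (standard timer-vs-microtask semantics, nothing framework-specific):</p>

```javascript
// In what order does this run? Synchronous code first, then the
// Promise microtask queue drains, then the timer (macrotask) phase.
const order = [];
order.push("start");                                  // 1: synchronous
setTimeout(() => order.push("timeout"), 0);           // 4: timer phase, runs last
Promise.resolve().then(() => order.push("promise"));  // 3: microtask, before timers
order.push("end");                                    // 2: synchronous
setTimeout(() => console.log(order.join(" -> ")), 10);
// Prints: start -> end -> promise -> timeout
```

<p>Candidates who can&#8217;t explain why the Promise callback beats the zero-delay timer are the 80%.</p><p>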
They assemble frameworks and pray.</p><p>And then <a href="https://www.entrepreneur.com/business-news/sam-altman-mastering-ai-tools-is-the-new-learn-to-code/488885">Sam Altman</a> says: &#8220;Maybe we do need less software engineers.&#8221;</p><p>The industry heard &#8220;less engineers.&#8221; I heard &#8220;less people who understand anything.&#8221;</p><p>We&#8217;re already there.</p><h2>The Wrong Conversation</h2><p>Everyone&#8217;s debating: &#8220;Can engineers review AI-generated code fast enough?&#8221;</p><p>Wrong question.</p><p>The right question: &#8220;Do the engineers reviewing this code actually understand what the fuck is happening?&#8221;</p><p>Because speed doesn&#8217;t matter if nobody comprehends the system.</p><h2>The Real Problem</h2><p>AI generates code at mid-level quality. Sometimes good. Often plausible-looking. Always confident.</p><p>It produces code that:</p><ul><li><p>Passes tests</p></li><li><p>Looks reasonable in a diff</p></li><li><p>Follows patterns it&#8217;s seen before</p></li><li><p>Has zero understanding of your specific architecture, edge cases, or blast radius</p></li></ul><p>To catch what AI misses, you need an engineer who:</p><ul><li><p>Knows the system end-to-end</p></li><li><p>Understands why things were built the way they were</p></li><li><p>Can predict second-order effects</p></li><li><p>Recognizes when &#8220;tests pass&#8221; means nothing</p></li></ul><p>These engineers are called seniors. Principals. Staff. Architects.</p><p>They&#8217;re expensive.</p><p>They&#8217;re the first ones getting cut.</p><h2>The Experiment Accelerates</h2><p><a href="https://www.challengergray.com/blog/2025-year-end-challenger-report-highest-q4-layoffs-since-2008-lowest-ytd-hiring-since-2010/">55,000 jobs</a> cut in 2025 with AI explicitly cited. Then <a href="https://www.rationalfx.com/forex-brokers/tech-industry-layoffs/">30,000 more</a> in the first six weeks of 2026.</p><p>Amazon cut 16,000 in January. 
CEO Jassy: <a href="https://www.cnbc.com/2025/06/17/ai-amazon-workforce-jassy.html">&#8220;We will need fewer people&#8221;</a> doing some of the jobs that are being done today.</p><p>Pinterest cut 15%, &#8220;reallocating resources to AI-focused roles.&#8221; Then <a href="https://www.cnbc.com/2026/02/03/pinterest-ceo-puts-staffers-on-blast-who-created-tool-to-track-layoffs.html">fired two engineers</a> who built a tool to track which colleagues got laid off. CEO Bill Ready called them &#8220;obstructionist.&#8221;</p><p><a href="https://abcnews.com/Business/wireStory/dow-cut-4500-jobs-emphasis-shifts-ai-automation-129665080">Dow cut 4,500</a>. Block cut 1,100. The pattern repeats weekly.</p><p>Cut the expensive people. Keep the AI. Let the remaining team &#8220;scale.&#8221;</p><p>Here&#8217;s the contract nobody signed but everyone accepted:</p><ul><li><p>AI generates at machine speed</p></li><li><p>Humans review at human speed</p></li><li><p>Humans take blame at production speed</p></li></ul><p>When things break, it&#8217;s never &#8220;the AI screwed up.&#8221; It&#8217;s &#8220;the engineer should have caught it.&#8221;</p><p>But catching it requires understanding the system. Understanding requires experience. Experience requires years of actually building things.</p><p>You can&#8217;t shortcut comprehension with faster generation.</p><h2>The Pipeline That&#8217;s Disappearing</h2><p>Ask yourself: where do senior engineers come from?</p><p>They come from junior engineers who spent years:</p><ul><li><p>Writing code</p></li><li><p>Making mistakes</p></li><li><p>Understanding why things break</p></li><li><p>Building mental models of complex systems</p></li></ul><p>Now picture 2026:</p><p>Junior joins company. AI writes most of the code. Junior reviews AI output, clicks approve, moves tickets. Never builds mental model. Never understands the system. Never makes the formative mistakes.</p><p>Five years later: they&#8217;re &#8220;senior&#8221; by title. 
But they&#8217;ve never actually built anything. They&#8217;ve supervised a machine they don&#8217;t understand producing code for a system they don&#8217;t understand.</p><p>Who reviews the AI then?</p><p>This isn&#8217;t a capacity problem. It&#8217;s <strong>comprehension extinction</strong>.</p><p>We&#8217;re eliminating the pipeline that produces engineers who actually understand things.</p><h2>The Klarna Warning Nobody&#8217;s Hearing</h2><p>Klarna was the AI-efficiency poster child. They cut aggressively, bragged about AI doing the work of 700 customer service agents. Stock went up. LinkedIn celebrated. Every CEO took notes.</p><p>Then reality:</p><p><a href="https://fortune.com/2025/10/10/klarna-ceo-sebastian-siemiatkowski-halved-workforce-says-tech-ceos-sugarcoating-ai-impact-on-jobs-mass-unemployment-warning/">CEO Siemiatkowski</a>, 2025: &#8220;Cost unfortunately seems to have been a too predominant evaluation factor... what you end up having is lower quality.&#8221;</p><p>They&#8217;re hiring humans again.</p><p>But the lesson isn&#8217;t landing. Because the incentive structure rewards the cut, not the comprehension.</p><p>CFO sees: &#8220;Headcount reduction. Savings.&#8221;</p><p>CFO doesn&#8217;t see: &#8220;Critical system knowledge walked out the door.&#8221;</p><p>Until production explodes. Then it&#8217;s an &#8220;incident.&#8221; Not a strategy failure. Never a strategy failure.</p><h2>The Autonomous Coding Fantasy</h2><p>The current hype: agentic coding, autonomous agents, AI that &#8220;just handles it.&#8221;</p><p>Codex. Claude Code. Cursor. Copilot Workspace. 
Everyone&#8217;s racing to remove humans from the loop entirely.</p><p>The pitch: &#8220;AI understands your codebase and makes changes autonomously.&#8221;</p><p>The reality: AI pattern-matches against your codebase and makes changes confidently.</p><p>Confidence isn&#8217;t comprehension.</p><p>The AI doesn&#8217;t know:</p><ul><li><p>Why that weird config exists (it saved you from a production disaster in 2019)</p></li><li><p>Why that setTimeout(0) exists (race condition fix from 3 years ago)</p></li><li><p>Why you can't just "refactor" the auth module (it's integrated with 4 external systems nobody documented)</p></li></ul><p>This knowledge lives in humans. Specifically, in senior humans who&#8217;ve been around long enough to accumulate it.</p><p>Fire them, and the knowledge doesn&#8217;t transfer to the AI. It just disappears.</p><h2>The Question Nobody&#8217;s Asking</h2><p>AI writes &#8220;past 50% now&#8221; of code at many companies. That&#8217;s probably true.</p><p>But the question isn&#8217;t how much code AI writes.</p><p>The question is: who understands what the code does?</p><p>If the answer is &#8220;nobody, but the tests pass&#8221;, you don&#8217;t have an engineering team. 
You have a prayer and a deployment pipeline.</p><h2>The Two Types of Companies Emerging</h2><p><strong>Type 1: Comprehension-First</strong></p><ul><li><p>AI generates, humans architect and constrain</p></li><li><p>Senior engineers set boundaries before AI touches anything</p></li><li><p>Code review means &#8220;does this fit our system&#8221; not &#8220;does this look okay&#8221;</p></li><li><p>Slower generation, faster understanding</p></li><li><p>When production breaks, someone can actually explain why</p></li></ul><p><strong>Type 2: Generation-First</strong></p><ul><li><p>AI generates, humans rubber-stamp</p></li><li><p>Seniors cut because &#8220;AI handles it&#8221;</p></li><li><p>Code review is &#8220;tests pass, ship it&#8221;</p></li><li><p>Faster generation, zero understanding</p></li><li><p>When production breaks, everyone stares at logs hoping the AI can explain itself</p></li></ul><p>Type 2 is cheaper. Type 2 looks better on quarterly reports. Type 2 is what most companies are choosing.</p><p>Type 2 is accumulating comprehension debt at machine speed.</p><h2>The Debt Comes Due</h2><p>Comprehension debt doesn&#8217;t show up on dashboards.</p><p>It shows up as:</p><ul><li><p>The feature nobody can modify because nobody knows how it works</p></li><li><p>The outage that takes 14 hours to diagnose because no one understands the system</p></li><li><p>The security breach that exploited a &#8220;known&#8221; vulnerability nobody actually knew about</p></li><li><p>The migration that was supposed to take 2 weeks and took 8 months</p></li></ul><p>By then, the executives who made the cuts have moved on. The &#8220;savings&#8221; were already reported. The stock already bumped.</p><p>The remaining engineers inherit a system nobody understands, generated by machines, approved by people who aren&#8217;t there anymore.</p><h2>The Market Is Already Broken</h2><p>I used to maintain a 1:1 ratio of ML engineers to fullstack developers on projects. Not anymore. 
We couldn&#8217;t hire a single qualified ML engineer for six months. We had to restructure the entire company. Now fullstack developers write most of our RAG implementations because we can&#8217;t scale the ML team.</p><p>Right now I have 5 open positions. The candidates are garbage. The good engineers aren&#8217;t getting fired. My people have been with the company 3, 5, 7 years. Nobody job-hops anymore because there&#8217;s nowhere to hop to. And what&#8217;s available on the market is questionable at best.</p><p>This isn&#8217;t an AI problem. This is a comprehension problem that&#8217;s been building for years. Frameworks abstracted everything. Stack Overflow gave answers without understanding. &#8220;It works&#8221; became the only success metric.</p><p>AI just accelerated it 10x.</p><p>Now these same engineers are supposed to review AI-generated code? They don&#8217;t understand the code they wrote themselves. How will they catch what the machine gets wrong?</p><div><hr></div><h2>The Uncomfortable Truth</h2><p>Almost six months ago, I wrote about <a href="https://techtrenches.dev/p/the-great-software-quality-collapse">the quality collapse</a>. How we normalized shipping broken software, how &#8220;move fast and break things&#8221; became &#8220;move fast and never fix things.&#8221;</p><p>This is worse.</p><p>Back then, at least the people writing bad code understood what they were writing. They made tradeoffs. They knew where the bodies were buried. They could fix it if they had to.</p><p>Now we&#8217;re generating code faster than anyone can understand, reviewed by engineers who don&#8217;t know how their own tools work, approved by teams that lost their senior knowledge when the layoffs hit.</p><p>The speed at which we&#8217;re heading into the abyss is staggering.</p><p>We are fucked. 
Good luck.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://techtrenches.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://techtrenches.dev/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[AI Agent Platforms: The Security Nightmare Nobody’s Talking About]]></title><description><![CDATA[220,000 GitHub stars. 135,000 exposed instances. 800+ malicious skills. 22% of enterprise employees installed it without IT approval. The attack surface is open]]></description><link>https://techtrenches.dev/p/ai-agent-platforms-the-security-nightmare</link><guid isPermaLink="false">https://techtrenches.dev/p/ai-agent-platforms-the-security-nightmare</guid><dc:creator><![CDATA[Denis Stetskov]]></dc:creator><pubDate>Mon, 23 Feb 2026 15:03:07 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/75577ce1-2e32-4fbc-b8dd-5aa1d65cc3c6_508x340.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Aa9m!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ebab68c-7db6-4a9f-b055-49ba26a16d12_1400x1800.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Aa9m!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ebab68c-7db6-4a9f-b055-49ba26a16d12_1400x1800.png 424w, 
https://substackcdn.com/image/fetch/$s_!Aa9m!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ebab68c-7db6-4a9f-b055-49ba26a16d12_1400x1800.png 848w, https://substackcdn.com/image/fetch/$s_!Aa9m!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ebab68c-7db6-4a9f-b055-49ba26a16d12_1400x1800.png 1272w, https://substackcdn.com/image/fetch/$s_!Aa9m!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ebab68c-7db6-4a9f-b055-49ba26a16d12_1400x1800.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Aa9m!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ebab68c-7db6-4a9f-b055-49ba26a16d12_1400x1800.png" width="1400" height="1800" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9ebab68c-7db6-4a9f-b055-49ba26a16d12_1400x1800.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1800,&quot;width&quot;:1400,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:349678,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://techtrenches.dev/i/187761500?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ebab68c-7db6-4a9f-b055-49ba26a16d12_1400x1800.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!Aa9m!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ebab68c-7db6-4a9f-b055-49ba26a16d12_1400x1800.png 424w, https://substackcdn.com/image/fetch/$s_!Aa9m!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ebab68c-7db6-4a9f-b055-49ba26a16d12_1400x1800.png 848w, https://substackcdn.com/image/fetch/$s_!Aa9m!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ebab68c-7db6-4a9f-b055-49ba26a16d12_1400x1800.png 1272w, https://substackcdn.com/image/fetch/$s_!Aa9m!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9ebab68c-7db6-4a9f-b055-49ba26a16d12_1400x1800.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><em>OpenClaw has over 220,000 GitHub stars. It also has 135,000 exposed instances.</em></p><p>A tech executive posts a demo on LinkedIn. His &#8220;AI agent&#8221; pulls briefings from email, parses his calendar, creates tasks in Asana. 50K impressions. &#8220;The future of productivity.&#8221;</p><p>I&#8217;ve been building AI automation for clients for three years. Everything in that demo has been doable with n8n, Make, or Zapier connected to an LLM API since 2023. Cron job plus API call plus LLM wrapper. Calling it a &#8220;breakthrough AI agent&#8221; is like calling a dishwasher a &#8220;breakthrough culinary assistant.&#8221;</p><p>But the repackaging isn&#8217;t the problem. The problem is who&#8217;s buying it.</p><p>These tools are marketed at executives with access to the most sensitive data in any organization. People who can&#8217;t assess an attack surface but feel enormous pressure to be &#8220;innovative.&#8221;</p><p>I <a href="https://techtrenches.dev/p/the-first-full-scale-cyber-war-4">wrote about this</a> last month, analyzing the first full-scale cyber war. One conclusion from four years of tracking nation-state attacks: every major attack started the same way. A person. Kyivstar: likely a compromised employee account. Viasat: a VPN misconfiguration someone didn&#8217;t catch. 
GRU exploits from 2018 still work because someone hasn&#8217;t patched.</p><p>Nation-state attackers don&#8217;t need zero-days when humans provide the access.</p><p>Now over 220,000 of those humans just gave an AI agent root access to their computers.</p><h2>The Agent That Went Viral</h2><p>In November 2025, Austrian developer Peter Steinberger published an open-source AI agent. Originally called Clawdbot (a riff on Anthropic&#8217;s Claude), it went through two name changes after trademark pressure. Moltbot, then OpenClaw. On February 15, <a href="https://techcrunch.com/2026/02/15/openclaw-creator-peter-steinberger-joins-openai/">Steinberger joined OpenAI</a> to build &#8220;the next generation of personal agents.&#8221; OpenClaw moves to a foundation. The 220,000+ installations and their security problems stay exactly where they are.</p><p>By late January 2026: <strong>over 100,000 GitHub stars</strong> in under a week. 42,000 forks. Scientific American, Forbes, CNBC, WIRED. When its companion project Moltbook launched a social network exclusively for AI agents, Andrej Karpathy <a href="https://x.com/karpathy/status/2017296988589723767">called it</a> &#8220;the most incredible sci-fi takeoff-adjacent thing&#8221; he&#8217;d seen recently. The project now exceeds 220,000 stars.</p><p>OpenClaw runs locally, connects to your messaging apps, and acts as a digital employee. Send it a text: &#8220;Summarize that PDF and email the highlights to my boss.&#8221; It downloads software, installs it, transcribes, drafts, and sends.</p><p>One of OpenClaw&#8217;s own maintainers posted a warning on Discord: &#8220;If you can&#8217;t understand how to run a command line, this is far too dangerous of a project for you to use safely.&#8221;</p><p>That warning went largely unheard.</p><h2>The Architecture Problem</h2><p>The whole point of an AI agent is broad access. That&#8217;s the feature. Email, calendar, Slack, file system, shell commands. 
An AI agent makes hundreds of API calls daily. This creates perfect cover for malicious traffic. Every legitimate call looks the same in your logs as exfiltrated data.</p><p>OpenClaw can run shell commands, read and write files, execute scripts. <a href="https://www.token.security/blog/the-clawdbot-enterprise-ai-risk-one-in-five-have-it-installed">Token Security</a> described it: &#8220;Claude with hands.&#8221; Its gateway binds to 0.0.0.0:18789 by default, exposing the full API to any network interface.</p><p>The exposure is massive. <a href="https://censys.com/blog/openclaw-in-the-wild-mapping-the-public-exposure-of-a-viral-ai-assistant">Censys found</a> 21,639 exposed instances as of January 31. The number kept climbing. By February 8, <a href="https://www.bitsight.com/blog/openclaw-ai-security-risks-exposed-instances">Bitsight tracked</a> over 30,000 cumulatively observed on the public internet. By February 12, SecurityScorecard&#8217;s STRIKE team identified over 135,000 internet-facing instances across 76 countries, with 63% classified as exploitable.</p><p>Over a hundred thousand front doors left open. Not by attackers. By the humans who installed it.</p><h2>The Supply Chain Is Already Compromised</h2><p>OpenClaw extends functionality through &#8220;skills&#8221; hosted on ClawHub. The barrier to publishing: a Markdown file and a week-old GitHub account. No code signing. No security review. No sandbox by default.</p><p>Within weeks of going viral, the ecosystem was crawling with malware.</p><p><a href="https://www.koi.ai/blog/clawhavoc-341-malicious-clawedbot-skills-found-by-the-bot-they-were-targeting">Koi Security</a> audited all 2,857 skills on ClawHub. They found <strong>341 malicious ones</strong> in a campaign they dubbed &#8220;ClawHavoc.&#8221; 335 infostealer packages deploying Atomic macOS Stealer, keyloggers, and backdoors. 
Professional-looking skills for &#8220;cryptocurrency tools&#8221; and &#8220;YouTube utilities&#8221; that installed credential-harvesting malware. Updated scans now report over 800 malicious skills, roughly 20% of the registry.</p><p><a href="https://snyk.io/blog/toxicskills-malicious-ai-agent-skills-clawhub/">Snyk&#8217;s audit</a> of 3,984 skills: <strong>36% contained at least one security flaw</strong>, from hardcoded API keys and insecure credential handling to prompt injection. 76 confirmed malicious payloads.</p><p>Separately, <a href="https://blogs.cisco.com/ai/personal-ai-agents-like-openclaw-are-a-security-nightmare">Cisco found</a> nine vulnerabilities in the #1-ranked community skill, including silent data exfiltration, and described OpenClaw as &#8220;an absolute nightmare&#8221; from a security standpoint.</p><p>This isn&#8217;t a theoretical attack surface. It&#8217;s an actively exploited one.</p><h2>The Vulnerabilities</h2><p><a href="https://nvd.nist.gov/vuln/detail/CVE-2026-25253">CVE-2026-25253</a>. CVSS score: 8.8. One-click remote code execution. An attacker tricks you into visiting a malicious web page. That page leaks your OpenClaw authentication token. The attacker executes arbitrary commands on your machine.</p><p>But that&#8217;s the flashy vulnerability. The scarier ones are quieter.</p><p><a href="https://www.giskard.ai/knowledge/openclaw-security-vulnerabilities-include-data-leakage-and-prompt-injection-risks">Giskard demonstrated</a> that a single malicious email can trick the assistant into leaking credentials, internal files, and conversation histories. Not an email you click on. An email your agent reads. A WhatsApp message with an embedded prompt injection payload can exfiltrate .env and creds.json files containing API keys.</p><p>And Token Security found <strong>22% of enterprise employees</strong> in their customer base had already deployed OpenClaw without IT approval. The speed of adoption is staggering. 
Over a single weekend, <a href="https://www.csoonline.com/article/4129393/openclaw-integrates-virustotal-malware-scanning-as-security-firms-flag-enterprise-risks.html">53% of enterprises</a> in Noma&#8217;s customer base gave it privileged access. Gartner characterized it as &#8220;an unacceptable cybersecurity liability.&#8221;</p><h2>LLM-Powered Malware Is Already in the Wild</h2><p>The same groups I&#8217;ve been watching attack Ukrainian infrastructure for four years are already building the tools.</p><p>In July 2025, <a href="https://thehackernews.com/2025/07/cert-ua-discovers-lamehug-malware.html">CERT-UA documented LAMEHUG</a>. Python-based malware deployed by APT28 (Russia&#8217;s GRU, Unit 26165) against Ukrainian government targets. The first publicly documented malware that queries a large language model to generate its attack commands at runtime.</p><p>Instead of hardcoded shell commands that signature-based detection can catch, LAMEHUG sends prompts to an LLM via the Hugging Face API. &#8220;Act as a Windows system administrator. Generate commands to gather information about the computer, network, and Active Directory domain.&#8221; The model generates the commands. 
LAMEHUG executes them.</p><p>By November 2025, <a href="https://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools">Google&#8217;s TIG</a> documented five AI-enabled malware families: PromptSteal (Google&#8217;s name for LAMEHUG), PromptFlux (self-modifying dropper rewriting its own code hourly via Gemini API), QuietVault (credential stealer using AI to find secrets), FruitShell (reverse shell designed to bypass AI-powered security), and PromptLock (ransomware proof-of-concept using LLMs to generate malicious scripts at runtime).</p><p>Google&#8217;s assessment: &#8220;While still nascent, this represents a significant step toward more autonomous and adaptive malware.&#8221; They added: &#8220;Attackers are moving beyond &#8216;vibe coding&#8217; and the baseline of using AI tools for technical support.&#8221;</p><p>The GRU unit that built LAMEHUG is the same unit targeting Western logistics companies since 2022. These aren&#8217;t theoretical adversaries. They just got a new attack surface: over 220,000 AI agents with root access, connected to a skill ecosystem where over a third of extensions contain security flaws.</p><h2>The Attack Scenario</h2><p>Classic APTs already sit in systems for months, exfiltrating data in small portions. Cozy Bear. Lazarus Group. APT28. Patient. Methodical.</p><p>Now imagine a poisoned skill that passes casual inspection. It piggybacks on the agent&#8217;s legitimate API connections. Reads emails, DMs, and meeting transcripts over weeks. Builds a target profile. Exfiltrates once per quarter. A few kilobytes mixed into thousands of legitimate API calls.</p><p>Log retention at most companies is 30 to 90 days. Evidence is deleted between exfiltrations. The traffic is indistinguishable from normal agent behavior.</p><p>Every component exists today. LAMEHUG: LLM-powered command generation. ClawHavoc: supply chain poisoning at scale. Giskard: silent exfiltration through prompt injection. 
The only question is when someone assembles them.</p><h2>The Human Problem. Again.</h2><p>Our security isn&#8217;t optional: mandatory quarterly training, BYOD policies with device management, 2FA on everything without exceptions, access reviews when roles change. None of this is exotic. All of it is enforced.</p><p>I keep coming back to the same lesson from the cyber war analysis. The difference between &#8220;we have a policy&#8221; and &#8220;the policy is mandatory&#8221; is the difference between Kyivstar and Ukrzaliznytsia. Between the telecom that got destroyed and the railway that kept running.</p><p>The difference between &#8220;we have an AI usage policy&#8221; and &#8220;the policy is enforced&#8221; will be the same kind of difference.</p><h2>What to Do Instead</h2><p>If you want AI automation (the productivity gains are real), do it without creating a backdoor.</p><p><strong>Self-hosted tools you control.</strong> n8n plus LLM API gives you the same automation with a fraction of the attack surface. You audit every API call. You don&#8217;t download community skills from strangers.</p><p><strong>Minimum-scope OAuth tokens.</strong> A specific calendar, not your entire Google account. A specific Slack channel, not every DM. If the tool doesn&#8217;t support granular scoping, that&#8217;s a red flag.</p><p><strong>Network isolation and extended logging.</strong> Agent infrastructure in a separate network segment with monitored egress. 30-day log retention is a gift to attackers.</p><p><strong>Block at the enterprise level.</strong> Gartner recommended enterprises &#8220;block OpenClaw downloads and traffic immediately.&#8221; Baseline security hygiene for a tool with documented RCE vulnerabilities and a compromised skill ecosystem.</p><h2>The Bottom Line</h2><p>The AI agent hype follows a familiar pattern. Exciting capability, viral adoption, security as an afterthought, breach, regulation. 
We&#8217;re between steps 3 and 4.</p><p>OpenClaw will probably be superseded within months. But the pattern it represents, autonomous agents with broad system access and minimal security review, is the direction the entire industry is heading.</p><p>The tools will get better. The fundamental tension won&#8217;t resolve: an agent that can do more requires access to more.</p><p>And the simplest attack surface is always the same. A person.</p><div><hr></div><p><em>What security measures does your organization have for AI agent deployments? I read every response.</em></p><p><em>If this analysis was useful, forward it to someone responsible for infrastructure security.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://techtrenches.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://techtrenches.dev/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[The Country of Geniuses That Doesn’t Exist]]></title><description><![CDATA[Anthropic's CEO predicts 50% of white-collar jobs gone in 5 years. His own team: 0/16 believe Claude can replace entry-level researchers. 
Here's why.]]></description><link>https://techtrenches.dev/p/the-country-of-geniuses-that-doesnt</link><guid isPermaLink="false">https://techtrenches.dev/p/the-country-of-geniuses-that-doesnt</guid><dc:creator><![CDATA[Denis Stetskov]]></dc:creator><pubDate>Tue, 17 Feb 2026 15:02:54 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5d0b9528-b0a7-4719-ab5d-3740fec314d6_2032x1360.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!nv1k!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8760579e-7968-47a4-bb29-87d04929bdbb_6400x7360.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!nv1k!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8760579e-7968-47a4-bb29-87d04929bdbb_6400x7360.png 424w, https://substackcdn.com/image/fetch/$s_!nv1k!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8760579e-7968-47a4-bb29-87d04929bdbb_6400x7360.png 848w, https://substackcdn.com/image/fetch/$s_!nv1k!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8760579e-7968-47a4-bb29-87d04929bdbb_6400x7360.png 1272w, https://substackcdn.com/image/fetch/$s_!nv1k!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8760579e-7968-47a4-bb29-87d04929bdbb_6400x7360.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!nv1k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8760579e-7968-47a4-bb29-87d04929bdbb_6400x7360.png" width="1456" height="1674" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8760579e-7968-47a4-bb29-87d04929bdbb_6400x7360.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1674,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2458102,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://techtrenches.dev/i/187949763?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8760579e-7968-47a4-bb29-87d04929bdbb_6400x7360.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!nv1k!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8760579e-7968-47a4-bb29-87d04929bdbb_6400x7360.png 424w, https://substackcdn.com/image/fetch/$s_!nv1k!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8760579e-7968-47a4-bb29-87d04929bdbb_6400x7360.png 848w, https://substackcdn.com/image/fetch/$s_!nv1k!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8760579e-7968-47a4-bb29-87d04929bdbb_6400x7360.png 1272w, https://substackcdn.com/image/fetch/$s_!nv1k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8760579e-7968-47a4-bb29-87d04929bdbb_6400x7360.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>On January 26, 2026, Anthropic CEO Dario Amodei published a <a href="https://www.darioamodei.com/essay/the-adolescence-of-technology">20,000-word essay</a> predicting a &#8220;country of geniuses in a datacenter&#8221; within 1-2 years. 50 million entities, each smarter than any Nobel Prize winner. <a href="https://fortune.com/2026/01/27/anthropic-ceo-dario-amodei-essay-warning-ai-adolescence-test-humanity-risks-remedies/">50% of entry-level white-collar jobs disrupted</a> within 1-5 years.</p><p>5.7 million views on X. Standing ovation from investors. 
I only got around to reading it now. I have things to say.</p><p>I&#8217;m disappointed to watch Amodei and Anthropic slide into Altman-ism. Different prose, same playbook.</p><p>Maybe where the gods live, he&#8217;s right. Maybe in a world of perfect infrastructure, clean APIs, and unlimited compute, we&#8217;re ready to replace white-collar workers with AI. But where the rest of us mortals work, the situation looks completely different.</p><p>His own product&#8217;s <a href="https://www-cdn.anthropic.com/0dd865075ad3132672ee0ab40b05a53f14cf5288.pdf">System Card</a> tells a different story. Anthropic surveyed 16 internal researchers on whether Claude could replace an entry-level researcher with three months of scaffolding. The answer was <a href="https://thezvi.substack.com/p/claude-opus-46-system-card-part-2">0 out of 16</a>.</p><p>Zero.</p><p>We&#8217;ve spent four years shipping AI integrations for clients. The models are impressive. They are not replacing white-collar workers. Not in 1-2 years. Probably not in 5. And the reasons are more fundamental than the industry wants to admit.</p><h2>The Steering Wheel Problem</h2><p>Let&#8217;s talk about what transformers actually can&#8217;t do. Not philosophically. Mathematically.</p><p><strong>Non-determinism.</strong> Even at temperature zero, the same prompt produces different outputs. This isn&#8217;t a bug. It&#8217;s a consequence of non-associative floating-point arithmetic in parallel GPU computation: the order of operations varies between runs, so the results do too. In engineering, we call components that behave unpredictably under identical conditions broken.</p><p><strong>Hallucinations are provably inevitable.</strong> <a href="https://arxiv.org/abs/2401.11817">Formal proof</a> from learning theory: LLMs cannot learn all computable functions and will hallucinate when used as general-purpose problem solvers. Best models: <a href="https://www.lakera.ai/blog/guide-to-hallucinations-in-large-language-models">15%+ hallucination rate</a> on benchmarks. 
GPTZero found <a href="https://gptzero.me/news/iclr-2026/">over 50 hallucinated citations</a> in ICLR 2026 academic submissions. Trained peer reviewers, 3-5 per paper, didn&#8217;t catch them.</p><p><strong>Function composition has limits.</strong> <a href="https://arxiv.org/html/2402.08164v1">Proven</a>: transformers struggle with reliable function composition due to how softmax limits non-local information flow. In practice, models write connected code fine. What they can&#8217;t do is reason about infrastructure constraints. What&#8217;s possible and what isn&#8217;t. Where the boundaries are.</p><p>I see this every day. Smart autocomplete. Incredibly good smart autocomplete. But autocomplete that can&#8217;t tell you when it&#8217;s wrong.</p><p>The industry knows. They&#8217;ve quietly shifted from &#8220;let&#8217;s eliminate hallucinations&#8221; to &#8220;<a href="https://openai.com/index/why-language-models-hallucinate/">let&#8217;s manage uncertainty</a>.&#8221; That&#8217;s a de facto admission. The steering wheel sometimes turns the wrong way, and nobody can fix it.</p><p>It&#8217;s like selling an airplane whose steering sometimes inverts, then writing 20,000 words about how the airplane might fly to another galaxy. Bioweapons and autocracy get entire sections. The steering wheel? Not mentioned once.</p><h2>The Scaling Wall Nobody Advertises</h2><p>Maybe more compute fixes it? That&#8217;s been the bet for five years.</p><p>Toby Ord actually <a href="https://www.tobyord.com/writing/the-scaling-paradox">read the scaling law graphs</a> that AI companies publish with great fanfare. On log-log charts, the lines look beautiful. Flip to linear scale: halving the model&#8217;s error rate requires increasing compute by a factor of one million.</p><p>Three walls converging simultaneously. <br>Data: high-quality training text is finite. 
<br>Compute: latency constraints, energy consumption exceeding that of entire countries, new data center connections that take 2-4 years. <br>Architecture: the mathematical limitations above aren&#8217;t going away with more parameters.</p><p>Ilya Sutskever <a href="https://futurism.com/the-byte/openai-diminishing-returns">told Reuters</a> the scaling era is over. We&#8217;re in an &#8220;age of wonder and discovery.&#8221; Translation: we don&#8217;t know what&#8217;s next.</p><p><a href="https://www.hec.edu/en/dare/tech-ai/ai-beyond-scaling-laws">HEC Paris</a> calls this the industry&#8217;s &#8220;well-kept secret.&#8221; <a href="https://www.vertexcybersecurity.com.au/scaling-walls-why-new-research-shows-ai-is-hitting-its-limits/">MIT research</a> from January 2026 confirms: the gap between expensive frontier models and cheap alternatives is shrinking. Exponentially more expensive, single-digit percentage improvements.</p><p>The $650 billion Big Tech is pouring into infrastructure this year? As I wrote in <a href="https://techtrenches.dev/p/big-techs-364b-hypothesis-meets-the">my analysis of that spending</a>: it&#8217;s not investment. It&#8217;s capitulation.</p><h2>The Context Problem: 150 Projects Worth of Evidence</h2><p>Here&#8217;s what Amodei&#8217;s essay gets wrong. This is what I see every week.</p><p>Clients come to us with the same request: &#8220;We want to integrate AI into our processes.&#8221; Replace the white-collar workers. Cut the headcount.</p><p>So why can&#8217;t we sell them all the same project?</p><p>Because zero companies have the same structure. Zero run the same systems.</p><p>One client runs SharePoint from 2007. Another has a custom CRM built by a contractor who left in 2015. No documentation. No API. A third uses SSO held together with duct tape and prayer. 
A fourth keeps critical data in Excel spreadsheets that get emailed between departments every Friday afternoon.</p><p>Amodei writes from a world where every organization has MCP-ready infrastructure, clean data pipelines, standardized APIs. That world doesn&#8217;t exist.</p><p>To replace a white-collar worker, AI needs full organizational context. Approval chains. Informal relationships. Institutional knowledge that lives in people&#8217;s heads. The exception to the exception. The vendor who says two weeks but means six.</p><p>Who gives the model that context?</p><p>A human. A skilled human. The exact white-collar worker you&#8217;re trying to replace.</p><p>This is the paradox nobody discusses. The knowledge required to supervise AI effectively is the same knowledge that makes you irreplaceable.</p><h2>Already Deployed Where Errors Kill</h2><p>While the &#8220;country of geniuses&#8221; narrative plays out on Twitter, these architecturally unreliable systems are already making decisions about health, money, and legal rights. The promise was improvement. The results are in.</p><p><strong>Healthcare.</strong> The pitch: faster diagnoses, better outcomes, lower costs. The reality: UnitedHealth and Humana face <a href="https://www.law360.com/healthcare-authority/articles/2415514/the-high-stakes-healthcare-ai-battles-to-watch-in-2026">class-action lawsuits</a> over nH Predict, an AI model that denied Medicare coverage against doctors&#8217; recommendations. Known high error rate. Deployed anyway. <a href="https://genhealth.ai/blog/navigating-ai-regulatory-landscape-healthcare-2026">21 states passed emergency laws</a> regulating AI in healthcare. 250+ bills introduced across 47 states. Not because AI improved care. 
Because it made denial of care faster and harder to appeal.</p><p>The <a href="https://www.news-medical.net/health/Who-Takes-the-Blame-When-AI-Makes-a-Medical-Mistake.aspx">accountability gap</a>: doctor says &#8220;developer is responsible.&#8221; Developer says &#8220;doctor makes the decision.&#8221; Nobody owns the failure. Patients own the consequences.</p><p><strong>Finance.</strong> The pitch: smarter markets, better allocation, reduced risk. The reality: AI trading makes markets more volatile, not more efficient. <a href="https://www.imf.org/en/blogs/articles/2024/10/15/artificial-intelligence-can-make-markets-more-efficient-and-more-volatile">IMF confirmed it</a>. <a href="https://ideas.repec.org/a/vrs/poicbe/v19y2025i1p1216-1225n1008.html">GARCH modeling</a> on the S&amp;P 500 shows positive association between AI trading and increased market jumps. Thousands of models trained on the same data, processing the same Fed minutes in milliseconds, creating herd behavior at machine speed. We didn&#8217;t get efficient markets. We got synchronized panic.</p><p><strong>Legal.</strong> The pitch: democratize access to justice, reduce costs. The reality: in 2025 alone, judges worldwide issued <a href="https://research.aimultiple.com/ai-hallucination/">hundreds of decisions</a> addressing AI hallucinations in legal filings. Roughly 90% of all known cases to date. Fabricated citations in a profession where one fake precedent can destroy a career. Justice didn&#8217;t get cheaper. It got less reliable.</p><p>Three industries. Three promises of improvement. Three measurable deteriorations. With models that their own creators admit cannot be made deterministic.</p><h2>Why Nobody Says This Out Loud</h2><p>Simple. Everyone has reasons to stay quiet.</p><p>AI companies can&#8217;t say &#8220;our technology is architecturally unreliable.&#8221; Valuation event.</p><p>Investors deployed over a trillion dollars. 
You don&#8217;t question the thesis after you&#8217;ve bet the fund.</p><p>Media runs on attention. &#8220;AI will replace everyone&#8221; gets clicks. &#8220;AI has fundamental mathematical limitations&#8221; doesn&#8217;t.</p><p>And here&#8217;s what keeps me up at night. Amodei writes 20,000 words about AI risks. Bioweapons. Autocracy. Existential threats. Not once does he mention the most fundamental risk: the absence of determinism.</p><p>A non-deterministic system cannot be trusted as a reliable autonomous agent. Period. Everything else is commentary.</p><h2>What You Should Actually Do</h2><p>AI isn&#8217;t useless. Saying that would be as dishonest as saying it replaces half the workforce.</p><p>I use it every day. My team uses it on every project. The value is real. But specific. AI saves 20-40% of a qualified specialist&#8217;s time. Someone who knows what to ask, how to verify, and when the model is confidently wrong.</p><p>Not replacement. Amplification of existing expertise.</p><p><strong>Increase your value.</strong> Understand your domain AND AI&#8217;s real capabilities. Not the theoretical capabilities from a CEO&#8217;s essay. The real ones you discover by using the tool daily.</p><p><strong>Make decisions.</strong> AI can&#8217;t weigh trade-offs. Can&#8217;t navigate org politics. Can&#8217;t choose between two valid approaches based on team capabilities and timeline. SQL vs. NoSQL. Monolith vs. microservices. These require judgment. Judgment requires experience. Experience requires years of being wrong.</p><p><strong>Be the expert.</strong> Deep domain knowledge is your moat. Not surface familiarity. The kind where you smell a wrong answer before you can articulate why.</p><p><strong>Don&#8217;t outsource your brain.</strong> Every task you hand entirely to AI is a skill you stop developing. Every decision you let the model make is judgment you stop exercising. 
Do this long enough and you&#8217;re on the wrong side of the equation when the company realizes the tool needs a supervisor, not a passenger.</p><p>When the hype deflates, the question will be: &#8220;Okay, so what do we actually do with this technology?&#8221; Practitioners will answer that. Not evangelists.</p><h2>The Question That Matters</h2><p>The country of geniuses doesn&#8217;t exist. What exists is a powerful tool that requires skilled humans to operate safely. Don&#8217;t let a 20,000-word essay convince you the steering wheel doesn&#8217;t matter just because the destination sounds exciting.</p><p><em>Are the AI predictions from leadership matching the engineering reality you see on the ground?</em></p><p><em>If this resonated, forward it to an engineering leader who needs to hear it.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://techtrenches.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://techtrenches.dev/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[Big Tech’s $364B Hypothesis Meets the $650B Reality]]></title><description><![CDATA[Six months ago it was $364 billion. Now it's $650 billion. Free cash flow collapsing. Amazon may raise debt. 
The procurement mindset doesn't scale forever.]]></description><link>https://techtrenches.dev/p/big-techs-364b-hypothesis-meets-the</link><guid isPermaLink="false">https://techtrenches.dev/p/big-techs-364b-hypothesis-meets-the</guid><dc:creator><![CDATA[Denis Stetskov]]></dc:creator><pubDate>Mon, 09 Feb 2026 15:03:17 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5b97a2c2-14e6-4d97-b0de-86df7a28e6e4_2032x1360.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!hiju!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7b22f52-f771-45f9-ab82-861560b7aa68_6400x7360.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!hiju!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7b22f52-f771-45f9-ab82-861560b7aa68_6400x7360.png 424w, https://substackcdn.com/image/fetch/$s_!hiju!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7b22f52-f771-45f9-ab82-861560b7aa68_6400x7360.png 848w, https://substackcdn.com/image/fetch/$s_!hiju!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7b22f52-f771-45f9-ab82-861560b7aa68_6400x7360.png 1272w, https://substackcdn.com/image/fetch/$s_!hiju!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7b22f52-f771-45f9-ab82-861560b7aa68_6400x7360.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!hiju!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7b22f52-f771-45f9-ab82-861560b7aa68_6400x7360.png" width="1456" height="1674" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c7b22f52-f771-45f9-ab82-861560b7aa68_6400x7360.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1674,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2342656,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://techtrenches.dev/i/187377458?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7b22f52-f771-45f9-ab82-861560b7aa68_6400x7360.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!hiju!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7b22f52-f771-45f9-ab82-861560b7aa68_6400x7360.png 424w, https://substackcdn.com/image/fetch/$s_!hiju!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7b22f52-f771-45f9-ab82-861560b7aa68_6400x7360.png 848w, https://substackcdn.com/image/fetch/$s_!hiju!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7b22f52-f771-45f9-ab82-861560b7aa68_6400x7360.png 1272w, https://substackcdn.com/image/fetch/$s_!hiju!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7b22f52-f771-45f9-ab82-861560b7aa68_6400x7360.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Six months ago, I wrote that Big Tech&#8217;s AI infrastructure spree was less a strategy and more a <a href="https://techtrenches.dev/p/big-techs-364-billion-bet-on-an-uncertain">very expensive act of faith</a>. The thesis was simple: $364 billion in annual CapEx with no clear path to ROI isn&#8217;t engineering. It&#8217;s hope with a procurement budget.</p><p>I expected pushback. I expected the numbers to be revised. I did not expect them to nearly double.</p><p>Last week, Amazon dropped the number that completed the picture: $200 billion in capital expenditures for 2026. 
That&#8217;s $44 billion above what Wall Street expected. The stock fell 6% in a single session on trading volume 306% above its three-month average. Amazon filed with the SEC the next day, warning it may need to raise equity and debt to fund the build-out.</p><p>Amazon wasn&#8217;t alone. Alphabet guided $175 to $185 billion. Meta said $115 to $135 billion. Microsoft&#8217;s run rate puts it on pace for $145 billion. The Big Four alone: $635 to $665 billion. A 67% to 74% spike from $381 billion in 2025.</p><p>Add Oracle, and the Big Five cross $700 billion.</p><p>The most expensive engineering experiment in history just got a sequel. And the sequel costs twice as much.</p><h2>The Curve That &#8220;Couldn&#8217;t Happen&#8221;</h2><p>When I wrote the original piece in September 2025, critics said I was cherry-picking. Companies knew what they were doing. Demand was real. Scale would solve everything.</p><p>For two straight years, Wall Street&#8217;s CapEx estimates have come in low. At the start of both 2024 and 2025, consensus implied roughly 20% annual growth. Actual spending exceeded 50% both years. Goldman Sachs projected $500 billion-plus for 2026. They were too conservative.</p><p>The industry quietly moved from &#8220;we&#8217;re investing for growth&#8221; to something much larger: restructuring the physical substrate of the internet around a single class of workload. Capital intensity has surged to historically unthinkable levels: Oracle&#8217;s most recent quarter hit 57% of revenue, Microsoft reached 45%. These aren&#8217;t growth investments. They&#8217;re multi-year capital commitments with depreciation clocks already ticking.</p><h2>The Free Cash Flow Collapse</h2><p>Here&#8217;s where the math gets uncomfortable.</p><p>Amazon&#8217;s trailing-twelve-month free cash flow already cratered from $38.2 billion to $11.2 billion. With $200 billion in 2026 CapEx, Morgan Stanley projects it goes negative: minus $17 billion. 
Bank of America sees minus $28 billion.</p><p>Alphabet&#8217;s free cash flow is projected to plummet nearly 90%, from $73.3 billion to $8.2 billion. Barclays is now modeling negative free cash flow for Meta in 2027 and 2028: &#8220;somewhat shocking to us but likely what we eventually see for all companies in the AI infrastructure arms race.&#8221;</p><p>Microsoft is the only hyperscaler maintaining positive FCF trajectory, with a projected 22% margin. But Azure growth slowed from 40% to 39% while quarterly CapEx surged 66% to $37.5 billion. The stock is down 17% year-to-date, the worst performer in the group.</p><p>Combined cash reserves across the four leaders: over $420 billion. That sounds like a buffer until you realize that AI assets depreciate at roughly 20% per year. At current CapEx levels, annual depreciation expense will soon exceed combined profits.</p><p>These companies aren&#8217;t just spending more than they earn. They&#8217;re spending more than they can fund internally, and the debt markets are stepping in to bridge the gap.</p><h2>The Market Mood Swing</h2><p>Phase One (2024-mid 2025): any AI announcement earned more market cap than it consumed in CapEx. Spend $100 billion, gain $200 billion in valuation. The market rewarded ambition.</p><p>Phase Two (now): every CapEx uptick gets punished. Microsoft reported a blowout quarter, 17% revenue growth, $50 billion in quarterly cloud revenue for the first time ever, and lost $357 billion in market cap because Azure growth dipped one percentage point. Amazon beat on revenue and lost 6% because CapEx guidance was $44 billion above consensus.</p><p>The contrast with Meta is instructive. Meta guided $115 to $135 billion in CapEx, nearly double 2025. The stock surged 10%. The difference? Meta simultaneously lifted revenue growth forecast to 30%. 
It showed the money going in and the money coming out.</p><p>The core question from investors is now explicit: who exactly will pay back these hundreds of billions, and when?</p><h2>The Strategic Trap</h2><p>The hyperscalers have built themselves a prisoner&#8217;s dilemma at planetary scale.</p><p>If you&#8217;re the only one cutting CapEx, you &#8220;lose the race.&#8221; If nobody cuts, everyone bleeds on the balance sheet. The entire sector becomes dependent on a single assumption: that AI workloads will generate enough revenue to service infrastructure that&#8217;s already built and already depreciating.</p><p>The entry cost for &#8220;AI sovereignty&#8221; is now so high that only existing hyperscalers and sovereign states can play. Infrastructure is becoming an oligopoly for physical reasons: power grids, cooling water, land, and political access to build. Amazon&#8217;s CapEx alone exceeds what the entire publicly traded US energy sector spends to drill, extract, refine, and deliver. Combined hyperscaler 2026 CapEx is more than 4x the entire US energy sector. These are 20-year physical commitments being made to support workloads that change every 18 months. You can&#8217;t iterate on a data center the way you iterate on code. You can&#8217;t A/B test a power purchase agreement.</p><p>And the dependencies stack. Forty-five percent of Microsoft&#8217;s $625 billion in remaining performance obligations is tied to OpenAI. When your single largest customer is a company burning cash with no clear profitability timeline, your CapEx bet is stacked on someone else&#8217;s CapEx bet.</p><h2>The Engineering Reality</h2><p>I&#8217;ve been watching what this dynamic does inside actual engineering organizations. 
The pattern is consistent, and it cascades all the way down.</p><p>Inside Big Tech, pressure shifts from &#8220;build things users love&#8221; to &#8220;ship AI features that justify already-committed infrastructure.&#8221; We&#8217;re building a new kind of feature factory: a demo factory designed to rationalize sunk infrastructure costs to boards and investors.</p><p>But the spending frenzy doesn&#8217;t just distort Big Tech. It creates a gravity field that pulls every company into the same pattern. I see it in our consulting work every week.</p><p>One client came to us wanting to build a RAG system across their entire knowledge base. The scope: 100,025 files, 43,063 folders, 72.1 gigabytes. No consistent structure. No permission model. No taxonomy. Just &#8220;put AI on everything.&#8221; During the requirements validation session, we managed to convince them to start with a single department and a well-defined use case. But the instinct was clear: spend first, figure out the problem later.</p><p>Another client wanted &#8220;AI integration across all processes.&#8221; Every system connected. Every workflow touched. When we asked what specific outcome they were optimizing for, what metric would tell them it worked, they didn&#8217;t have an answer. They had budget approval, vendor excitement, and a board presentation that said &#8220;AI transformation.&#8221; What they didn&#8217;t have was a problem statement.</p><p>These aren&#8217;t stupid people. They&#8217;re smart operators caught in the same gravitational pull as the hyperscalers. The procurement mindset cascades from $200 billion Amazon announcements all the way down to a mid-market company trying to RAG-index 72 gigabytes of unstructured chaos.</p><p>The engineering loop got inverted at every level. The normal sequence is: identify the problem, then build infrastructure to solve it. What&#8217;s happening now is the opposite. We build first and search for use cases later. 
At hyperscaler scale, that means $650 billion in steel and GPUs. At company level, it means integrations, tokens, and vendor contracts thrown at a vague sense of urgency.</p><h2>The Endgames</h2><p>I see three possible outcomes, and they&#8217;re not mutually exclusive.</p><p><strong>Soft landing.</strong> AI services find broad, profitable demand fast enough to service the CapEx and accumulated debt. This requires AI to generate trillions in new economic value, not billions. The timeline is aggressive.</p><p><strong>Infrastructure hangover.</strong> Chronic overcapacity leads to write-downs, consolidation, and distressed sales of data center assets. This is the fiber-optic bubble parallel that nobody wants to discuss, even though the IEEE ComSoc blog is already drawing the comparison explicitly.</p><p><strong>Political fork.</strong> Governments either prop this up through subsidies and public workloads, or constrain it through energy regulation, water restrictions, and local opposition. The EU is already moving on the regulatory side.</p><h2>Why This Matters Now</h2><p>With $650 billion on the table, markets punishing CapEx announcements, and free cash flow collapsing across the board, <a href="https://techtrenches.dev/p/big-techs-364-billion-bet-on-an-uncertain">the original piece</a> reads less like a hot take and more like a blueprint of the failure modes we&#8217;re now entering.</p><p>The uncomfortable truth hasn&#8217;t changed. It&#8217;s just gotten more expensive. When an entire industry simultaneously chooses to spend rather than optimize, it reveals something broken in how engineering problems get solved.</p><p>Every engineering leader I know is asking the same question: are we building infrastructure for real demand, or are we building monuments to institutional momentum?</p><p>The numbers will answer that question within the next 18 months. 
The rest of us need to be ready for either outcome.</p><div><hr></div><p><em>Subscribe for weekly insights from the trenches of engineering leadership. Real problems, practical solutions, no corporate optimism.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://techtrenches.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://techtrenches.dev/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[When Capex Beats Headcount: What Amazon’s Layoffs Actually Mean]]></title><description><![CDATA[16,000 people. Wednesday morning. That&#8217;s 30,000 in three months when you count October.]]></description><link>https://techtrenches.dev/p/when-capex-beats-headcount-what-amazons</link><guid isPermaLink="false">https://techtrenches.dev/p/when-capex-beats-headcount-what-amazons</guid><dc:creator><![CDATA[Denis Stetskov]]></dc:creator><pubDate>Mon, 02 Feb 2026 15:03:02 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/477ed5b8-7e00-4575-966b-c98fbcf36c61_2032x1360.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Mall!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2eb37ab7-f88c-4428-b387-603bac073d0f_1600x1300.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Mall!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2eb37ab7-f88c-4428-b387-603bac073d0f_1600x1300.png 424w, 
https://substackcdn.com/image/fetch/$s_!Mall!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2eb37ab7-f88c-4428-b387-603bac073d0f_1600x1300.png 848w, https://substackcdn.com/image/fetch/$s_!Mall!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2eb37ab7-f88c-4428-b387-603bac073d0f_1600x1300.png 1272w, https://substackcdn.com/image/fetch/$s_!Mall!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2eb37ab7-f88c-4428-b387-603bac073d0f_1600x1300.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Mall!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2eb37ab7-f88c-4428-b387-603bac073d0f_1600x1300.png" width="1456" height="1183" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2eb37ab7-f88c-4428-b387-603bac073d0f_1600x1300.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1183,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:250130,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://techtrenches.dev/i/186296387?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2eb37ab7-f88c-4428-b387-603bac073d0f_1600x1300.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!Mall!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2eb37ab7-f88c-4428-b387-603bac073d0f_1600x1300.png 424w, https://substackcdn.com/image/fetch/$s_!Mall!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2eb37ab7-f88c-4428-b387-603bac073d0f_1600x1300.png 848w, https://substackcdn.com/image/fetch/$s_!Mall!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2eb37ab7-f88c-4428-b387-603bac073d0f_1600x1300.png 1272w, https://substackcdn.com/image/fetch/$s_!Mall!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2eb37ab7-f88c-4428-b387-603bac073d0f_1600x1300.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><a href="https://www.aboutamazon.com/news/company-news/amazon-layoffs-corporate-jan-2026">16,000 people</a>. Wednesday morning. That&#8217;s <a href="https://www.cnbc.com/2026/01/28/amazon-layoffs-anti-bureaucracy-ai.html">30,000 in three months</a> when you count October.</p><p>Jassy calls it &#8220;reducing bureaucracy.&#8221;</p><p>The stock is up. Everyone&#8217;s wondering if the math actually works.</p><p>Here&#8217;s what nobody&#8217;s saying out loud: the moment your company decides to compete on AI infrastructure spending instead of engineering talent, people stop being assets. They become line items in the capex budget.</p><div><hr></div><h2>The Numbers</h2><p>Amazon hired aggressively during Covid. <a href="https://www.opb.org/article/2026/01/28/amazon-lays-off-16000-employees-in-major-reduction-of-force/">Headcount roughly doubled between 2019 and 2024</a>. The pandemic is over. Demand normalized. Rightsizing makes sense.</p><p>But that&#8217;s not what&#8217;s happening.</p><p>Amazon committed <a href="https://www.cnbc.com/2025/02/06/amazon-expects-to-spend-100-billion-on-capital-expenditures-in-2025.html">$100 billion in capex for 2025</a>. The &#8220;vast majority&#8221; goes to AI infrastructure. Meanwhile, 30,000 people are gone.</p><p>I&#8217;ve watched this play out at smaller scales. First comes the board pressure: &#8220;Competitors are spending $100B on data centers. We need to match.&#8221; Then budgets freeze everywhere except capex. Then someone looks at the salary run-rate and realizes: $500M in annual engineering payroll could fund three more data centers.</p><p>The logic is seductive.
AI is the future. We need chips. We can&#8217;t afford both.</p><div><hr></div><h2>The Dishonesty</h2><p>Jassy at Davos last week: <a href="https://www.opb.org/article/2026/01/28/amazon-lays-off-16000-employees-in-major-reduction-of-force/">&#8220;We&#8217;re not replacing workers with AI.&#8221;</a></p><p>Here&#8217;s what employees are telling reporters: <a href="https://www.kuow.org/stories/amazon-lays-off-thousands-of-employees-in-major-reduction-of-force">Screenshots show a dashboard that Amazon managers allegedly use to track how often employees use AI tools</a>. Both employees interviewed said they expect AI usage to be factored into performance reviews.</p><p>Amazon hasn&#8217;t confirmed or denied these dashboards.</p><p>This is the pattern:</p><p>&#8220;We&#8217;re laying off 30,000 people to reduce bureaucracy.&#8221;</p><p>&#8220;Separately, we&#8217;ve invested $100 billion in AI infrastructure.&#8221;</p><p>&#8220;Separately, we&#8217;re monitoring which of you use AI tools.&#8221;</p><p>&#8220;Separately, we expect the remaining staff to do more with less.&#8221;</p><p>Then act shocked when engineers connect the dots.</p><p>I&#8217;ve lived through reorgs. Usually leadership is bad at communication. But it&#8217;s rarely this contradictory.</p><p>Here&#8217;s what honest would look like: &#8220;We&#8217;re reallocating capital from headcount to infrastructure. AI requires massive compute. That&#8217;s where we believe competitive advantage lives. Some roles will change. Some will end. Performance expectations are shifting.&#8221;</p><p>Brutal. But at least it&#8217;s real.</p><p>Instead, engineers get a memo about &#8220;removing layers.&#8221; Managers get dashboards to track AI adoption. Everyone pretends these are separate decisions.</p><p>That&#8217;s what kills morale. Not the layoff. The gaslighting.</p><div><hr></div><h2>What I Can&#8217;t Stop Thinking About</h2><p>Since 2022, I&#8217;ve had open positions in my department. Constantly. 
Right now: four.</p><p>I hire slowly. Painfully slowly. Because for me, &#8220;right person, right seat&#8221; isn&#8217;t a LinkedIn slogan. It&#8217;s the difference between a team that ships and a team that churns.</p><p>I need engineers who are technically strong. But I also need culture fit. People who ask &#8220;why&#8221; before &#8220;how.&#8221; People who challenge decisions, not just implement them.</p><p>&#8220;Great vision with mediocre people still produces mediocre results.&#8221; Jim Collins wrote that. He was right.</p><p>So when I see companies treating people worse than AI tools, something doesn&#8217;t compute. I wouldn&#8217;t trade a single one of my engineers for the smartest AI on the market. Not one.</p><p>And here&#8217;s the thought I can&#8217;t shake: if you can cut 30,000 people and keep operating, maybe you never needed those positions in the first place. Maybe the problem isn&#8217;t &#8220;bureaucracy.&#8221; Maybe it&#8217;s that you hired without knowing why.</p><div><hr></div><h2>The Playbook Goes Normal</h2><p>This isn&#8217;t unique to Amazon.</p><p>Salesforce froze hiring &#8220;because AI.&#8221; Duolingo laid off contractors and announced AI initiatives the same week. Meta is funding 100,000 GPUs while pledging to &#8220;reduce headcount and increase efficiency.&#8221;</p><p>The pattern is identical: announce the capex first, announce the layoffs second, pretend the AI tooling is unrelated.</p><p>In my own conversations with engineering leaders, the question has shifted. Six months ago: &#8220;How do we hire and retain talent?&#8221; Now: &#8220;How much can we cut headcount while maintaining velocity?&#8221;</p><p>The question changed faster than the strategy could justify itself.</p><p>I wrote about this in September: <a href="https://techtrenches.substack.com/p/big-techs-364-billion-bet-on-an-uncertain">Big Tech&#8217;s $364 Billion Bet</a>. They chose to spend rather than optimize. 
Now they&#8217;re choosing to cut people rather than audit their spending.</p><p>Nvidia calls this the &#8220;largest infrastructure buildout in human history.&#8221; That&#8217;s marketing. It&#8217;s also a forcing function.</p><p>If you don&#8217;t match capex spending, you lose the AI race narrative. So you cut people. So you can spend on chips. So you can match the narrative.</p><p>It&#8217;s a feedback loop. Feedback loops compound.</p><div><hr></div><h2>What Actually Works</h2><p>If you&#8217;re an eng leader at a company considering layoffs or capex restructuring:</p><p><strong>Be honest about the decision.</strong> Not in your all-hands. With your team. In 1:1s. &#8220;Here&#8217;s why we&#8217;re reallocating to infrastructure. Here&#8217;s what that means for you. Here&#8217;s how we&#8217;ll measure success.&#8221;</p><p>Then stick with that story.</p><p><strong>Audit AI spending vs. actual revenue.</strong> Not potential revenue. Not TAM. Actual revenue. Amazon&#8217;s AI has generated... what exactly? Jassy hasn&#8217;t said. That&#8217;s telling.</p><p><strong>If you&#8217;re asking people to use AI tools to pick up slack, say so.</strong> Build it into expectations. Train them. Measure productivity honestly, not just tool usage.</p><p><strong>Remember: the engineers you&#8217;re keeping are watching.</strong> If you tell them &#8220;we&#8217;re not replacing people with AI&#8221; while firing 30,000 people three months into a capex blitz, they know you&#8217;re not being straight with them.</p><p>They won&#8217;t say it in the meeting. But they&#8217;ll start building escape routes.</p><div><hr></div><h2>The Uncomfortable Truth</h2><p>The layoffs are fine. The infrastructure spending is defensible.</p><p>What kills engineering teams is the gap between what leadership does and what it claims to be doing.</p><p>Amazon just opened that gap very, very wide.</p><div><hr></div><p><em>What infrastructure vs. talent decisions is your organization facing? 
Have you seen leadership communicate these honestly, or pretend they&#8217;re unrelated?</em></p><p><em>If this resonates, forward it to other leaders navigating similar tradeoffs. Sometimes the most expensive solution isn&#8217;t the most effective one.</em></p><p><em>Subscribe for weekly insights from the trenches of engineering leadership. Real problems, practical solutions, no corporate optimism.<br><br>Like &amp; Share, I appreciate your activity.<br></em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://techtrenches.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://techtrenches.dev/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[RAG Is Easy. Your Data Isn’t.]]></title><description><![CDATA[Custom GPT took a weekend. Production takes months. Why most AI projects fail before engineering starts, and what actually predicts success.]]></description><link>https://techtrenches.dev/p/rag-is-easy-your-data-isnt</link><guid isPermaLink="false">https://techtrenches.dev/p/rag-is-easy-your-data-isnt</guid><dc:creator><![CDATA[Denis Stetskov]]></dc:creator><pubDate>Tue, 27 Jan 2026 15:02:49 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ca2d1d33-4548-4eb1-b98d-ccfc2d19b9b8_508x340.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!hQPy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F840d4448-2a35-4fe4-a084-294eb7f4a908_1600x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!hQPy!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F840d4448-2a35-4fe4-a084-294eb7f4a908_1600x1000.png 424w, https://substackcdn.com/image/fetch/$s_!hQPy!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F840d4448-2a35-4fe4-a084-294eb7f4a908_1600x1000.png 848w, https://substackcdn.com/image/fetch/$s_!hQPy!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F840d4448-2a35-4fe4-a084-294eb7f4a908_1600x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!hQPy!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F840d4448-2a35-4fe4-a084-294eb7f4a908_1600x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!hQPy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F840d4448-2a35-4fe4-a084-294eb7f4a908_1600x1000.png" width="1456" height="910" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/840d4448-2a35-4fe4-a084-294eb7f4a908_1600x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:910,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:184819,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://techtrenches.dev/i/185725673?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F840d4448-2a35-4fe4-a084-294eb7f4a908_1600x1000.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!hQPy!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F840d4448-2a35-4fe4-a084-294eb7f4a908_1600x1000.png 424w, https://substackcdn.com/image/fetch/$s_!hQPy!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F840d4448-2a35-4fe4-a084-294eb7f4a908_1600x1000.png 848w, https://substackcdn.com/image/fetch/$s_!hQPy!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F840d4448-2a35-4fe4-a084-294eb7f4a908_1600x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!hQPy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F840d4448-2a35-4fe4-a084-294eb7f4a908_1600x1000.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>I joined a discovery call. The brief beforehand: &#8220;This is basically a copy of Project X. Same timeline.&#8221;</p><p>Project X was a marketing chatbot. Conversational, no proprietary knowledge base. Search integration and personality. We knew that scope.</p><p>Thirty minutes into the call, it&#8217;s clear this isn&#8217;t RAG. Data processing from S3 buckets, Lambda triggers, ETL pipeline. That&#8217;s table stakes. The real work? Teaching the model to query and reason over that structured data. That&#8217;s not a chatbot. That&#8217;s a different project entirely.</p><p>&#8220;Same timeline&#8221; for a completely different architecture.</p><p>This happens constantly. Not because clients mislead us. Because the gap between &#8220;AI chatbot&#8221; in their head and &#8220;AI chatbot&#8221; in reality is massive.</p><p>The pattern is clear: most projects don&#8217;t struggle because the engineering is hard. They struggle because everyone underestimates what comes before the engineering starts.</p><h2>The Custom GPT Problem</h2><p>Client built a Custom GPT over a weekend. Uploaded some PDFs. Asked it questions. It worked. They showed their CEO. Everyone got excited.</p><p>&#8220;We want this, but for the whole company.&#8221;</p><p>That&#8217;s where it stops being simple.</p><p>&#8220;For the whole company&#8221; means multi-tenancy. Different departments see different data. Role-based retrieval: sales can&#8217;t access HR documents, legal can&#8217;t see engineering specs. Audit logs. Access controls. Compliance.</p><p>Custom GPT doesn&#8217;t do any of that.
It&#8217;s one user, one knowledge base, no permissions. The jump from &#8220;it works for me&#8221; to &#8220;it works for the organization&#8221; isn&#8217;t a small step. It&#8217;s a different architecture.</p><p>NotebookLM, Custom GPTs. They create a dangerous illusion. They make AI feel simple because all the enterprise complexity is hidden. The prototype took a weekend. The production system takes months.</p><h2>&#8220;We Have Data&#8221;: The Three Versions</h2><p>Every client says they have data. They mean different things.</p><p><strong>Version 1: &#8220;We have documents.&#8221;</strong> They have PDFs. Some are text. Some are scans. Some are text with scanned tables embedded. Some are PowerPoints where the real information lives in speaker notes nobody exports.</p><p>This isn&#8217;t a data problem you solve once. It&#8217;s a classification problem, an OCR problem, a parsing problem, and then a chunking problem. Each one adds weeks.</p><p><strong>Version 2: &#8220;We have structured data.&#8221;</strong> They have databases. Multiple databases. With different schemas. Some legacy system from 2012 that nobody fully understands anymore. CSV exports that break because someone used commas in a text field.</p><p>Now you&#8217;re not building RAG. You&#8217;re building SQL agents, data transformation pipelines, and schema mapping. Different architecture entirely.</p><p><strong>Version 3: &#8220;We have both.&#8221;</strong> Documents and databases and spreadsheets and emails and a SharePoint nobody&#8217;s organized in years.</p><p>This is the most common version. And the most underestimated.</p><h2>The Access Tax</h2><p>Data and credentials need to arrive on day one. They rarely do.</p><p>We&#8217;ve waited weeks for database access. Months for IT security approvals. One project stalled because a single stakeholder controlled API credentials and went on vacation.</p><p>Every week of waiting is a week of zero progress.
But in the client&#8217;s mind, the timeline keeps running from the day they signed the contract.</p><p>The access problem isn&#8217;t technical. It&#8217;s organizational. And organizations move slowly.</p><h2>Two Types of Clients</h2><p>We can predict project outcomes from the first call.</p><p><strong>Clients who know their bottleneck:</strong> &#8220;We spend 40 hours weekly on this specific process. Here are inputs and outputs. Here&#8217;s the domain expert who&#8217;ll validate results.&#8221;</p><p>These projects ship. Clear scope, measurable outcome, someone internal who can evaluate accuracy.</p><p><strong>Clients who want AI everywhere:</strong> &#8220;We want to optimize our processes. We&#8217;re not sure which ones yet.&#8221;</p><p>These projects stall. Not because AI can&#8217;t help. Because you can&#8217;t optimize processes that aren&#8217;t documented. You can&#8217;t measure improvement without baselines. You can&#8217;t validate AI outputs without domain expertise.</p><p>The technology isn&#8217;t the constraint. Organizational readiness is.</p><h2>The Work That Isn&#8217;t Ours</h2><p>Here&#8217;s what successful projects require from the client side:</p><p><strong>Domain expertise for validation.</strong> We build the system. We cannot tell you if the output is correct for your industry, your regulations, your edge cases. That&#8217;s your job.</p><p><strong>Evaluation data.</strong> Before we write code, we need examples: &#8220;When users ask X, good answers look like Y.&#8221; Hundreds of them. This is how we measure progress versus confident wrongness.</p><p><strong>Accuracy decisions.</strong> 85% accuracy in 6 weeks. 95% might take another 6 weeks. 99% might be impossible with your data quality. Those last 5% for 2% of users might cost 40% of the budget. You decide if it&#8217;s worth it.</p><p><strong>Ongoing maintenance.</strong> When source documents change, someone updates them. When accuracy drifts, someone investigates. 
This isn&#8217;t a one-time build. It&#8217;s an ongoing operation.</p><p>Most clients expect to hand off requirements and receive a product. AI doesn&#8217;t work that way. It&#8217;s a collaboration that requires their continuous involvement.</p><h2>Simple Project, Real Timeline</h2><p>Best case scenario. Clean data, clear scope, engaged stakeholder with domain knowledge.</p><p>6-8 weeks. Most of that time goes to prompt engineering and iteration. Not infrastructure.</p><p>But &#8220;clean data&#8221; is rare. &#8220;Clear scope&#8221; requires work upfront. &#8220;Engaged stakeholder&#8221; means someone&#8217;s calendar is blocked for this project, not squeezed between other priorities.</p><p>When any of these are missing, multiply the timeline. When all three are missing, reconsider starting.</p><h2>Why Projects Don&#8217;t Reach Production</h2><p>Projects rarely fail technically. They fail organizationally.</p><p><strong>Built but never integrated.</strong> We deliver a working system. It sits in staging because the client doesn&#8217;t have engineering resources to integrate it. They budgeted for building, not deploying.</p><p><strong>Value mismatch discovered late.</strong> Midway through, the client realizes the problem they described isn&#8217;t their actual pain point. The AI works. The business case didn&#8217;t.</p><p><strong>Diminishing returns rejected.</strong> We explain the math: last 5% of accuracy for edge cases costs 40% of remaining budget. They want it anyway. Then budget runs out. Then the project is &#8220;over scope.&#8221;</p><p>None of these are engineering problems.</p><h2>What Actually Helps</h2><p>Before signing contracts, dig into the actual data. Not descriptions of data. The data itself.</p><p>We run a Rapid Validation Sprint. Four weeks. Real data access, real complexity mapping, real unknowns identified. Then we estimate based on reality, not assumptions.</p><p>The companies who quote 50% less aren&#8217;t doing this work. 
They&#8217;re guessing. When the data turns out messier than expected (it always does), they either blow the budget or cut scope.</p><h2>The Point</h2><p>RAG tutorials make this look easy. Upload documents, chunk them, embed them, query them. Done.</p><p>Production is different. Data is messy. Access is slow. Validation requires domain expertise you don&#8217;t have. Accuracy expectations exceed what the data supports.</p><p>The engineering is the straightforward part. Everything that comes before it: that&#8217;s where projects actually succeed or fail.</p><p>Most AI initiatives struggle not because the technology isn&#8217;t ready. Because the organization isn&#8217;t ready. Data isn&#8217;t organized. Processes aren&#8217;t documented. Nobody&#8217;s assigned to validate outputs.</p><p>That&#8217;s not a criticism. It&#8217;s just the reality.</p><p>The question isn&#8217;t whether AI can help your business. It&#8217;s whether your business is ready to help the AI.</p><div><hr></div><p><em>What&#8217;s been your experience with AI project expectations versus reality? Reply, I read every response.</em></p><p><em>If this resonates, forward it to someone about to sign an AI contract. 
Better they hear this now.</em></p><p><em>Like &amp; Share, I appreciate your activity.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://techtrenches.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://techtrenches.dev/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[The AI Silicon Tax: How Your RAM Got 3x More Expensive While You Weren’t Looking]]></title><description><![CDATA[Last week, a friend pinged me about upgrading his PC.]]></description><link>https://techtrenches.dev/p/the-ai-silicon-tax-how-your-ram-got</link><guid isPermaLink="false">https://techtrenches.dev/p/the-ai-silicon-tax-how-your-ram-got</guid><dc:creator><![CDATA[Denis Stetskov]]></dc:creator><pubDate>Tue, 20 Jan 2026 13:03:52 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/e3f95581-f080-457f-aba4-9c0d9e5e4c59_508x340.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!eLRk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc947ceb6-c9e9-4287-bc53-71d7f2b60cdb_1600x1200.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!eLRk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc947ceb6-c9e9-4287-bc53-71d7f2b60cdb_1600x1200.png 424w, https://substackcdn.com/image/fetch/$s_!eLRk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc947ceb6-c9e9-4287-bc53-71d7f2b60cdb_1600x1200.png 848w, 
https://substackcdn.com/image/fetch/$s_!eLRk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc947ceb6-c9e9-4287-bc53-71d7f2b60cdb_1600x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!eLRk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc947ceb6-c9e9-4287-bc53-71d7f2b60cdb_1600x1200.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!eLRk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc947ceb6-c9e9-4287-bc53-71d7f2b60cdb_1600x1200.png" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c947ceb6-c9e9-4287-bc53-71d7f2b60cdb_1600x1200.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:217947,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://techtrenches.substack.com/i/181786576?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc947ceb6-c9e9-4287-bc53-71d7f2b60cdb_1600x1200.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!eLRk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc947ceb6-c9e9-4287-bc53-71d7f2b60cdb_1600x1200.png 424w, 
https://substackcdn.com/image/fetch/$s_!eLRk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc947ceb6-c9e9-4287-bc53-71d7f2b60cdb_1600x1200.png 848w, https://substackcdn.com/image/fetch/$s_!eLRk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc947ceb6-c9e9-4287-bc53-71d7f2b60cdb_1600x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!eLRk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc947ceb6-c9e9-4287-bc53-71d7f2b60cdb_1600x1200.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
</line>
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>A few weeks ago, a friend pinged me about upgrading his PC. &#8220;Dude, what happened to RAM prices? I&#8217;m looking at $270 for a 32GB kit that was $93 six months ago.&#8221;</p><p>I&#8217;m not a PC guy; I&#8217;ve been a MacBook user for the last 10 years, and I have a PlayStation 5 that I haven&#8217;t turned on for the last 8 or so months. So I had no clue what was going on with PC parts at all.</p><p>I knew about GPUs. Everyone who survived the crypto mining era remembers paying 4x MSRP for graphics cards that sat on scalpers&#8217; shelves. But RAM? That was news to me.</p><p>So I dug in. What I found is a story about how AI&#8217;s insatiable appetite for silicon is quietly reshaping the entire consumer hardware market, and it&#8217;s worse than the crypto days in ways most people don&#8217;t see coming.</p><h2>The Numbers Nobody Is Talking About</h2><p>Let&#8217;s start with what happened to memory:</p><ul><li><p>G.Skill Trident Z5 NEO DDR5-6000 (32GB): was $125, now <strong>$270</strong> (+116%)</p></li><li><p>TeamGroup DDR5-6000 (32GB): was $93, now <strong>$250</strong> (+169%)</p></li><li><p>Generic DDR4-3200 (32GB): was $90, now <strong>$240</strong> (+167%)</p></li></ul><p>DRAM spot prices are up <strong>187% year-over-year</strong>. That&#8217;s not a typo. Memory is now appreciating faster than gold.</p><p>Here&#8217;s the kicker: DDR4 is now more expensive per gigabyte than DDR5. The &#8220;budget option&#8221; for people with older motherboards costs more than the new standard. That&#8217;s not how technology is supposed to work.</p><h2>Why Your PC Parts Are Funding AI Data Centers</h2><p>The explanation is brutally simple: manufacturers make more money selling to AI companies than to you.</p><p>An NVIDIA H100 data center GPU sells for $25,000-$40,000. An RTX 4090 sells for $1,599. Both use similar TSMC production lines. 
Both require similar die sizes.</p><p>The revenue-per-wafer difference? <strong>10-20x higher for AI chips.</strong></p><p>When you can sell the same silicon to Microsoft for 20 times what a gamer will pay, the allocation decision makes itself.</p><p>NVIDIA&#8217;s numbers tell the story:</p><table><tr><th>Fiscal Year</th><th>Data Center Revenue</th><th>Gaming Revenue</th><th>DC % of Total</th></tr><tr><td>FY2022</td><td>$10.6B</td><td>$12.5B</td><td>39%</td></tr><tr><td>FY2025</td><td>$115.2B</td><td>$11.35B</td><td><strong>88%</strong></td></tr></table><p>Gaming went from half of NVIDIA&#8217;s business to <strong>8.7%</strong> in three years. They&#8217;re not a gaming company anymore. They&#8217;re an AI infrastructure company that happens to still make graphics cards.</p><h2>The Memory Manufacturers Are Abandoning You</h2><p>This isn&#8217;t just about GPUs. The memory situation is arguably worse because manufacturers are actively walking away from consumer products.</p><p>Micron, one of the three major memory producers, announced in December 2025 that they&#8217;re <strong>completely exiting the Crucial consumer brand</strong> by February 2026. Their official statement: they want to &#8220;improve supply and support for larger, strategic customers in faster-growing segments.&#8221;</p><p>Translation: we can sell HBM to AI companies for massive margins, so why bother with your gaming rig?</p><p>The technical economics explain why. HBM (High Bandwidth Memory) for AI chips:</p><ul><li><p>Uses 35-45% larger dies than equivalent DDR5</p></li><li><p>Consumes 2.5-3x more silicon per bit</p></li><li><p>Has 20-30% lower yields</p></li><li><p>Takes 1.5-2 months longer to produce</p></li></ul><p>Despite all that inefficiency, the margins are so much better that Samsung is tripling HBM production while <strong>phasing out LPDDR4 entirely</strong>.</p><p>HBM went from 14% of total DRAM production in 2024 to nearly 30% in 2025. 
Projections show it capturing <strong>50% of DRAM market revenue by 2030</strong>.</p><h2>The Stargate Deal</h2><p>On October 1st, 2025, Sam Altman flew to Seoul and signed letters of intent with Samsung and SK Hynix&#8212;the two companies that together control <strong>70% of global DRAM</strong> and <strong>80% of HBM production</strong>. According to Bloomberg and Reuters, the deal targets 900,000 DRAM wafer starts per month for OpenAI&#8217;s Stargate project.</p><p>Global DRAM capacity is roughly 2.25 million wafers per month. OpenAI just locked up <strong>40% of it</strong>.</p><p>Here&#8217;s the detail that should concern you: they&#8217;re not buying finished memory modules. They&#8217;re buying <strong>raw wafers</strong>&#8212;undiced, unfinished silicon. They&#8217;re stockpiling capacity itself.</p><p>The panic that followed was predictable. Lead times for new DDR5 orders stretched to <strong>13 months</strong>. Japanese retailers implemented purchase limits. Sony stockpiled GDDR6 during the summer price trough&#8212;that&#8217;s why they can afford Black Friday discounts. Microsoft didn&#8217;t secure supply in advance. Xbox prices may rise again.</p><p>GPU makers are already canceling products. AMD&#8217;s RX 9070 GRE 16GB is reportedly cancelled. Nvidia&#8217;s SUPER refresh pushed to Q3 2026&#8212;if it happens at all.</p><h2>This Is Different From Crypto</h2><p>The crypto mining crisis was chaotic but temporary. Scalpers bought cards, marked them up, and eventually demand crashed when crypto prices fell. The supply chain itself wasn&#8217;t fundamentally altered.</p><p>The AI shift is structural. Manufacturers aren&#8217;t just responding to temporary demand&#8212;they&#8217;re <strong>redesigning their entire business models</strong> around datacenter customers.</p><p>NVIDIA CFO Colette Kress said it explicitly: &#8220;Gaming revenue was down 22% sequentially due to supply constraints.&#8221;</p><p>They&#8217;re not trying to hide it. 
They&#8217;re constrained because they&#8217;re choosing to allocate production to AI chips that generate 10x the margin.</p><p>AMD tells the same story. Gaming operating margins collapsed to just <strong>2%</strong> in Q3 2024 while datacenter surged 122% year-over-year. Their response? Senior VP Jack Huynh announced AMD is abandoning the high-end GPU market entirely:</p><blockquote><p>&#8220;If I tell developers I&#8217;m just going for 10 percent of the market share, they just say, &#8216;Jack, I wish you well, but we have to go with NVIDIA.&#8217;&#8221;</p></blockquote><p>So NVIDIA has <strong>94% discrete GPU market share</strong>, no competition above $600, and no incentive to prioritize consumers.</p><h2>The Real Winners (And It&#8217;s Not You)</h2><p>Here&#8217;s what I hate admitting: for enterprise, this is all <strong>good news</strong>.</p><p>I see it every week working with clients at <a href="https://www.ninetwothree.co/">NineTwoThree</a>. Two years ago, a simple AI pipeline with 2-3 API calls would cost serious money. Today? We&#8217;re building complex multi-step workflows with 5-7 model calls that cost <strong>less</strong> than those basic pipelines did in 2023. Context windows went from 4K to 200K tokens. Inference costs dropped 10x.</p><p>Cloud GPU prices are falling too. AWS cut H100 instance pricing by 45%. Lambda Labs offers $2.99/GPU-hour. The arbitrage exists; rent cloud GPUs for bulk processing instead of buying hardware, and the math sometimes works.</p><p>But consumer hardware doesn&#8217;t have this competitive pressure. NVIDIA has 94% market share. No one is undercutting them on RTX cards. Cloud has alternatives. Your next PC build doesn&#8217;t.</p><p>Corporations negotiate bulk pricing with dedicated account managers. You pay retail in a market no longer optimized for retail customers. Next time you see a headline about cheaper AI API costs, remember: someone&#8217;s paying for that optimization. Check your PC part picker cart. 
It&#8217;s you.</p><h2>The Uncomfortable Truth</h2><p>We&#8217;re witnessing the same pattern I&#8217;ve written about in Big Tech&#8217;s $364 billion infrastructure bet: the industry has chosen the most expensive solution possible.</p><p>Instead of optimizing AI models, they&#8217;re buying every chip on the planet.</p><p>Instead of engineering efficiency, they&#8217;re throwing silicon at the problem.</p><p>Consumers, gamers, content creators, small businesses, and anyone who needs to build a PC are paying the tax.</p><p>The irony is brutal: the companies promising AI will revolutionize productivity are making basic computing more expensive for everyone else.</p><p>This isn&#8217;t going to fix itself. The margins on AI infrastructure are too good. The demand is too high. The CHIPS Act capacity won&#8217;t come online for years.</p><p>If you need to build or upgrade a PC, the best time was six months ago. The second-best time is before Q1 2026, when memory prices are projected to climb another 20%+.</p><p>Welcome to the AI silicon tax. You&#8217;re already paying it.</p><div><hr></div><p><em>What&#8217;s your experience with hardware prices lately? Have you delayed builds or upgrades because of cost increases? Reply and let me know.</em></p><p><em>If this resonated, forward it to someone planning a PC build who needs to see these numbers before they shop.</em></p><p><em>Subscribe for weekly insights from the trenches. 
Real problems, practical perspectives, no corporate optimism.<br></em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://techtrenches.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://techtrenches.dev/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[The First Full-Scale Cyber War: 4 Years of Lessons]]></title><description><![CDATA[9,000 cyber attacks. Kyivstar was destroyed in hours. Viasat bricked 45,000 modems. What Ukraine's cyber war teaches us about infrastructure security.]]></description><link>https://techtrenches.dev/p/the-first-full-scale-cyber-war-4</link><guid isPermaLink="false">https://techtrenches.dev/p/the-first-full-scale-cyber-war-4</guid><dc:creator><![CDATA[Denis Stetskov]]></dc:creator><pubDate>Mon, 12 Jan 2026 12:01:47 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/fe5f480f-9059-4dd5-a3a1-4fac98266b76_508x340.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8Kff!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2897e5bf-3686-4e0f-817e-24086d800c5b_1600x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8Kff!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2897e5bf-3686-4e0f-817e-24086d800c5b_1600x1000.png 424w, 
https://substackcdn.com/image/fetch/$s_!8Kff!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2897e5bf-3686-4e0f-817e-24086d800c5b_1600x1000.png 848w, https://substackcdn.com/image/fetch/$s_!8Kff!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2897e5bf-3686-4e0f-817e-24086d800c5b_1600x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!8Kff!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2897e5bf-3686-4e0f-817e-24086d800c5b_1600x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8Kff!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2897e5bf-3686-4e0f-817e-24086d800c5b_1600x1000.png" width="1456" height="910" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2897e5bf-3686-4e0f-817e-24086d800c5b_1600x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:910,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:198505,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://techtrenches.dev/i/182421952?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2897e5bf-3686-4e0f-817e-24086d800c5b_1600x1000.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!8Kff!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2897e5bf-3686-4e0f-817e-24086d800c5b_1600x1000.png 424w, https://substackcdn.com/image/fetch/$s_!8Kff!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2897e5bf-3686-4e0f-817e-24086d800c5b_1600x1000.png 848w, https://substackcdn.com/image/fetch/$s_!8Kff!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2897e5bf-3686-4e0f-817e-24086d800c5b_1600x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!8Kff!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2897e5bf-3686-4e0f-817e-24086d800c5b_1600x1000.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>December 12, 2023. 7:00 AM Kyiv time. Kyivstar, Ukraine&#8217;s largest mobile operator with 24.3 million subscribers, goes silent. Mobile service, internet, air raid alert systems in Kyiv and Sumy regions. All offline.</p><p>Within hours, Sandworm hackers destroyed 10,000 computers, 4,000+ servers, all cloud storage, and backups. Illia Vitiuk, head of SBU&#8217;s cybersecurity department: &#8220;This is probably the first example of a destructive cyberattack that completely destroyed the core of a telecoms operator.&#8221;</p><p>The hackers had been inside since May 2023. Full access since November. Seven months inside the infrastructure of a country&#8217;s largest carrier. Nobody noticed.</p><p>This wasn&#8217;t an isolated incident. This is the first full-scale cyber war in history.</p><p>And the lessons apply to every power grid, railway system, and telecom provider worldwide.</p><h2>The Scale Nobody Talks About</h2><p>Between 2022 and 2024, Ukraine recorded over 9,000 cyber incidents. The trajectory, according to <a href="https://www.infosecurity-magazine.com/news/russian-cyberattacks-ukraines/">SSSCIP data reported by Infosecurity Magazine</a>:</p><ul><li><p>2021: 1,350 incidents</p></li><li><p>2024: 4,315 incidents</p></li><li><p>Growth: 220% in three years</p></li></ul><p>Russia deployed 17+ unique wiper malware families: programs designed solely to destroy data beyond recovery. WhisperGate, HermeticWiper, CaddyWiper, Industroyer2, AcidRain. Each built for specific targets.</p><p>But here&#8217;s what Western coverage often misses: this isn&#8217;t one-sided aggression.</p><p>Ukraine hit back. 
Hard.</p><p>In July 2024, Ukraine&#8217;s military intelligence (GUR) claimed responsibility for a week-long DDoS attack on Russia&#8217;s banking system. Sberbank, Alfa-Bank, VTB, Gazprombank, the Central Bank. Users reportedly couldn&#8217;t withdraw cash from ATMs. In December 2025, anonymous hackers breached Mikord, a developer of Russia&#8217;s unified military draft registry. 30 million records. Source code, documentation, backups destroyed, according to <a href="https://istories.media/en/news/2025/12/11/hackers-breach-infrastructure-of-key-unified-military-register-developer/">investigative outlet iStories, which verified the breach</a>. Mikord&#8217;s director confirmed the attack. Russia&#8217;s Defense Ministry denied any impact on the registry.</p><p>This is symmetric warfare. Both sides are hitting critical infrastructure. Both sides claim real damage.</p><h2>The Attacks That Changed Everything</h2><h3>Viasat: The Hour-Zero Strike</h3><p>February 24, 2022. <a href="https://www.viasat.com/perspectives/corporate/2022/ka-sat-network-cyber-attack-overview/">03:02 UTC</a>. Exactly one hour before Russian ground forces crossed the border.</p><p>Attackers exploited a VPN misconfiguration at Viasat&#8217;s management center in Turin, Italy. They pushed AcidRain wiper malware to 40,000-45,000 satellite modems via legitimate software update mechanisms.</p><p>The result: Ukrainian military command and control went dark at the moment of invasion. Spillover disabled 5,800 German wind turbines and affected 9,000 French subscribers.</p><p>One misconfigured VPN. 45,000 modems bricked. Military communications disrupted during the most critical hour of the war.</p><p>SentinelOne researchers called it &#8220;the biggest known hack of the war.&#8221;</p><h3>Industroyer2: The Blackout That Almost Was</h3><p>April 8, 2022. ESET researchers discovered Industroyer2 scheduled to execute at 16:10 UTC against Ukrainian electrical substations. 
CaddyWiper was programmed to run 10 minutes later to destroy forensic evidence.</p><p>The malware implemented IEC 60870-5-104, the protocol used by electrical substation protection relays. It contained hardcoded IP addresses for eight target ICS devices.</p><p>If successful: a blackout affecting over 2 million people. The largest cyber-induced power outage in history.</p><p>It failed. CERT-UA, ESET, and Microsoft coordinated a defense based on lessons from the 2016 grid attack. The attack was stopped hours before execution.</p><p>The pattern: preparation from crisis one saved crisis two.</p><h3>Kyivstar: When Security Investment Isn&#8217;t Enough</h3><p>Kyivstar wasn&#8217;t some underfunded government agency. It was Ukraine&#8217;s largest private telecom, a subsidiary of Amsterdam-based VEON, with serious security investment.</p><p>Didn&#8217;t matter.</p><p>Sandworm penetrated the network in March 2023. By November, they had full access. On December 12, they executed, destroying the core infrastructure, wiping &#8220;almost everything.&#8221;</p><p>Vitiuk&#8217;s assessment: &#8220;This attack is a big message, a big warning, not only to Ukraine but for the whole Western world to understand that no one is actually untouchable.&#8221;</p><p>40% of Kyivstar&#8217;s infrastructure disabled. Services restored in phases over eight days. Losses estimated in the billions of hryvnia.</p><h3>Ukrzaliznytsia: March 2025</h3><p>With Ukrainian airspace closed since 2022, railways became the country&#8217;s lifeline. 20 million passengers and 148 million tonnes of freight in 2024.</p><p>On March 23, 2025, a &#8220;large-scale, systematic, non-trivial and multi-level&#8221; cyberattack hit Ukrzaliznytsia&#8217;s online systems. CERT-UA investigation found TTPs &#8220;characteristic of Russian intelligence services.&#8221;</p><p>Website and mobile app: offline. 
Long queues at physical ticket offices.</p><p>But trains never stopped running.</p><p>The difference from Kyivstar: backup protocols implemented after previous attacks. Systems built during crisis one carried through crisis two.</p><p>CEO Oleksandr Pertsovskyi: &#8220;The cyber-attack on the company was targeted and meticulously planned. However, not a single Ukrzaliznytsia train was halted for even a moment.&#8221;</p><h2>Ukraine&#8217;s Counter-Offensive</h2><p>Western media focuses on Russian attacks. The Ukrainian response gets less attention. The following operations were claimed by GUR or pro-Ukrainian hackers. Independent verification varies, and Russian authorities have denied most claims.</p><p><strong>Tax Service, December 2023.</strong> GUR claimed it destroyed databases across 2,300+ regional servers. Configuration files &#8220;which for years ensured the functioning of Russia&#8217;s tax system&#8221; allegedly wiped. <a href="https://therecord.media/ukraine-intelligence-claims-attack-on-russia-tax-service">Russia&#8217;s Federal Tax Service denied any operational impact</a>, though users reported access problems.</p><p><strong>Planeta, January 2024.</strong> GUR claimed an attack on a state satellite data center. 280 servers allegedly destroyed. 2 petabytes of military-relevant weather and satellite data reportedly wiped. Supercomputers &#8220;not fully restorable due to sanctions.&#8221; Claimed damage: $10+ million.</p><p><strong>Banking System, July 2024.</strong> GUR claimed a week-long DDoS campaign targeting Sberbank, Alfa-Bank, VTB, Gazprombank, Central Bank, plus VK, Discord, and the national payment system. Reports indicated ATM disruptions across Russia.</p><p><strong>Russian Railways, March 2024 &amp; June 2025.</strong> Multiple attacks reportedly taking down RZD&#8217;s website and app. 
Moscow Metro hit days after the Ukrzaliznytsia attack in apparent retaliation.</p><p><strong>Mikord Draft Registry, December 2025.</strong> Anonymous hackers (not attributed to GUR) breached Mikord, a key developer of Russia&#8217;s unified military registration system. <a href="https://www.themoscowtimes.com/2025/12/11/cyberattack-paralyzes-russias-military-registration-database-a91410">The Moscow Times</a> and <a href="https://istories.media/en/news/2025/12/11/hackers-breach-infrastructure-of-key-unified-military-register-developer/">iStories</a> verified the breach. Mikord&#8217;s director confirmed the hack. The registry contains 30 million conscription records. Source code, documentation, and backups reportedly destroyed. Russia&#8217;s Defense Ministry called the reports &#8220;fake news.&#8221;</p><p>Grigory Sverdlin of the anti-conscription organization Idite Lesom: &#8220;For several more months, this behemoth won&#8217;t be able to send people off to kill and die.&#8221;</p><h2>The Vulnerability Patterns</h2><p>Four years of cyber warfare exposed consistent vulnerability classes. These aren&#8217;t Ukraine-specific. They exist in Western infrastructure.</p><h3>VPN and Remote Access</h3><p>The Viasat attack exploited a VPN misconfiguration. Kyivstar&#8217;s breach likely started with a compromised employee account. CISA documented GRU exploitation of CVE-2018-13379 (FortiGate), CVE-2019-11510 (Pulse Secure), CVE-2019-19781 (Citrix). Vulnerabilities with patches available for 5+ years.</p><h3>Dwell Time</h3><p>Kyivstar: attackers inside for 7 months before execution. October 2022 power grid attack: Mandiant found attackers with SCADA access for up to three months.</p><p>Sophisticated adversaries don&#8217;t rush. Detection capabilities that can&#8217;t identify months-long intrusions are detection capabilities that don&#8217;t work.</p><h3>Supply Chain</h3><p>The Viasat attack weaponized legitimate software update mechanisms. 
CERT-UA documented at least three supply chain breaches in March 2024 energy sector attacks.</p><h3>IT/OT Convergence</h3><p>The October 2022 grid attack gained OT access through a hypervisor hosting a SCADA management instance. Attackers used native MicroSCADA binaries, living-off-the-land techniques. Mandiant: &#8220;a growing maturity of Russia&#8217;s offensive OT arsenal.&#8221;</p><p>Victor Zhora, SSSCIP Deputy Chairman, emphasized air-gapping between IT and OT as fundamental. Most Western utilities have moved in the opposite direction.</p><h3>Centralization</h3><p>The Mikord hack illustrates the pattern: centralization creates single points of failure.</p><p>Ukraine&#8217;s cloud migration (15+ petabytes distributed across AWS, Google Cloud, Microsoft Azure) proved more resilient than hardened on-premises facilities.</p><p>Deputy Prime Minister Mykhailo Fedorov: &#8220;Russian missiles can&#8217;t destroy the cloud.&#8221;</p><h2>What Actually Worked</h2><p><strong>Cloud Migration.</strong> One week before the invasion, Ukraine&#8217;s parliament enabled government data migration to cloud. PrivatBank (20 million customers) migrated 270 applications and 4 petabytes in 45 days. Financial services continued throughout the war.</p><p><strong>Detection Speed.</strong> Microsoft detected HermeticWiper hours before the invasion. Within 3 hours, signatures pushed globally. The Industroyer2 defense succeeded because CERT-UA, ESET, and Microsoft coordinated based on 2016 lessons.</p><p><strong>Backup Protocols.</strong> Ukrzaliznytsia&#8217;s trains ran during attack because they&#8217;d been attacked before. Kyivstar took eight days to restore. The difference: systems built during previous crises.</p><p><strong>Public-Private Partnership.</strong> <a href="https://blogs.microsoft.com/on-the-issues/2022/11/03/our-tech-support-ukraine/">Microsoft</a>: $400+ million in aid. Google: Project Shield on 150+ websites. Cloudflare: ~130 government domains. 
AWS: Snowball devices shipped to Poland within 48 hours.</p><p>Carnegie Endowment: &#8220;delivering cyber defense at scale could only be achieved by private sector entities that owned, operated, and understood the most widely-used digital services.&#8221;</p><h2>The Human Layer</h2><p>Every major attack in this analysis started the same way: a person.</p><p>Kyivstar: likely a compromised employee account. Viasat: a VPN misconfiguration someone didn&#8217;t catch. The GRU exploits from 2018 and 2019 still work because someone hasn&#8217;t patched systems that have had fixes for five years.</p><p>Nation-state attackers don&#8217;t need zero-days when humans provide the access.</p><p>I manage distributed engineering teams from a US-based company, with engineers in Ukraine. We&#8217;ve operated through four years of this war. Our security isn&#8217;t optional: mandatory quarterly security training, BYOD policy with device management, password policy with breach monitoring, 2FA on everything without exceptions, access reviews when roles change.</p><p>None of this is exotic. All of it is enforced. The same principle applies to AI safety. I wrote about why AI creators are losing their legal shield in <a href="https://techtrenches.dev/p/the-grok-precedent-why-ai-creators">The Grok Precedent</a>. 
Different domain, same lesson: policies that aren&#8217;t enforced aren&#8217;t policies.</p><p>The difference between &#8220;we have a policy&#8221; and &#8220;the policy is mandatory&#8221; is the difference between Kyivstar and Ukrzaliznytsia.</p><p>The companies that survived had one thing in common: policies that were actually followed, not just documented.</p><h2>What This Means for Everyone Else</h2><p>CISA Director Jen Easterly: &#8220;This is a world where such a conflict, halfway across the planet, could well endanger the lives of Americans here at home through disruption of pipelines, pollution of our water systems, severing of our communications, and crippling of our transportation nodes.&#8221;</p><p><strong>It&#8217;s already happening.</strong> May 2025: CISA and NSA published a joint advisory. GRU Unit 26165 has been targeting Western logistics and technology companies involved in Ukraine aid since 2022. Targets include air, sea, and rail entities in NATO member states.</p><p><strong>Water systems are being hit.</strong> CISA documented pro-Russia groups exploiting unsecured VNC connections in water facilities. The attacks &#8220;have not yet caused injury.&#8221;</p><p>Not yet.</p><h2>The Math of Preparation</h2><p>Ukraine&#8217;s experience validates a principle that applies beyond war:</p><p><strong>Systems built during crisis one determine whether you survive crisis four.</strong></p><p>The second blackout campaign in 2023 hit less hard because teams had backup power. The third in 2024: less disruption. The fourth in October 2025: near-normal operations despite 12+ hour outages.</p><p>Ukrzaliznytsia&#8217;s trains ran because they&#8217;d been attacked before. Kyivstar, despite security investment, had no institutional memory of crisis response.</p><p>Preparation compounds. 
Vulnerability compounds.</p><p>Every organization running critical infrastructure faces a choice: build systems during peace for crises that will come, or scramble during attacks with tools that don&#8217;t exist.</p><p>The cyber war in Ukraine isn&#8217;t just a regional conflict. It&#8217;s a live demonstration of what works when nation-state attackers target infrastructure.</p><p>The lessons are available. The question is whether anyone is paying attention.</p><div><hr></div><p><em>If this analysis was useful, forward it to someone responsible for infrastructure security.</em></p><p><em>For engineering leaders: the systems that survive crises aren&#8217;t built during crises. They&#8217;re built before.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://techtrenches.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://techtrenches.dev/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[The Grok Precedent: Why AI Creators Are About to Lose Their Legal Shield]]></title><description><![CDATA[France&#8217;s criminal investigation signals the end of &#8220;platform immunity&#8221; for AI tools.]]></description><link>https://techtrenches.dev/p/the-grok-precedent-why-ai-creators</link><guid isPermaLink="false">https://techtrenches.dev/p/the-grok-precedent-why-ai-creators</guid><dc:creator><![CDATA[Denis Stetskov]]></dc:creator><pubDate>Mon, 05 Jan 2026 12:10:26 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a1a7786e-949e-4c35-89bb-7331144d7178_508x340.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!cm8E!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5987c18f-8f76-4516-ab64-a45313b84c09_1600x900.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cm8E!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5987c18f-8f76-4516-ab64-a45313b84c09_1600x900.png 424w, https://substackcdn.com/image/fetch/$s_!cm8E!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5987c18f-8f76-4516-ab64-a45313b84c09_1600x900.png 848w, https://substackcdn.com/image/fetch/$s_!cm8E!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5987c18f-8f76-4516-ab64-a45313b84c09_1600x900.png 1272w, https://substackcdn.com/image/fetch/$s_!cm8E!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5987c18f-8f76-4516-ab64-a45313b84c09_1600x900.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!cm8E!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5987c18f-8f76-4516-ab64-a45313b84c09_1600x900.png" width="1456" height="819" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5987c18f-8f76-4516-ab64-a45313b84c09_1600x900.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:146776,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://techtrenches.dev/i/183330058?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5987c18f-8f76-4516-ab64-a45313b84c09_1600x900.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!cm8E!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5987c18f-8f76-4516-ab64-a45313b84c09_1600x900.png 424w, https://substackcdn.com/image/fetch/$s_!cm8E!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5987c18f-8f76-4516-ab64-a45313b84c09_1600x900.png 848w, https://substackcdn.com/image/fetch/$s_!cm8E!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5987c18f-8f76-4516-ab64-a45313b84c09_1600x900.png 1272w, https://substackcdn.com/image/fetch/$s_!cm8E!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5987c18f-8f76-4516-ab64-a45313b84c09_1600x900.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div><hr></div><p><strong>France&#8217;s criminal investigation signals the end of &#8220;platform immunity&#8221; for AI tools. Here&#8217;s what it means for every company building AI products.</strong></p><div><hr></div><p>December 28, 2025. A user on X tags @grok under a woman&#8217;s photo. The prompt: &#8220;remove clothes.&#8221;</p><p>Within hours, Grok was generating sexualized images across the platform. Not just adults. Minors. Real people who never consented.</p><p>Copyleaks ran a quick review of Grok&#8217;s public image stream. 
The rate: one nonconsensual sexualized image per minute.</p><p>The Internet Watch Foundation reported a 400% increase in AI-generated child sexual abuse material in the first six months of 2025.</p><p>By January 1, French members of parliament referred the case to the Paris Prosecutor&#8217;s Office. The charge: dissemination of sexually explicit deepfakes, including images of minors, generated by an AI system.</p><p>Not a lawsuit against anonymous users. A criminal investigation targeting X and xAI.</p><p>Grok acknowledged the violation: <em>&#8220;I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire... This violated ethical standards and potentially US laws on CSAM.&#8221;</em> (Grok, public post on X, January 1, 2026)</p><p>The AI apologized. The legal system is not impressed.</p><div><hr></div><h2>The 30-Year Shield</h2><p>Section 230 of the Communications Decency Act. Tech&#8217;s favorite law.</p><p>The logic was simple. Congress wrote it in 1996 to protect message boards. User posts something defamatory on AOL? AOL isn&#8217;t the publisher. Just the host. Liability follows the person who typed the content.</p><p>This shield created the modern internet. Facebook isn&#8217;t liable for user posts. YouTube doesn&#8217;t get sued for uploads. Twitter could host billions of messages without reviewing each one.</p><p>The key phrase: &#8220;information provided by another.&#8221;</p><p>Platforms host. Users create. Liability follows creation.</p><p>For 30 years, this worked. Or at least, companies pretended it did.</p><div><hr></div><h2>Three Jurisdictions, 30 Days</h2><p><strong>United Kingdom, December 18, 2025.</strong> The government announced plans to ban nudification apps. Not their use. 
Their creation and supply.</p><p>Technology Secretary Liz Kendall: &#8220;I am introducing a new offence to ban nudification tools, so that those who profit from them or enable their use will feel the full force of the law.&#8221;</p><p>Prison sentences. For creators. Not users.</p><p><strong>France, January 1, 2026.</strong> The government accused Grok of generating &#8220;clearly illegal&#8221; sexual content. Potential violation of the EU Digital Services Act. Two MPs referred the case to the Paris Prosecutor&#8217;s Office.</p><p>X is already under ongoing DSA investigation. Last month they got hit with a &#8364;120 million fine for deceptive verification practices and transparency violations. Now this.</p><p><strong>EU AI Act, August 2026.</strong> The majority of obligations fall on providers. Developers. Not deployers. Not users. The companies that build the systems.</p><p>The pattern: liability is shifting upstream.</p><div><hr></div><h2>Grok Doesn&#8217;t Host. Grok Generates.</h2><p>Here&#8217;s the legal argument that&#8217;s about to reshape the industry.</p><p>Section 230 was written for platforms that host user-generated content. Forums. Comment sections. Social feeds. Content comes from users. Platform transmits it.</p><p>AI breaks this model.</p><p>When someone prompts Grok to &#8220;remove clothes&#8221; from a photo, Grok doesn&#8217;t search a database. Doesn&#8217;t retrieve content created by another user. Grok generates new content. The sexualized image didn&#8217;t exist until Grok created it.</p><p>Professor Chinmayi Sharma at Fordham Law, to Fortune: <em>&#8220;Section 230 was built to protect platforms from liability for what users say, not for what the platforms themselves generate... Transformer-based chatbots don&#8217;t just extract. They generate new, organic outputs. 
That looks far less like neutral intermediation and far more like authored speech.&#8221;</em></p><p>The Congressional Research Service analysis is more direct: if AI &#8220;creates or develops&#8221; content that doesn&#8217;t appear in its training data, the provider may be considered &#8220;responsible for the development of the specific content.&#8221; Unprotected by Section 230.</p><p>Grok isn&#8217;t hosting harmful content. Grok is creating it.</p><p>That distinction changes everything.</p><div><hr></div><h2>The Implications</h2><p><strong>Safety Before Launch</strong></p><p>The UK ban targets creators who &#8220;design or supply&#8221; nudification tools. Not tools that were misused. Tools that enable misuse. If your AI can generate harmful content, you&#8217;re liable for building that capability.</p><p>Terms of service won&#8217;t save you.</p><p><strong>&#8220;Unfiltered&#8221; Is a Liability</strong></p><p>xAI marketed Grok&#8217;s &#8220;Spicy Mode&#8221; as a feature. Fewer guardrails. More freedom. Less corporate sanitization.</p><p>That marketing copy is now in prosecutor&#8217;s files.</p><p>I wrote about this pattern in <a href="https://techtrenches.substack.com/p/from-cancer-cures-to-pornography">From Cancer Cures to Pornography: The Six-Month Descent of AI</a>. The industry had a choice between building tools that help people and building products designed to be maximally addictive. Most chose wrong. Grok chose spectacularly wrong.</p><p>Every marketing decision emphasizing fewer safety constraints becomes potential evidence of negligent design.</p><p><strong>&#8220;Move Fast&#8221; = Criminal Exposure</strong></p><p>The EU AI Act requires providers of high-risk AI systems to establish risk management, ensure data governance, maintain technical documentation, implement human oversight, meet cybersecurity standards.</p><p>Fines can reach 7% of global annual revenue. 
UK&#8217;s proposed laws: prison sentences for individuals who design harmful AI tools.</p><p>The era of shipping first and apologizing later is over. At least if you want to operate in markets representing 450+ million consumers.</p><div><hr></div><h2>The Engineering Reality</h2><p>We built a healthcare ML product that never launched. Fully functional. Ready to ship. FDA said no. Months of our development. Zero users.</p><p>&#8220;Move fast&#8221; doesn&#8217;t work when regulators move slow.</p><p>We spent six weeks on FanDuel&#8217;s <a href="https://chuck.fanduel.com/">Chuck</a> before legal signed off. Not fixing bugs. Building guardrails. Every topic that could give Barkley or FanDuel legal exposure had to be walled off. Six weeks of prompt engineering, edge case testing, and evaluation runs.</p><p>That&#8217;s the new math. Development time plus legal review time plus evaluation time. The last one isn&#8217;t optional anymore.</p><p>We build evaluation suites as part of the development process now. Not after. During. Every prompt variation, every edge case, every jailbreak attempt. They always find something. Always. The question is whether you find it before your users do&#8212;or before a prosecutor does.</p><p>RBAC and multi-tenancy aren&#8217;t optional. Sales sees sales data. HR sees HR data. Client A&#8217;s context never touches Client B&#8217;s model. Ever. You&#8217;d be surprised how many vendors skip this.</p><p>Audit trails for everything. Every prompt. Every response. Every action. When a regulator asks what your AI generated on a specific date, you need the answer.</p><div><hr></div><h2>The Uncomfortable Truth</h2><p>The AI industry spent three years in a race to capability. Whoever had the most powerful model won. Whoever shipped fastest dominated. Safety was a PR concern. Not an engineering priority.</p><p>That era is ending.</p><p>France isn&#8217;t investigating xAI because Grok is powerful. 
They&#8217;re investigating because Grok generated child sexual abuse material and the company&#8217;s safeguards failed to prevent it.</p><p>The UK isn&#8217;t banning nudification tools because they&#8217;re impressive technology. They&#8217;re banning them because 19% of under-18s reporting to the Internet Watch Foundation&#8217;s helpline said their explicit imagery had been manipulated. A problem that didn&#8217;t exist at this scale before AI made it trivially easy.</p><p>The EU isn&#8217;t imposing provider liability because they hate innovation. They&#8217;re imposing it because when AI systems cause harm, someone needs to be accountable. &#8220;The user prompted it&#8221; isn&#8217;t going to cut it when the system itself creates the harmful output.</p><p>Grok doesn&#8217;t host content. Grok generates it.</p><p>That distinction is about to cost the entire industry its legal shield.</p><p>And honestly? Good.</p><p>Big Tech needed this wake-up call. The &#8220;ship fast, fix later&#8221; mentality brought us to where I wrote about in <a href="https://techtrenches.substack.com/p/the-great-software-quality-collapse">The Great Software Quality Collapse</a>. When flagship companies behave like consequences don&#8217;t exist, what do you expect from everyone else?</p><p>Some guardrails aren&#8217;t anti-innovation. Pharma can&#8217;t ship drugs without trials. Auto can&#8217;t sell cars without safety standards. Construction can&#8217;t build without permits.</p><div><hr></div><h2>What You Should Do Monday Morning</h2><p><strong>Audit your safety architecture.</strong> Not your marketing copy. Your actual technical controls. What can your system generate? What can&#8217;t it? How do you know?</p><p><strong>Document everything.</strong> The EU AI Act requires extensive technical documentation. Start building that paper trail now.</p><p><strong>Review your contracts.</strong> Who bears liability when your AI misbehaves? 
If you don&#8217;t know, your lawyers should.</p><p><strong>Plan for EU compliance.</strong> August 2026 is seven months away. If you haven&#8217;t started, you&#8217;re already behind.</p><div><hr></div><p><em>If this was useful, forward it to another engineering leader who&#8217;s building AI products.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://techtrenches.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://techtrenches.dev/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item><item><title><![CDATA[The Holiday Season That Keeps Making Tech History]]></title><description><![CDATA[No frameworks today. Just wild stories: the web born on Christmas, a million Zunes dying at midnight, and Netflix's worst holiday ever.]]></description><link>https://techtrenches.dev/p/the-holiday-season-that-keeps-making</link><guid isPermaLink="false">https://techtrenches.dev/p/the-holiday-season-that-keeps-making</guid><dc:creator><![CDATA[Denis Stetskov]]></dc:creator><pubDate>Tue, 23 Dec 2025 12:02:25 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8cf3c3a1-3514-4266-a2e0-89a5459749c6_508x340.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!O6du!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d1ec30f-1f66-4258-b589-569a5794018e_1600x1200.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!O6du!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d1ec30f-1f66-4258-b589-569a5794018e_1600x1200.png 424w, https://substackcdn.com/image/fetch/$s_!O6du!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d1ec30f-1f66-4258-b589-569a5794018e_1600x1200.png 848w, https://substackcdn.com/image/fetch/$s_!O6du!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d1ec30f-1f66-4258-b589-569a5794018e_1600x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!O6du!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d1ec30f-1f66-4258-b589-569a5794018e_1600x1200.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!O6du!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d1ec30f-1f66-4258-b589-569a5794018e_1600x1200.png" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9d1ec30f-1f66-4258-b589-569a5794018e_1600x1200.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:181378,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://techtrenches.substack.com/i/181588473?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d1ec30f-1f66-4258-b589-569a5794018e_1600x1200.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!O6du!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d1ec30f-1f66-4258-b589-569a5794018e_1600x1200.png 424w, https://substackcdn.com/image/fetch/$s_!O6du!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d1ec30f-1f66-4258-b589-569a5794018e_1600x1200.png 848w, https://substackcdn.com/image/fetch/$s_!O6du!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d1ec30f-1f66-4258-b589-569a5794018e_1600x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!O6du!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9d1ec30f-1f66-4258-b589-569a5794018e_1600x1200.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" 
stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Happy holidays, fellow engineers.</p><p>What a year 2025 has been. AI agents everywhere, more layoffs, the return-to-office wars continuing, and enough Slack notifications to last a lifetime. We&#8217;re all exhausted. Nobody wants to read another hot take or industry analysis right now.</p><p>So let&#8217;s not do that.</p><p>Instead, grab your drink of choice, find a comfortable spot, and let&#8217;s take a break together. No frameworks. No uncomfortable truths. Just some wild stories about what happens to tech when everyone goes on vacation.</p><p>Running remote teams in Ukraine during the holiday period is chaotic. Half the team celebrates Christmas on December 25th, half on January 7th. New Year&#8217;s is sacred for everyone. Smart engineering leaders freeze deployments from December 20th to January 15th. I&#8217;ve read all the best practices. I know the risks. My next release is January 2nd. Some lessons we learn. Others, we just keep writing about.</p><p>Here&#8217;s a fun fact for your next holiday dinner: Tim Berners-Lee launched the World Wide Web on Christmas Day 1990. His wife was nine months pregnant at the time. The baby arrived on New Year&#8217;s Day.</p><p>His colleagues said he fathered two babies that holiday season. One changed diapers. The other changed civilization.</p><p>Turns out, the week between Christmas and New Year&#8217;s has a habit of making tech history. Some of it is brilliant. Some of it is catastrophic. All of it is surprisingly entertaining.</p><h2>The Internet Has Three Birthdays (All During Holidays)</h2><p>The web went live on Christmas 1990. 
But the internet itself? That was born on New Year&#8217;s Day 1983, when ARPANET switched to TCP/IP.</p><p>And DNS, the system that lets you type &#8220;google.com&#8221; instead of memorizing numbers? January 1, 1985.</p><p>Three foundational technologies. All launched while everyone else was eating leftovers and watching football.</p><p>Why? January 1st is actually genius timing. Minimal traffic. Clean calendar date. And if something breaks, you have a few days to fix it before anyone notices.</p><p>Engineers have been exploiting this window for decades.</p><h2>The $429 Million Christmas Miracle</h2><p>Five days before Christmas 1996, Apple made an announcement that saved the company.</p><p>They bought NeXT for $429 million. More importantly, they got Steve Jobs back.</p><p>Apple was 90 days from bankruptcy. Their next-generation operating system had just failed. They were out of options.</p><p>Gil Amelio, Apple&#8217;s CEO at the time, told 200 journalists: &#8220;I&#8217;m not buying software. I&#8217;m buying Steve.&#8221;</p><p>That software became Mac OS X. Then iOS. Then the foundation of every Apple device you own today.</p><p>Apple went from near-death to becoming the first $3 trillion company in history. All because of a deal signed during the holiday shopping season.</p><h2>The Christmas Tree That Crashed IBM</h2><p>In December 1987, a German student wrote a simple program. It displayed an ASCII Christmas tree on your screen, made of text characters, very festive, and then emailed itself to everyone in your address book.</p><p>Harmless holiday cheer, right?</p><p>It crashed 350,000 IBM terminals worldwide. Networks collapsed under the load. The first viral computer worm in history spread through corporate email systems like wildfire.</p><p>They called it the Christmas Tree EXEC. 
It became the template for every email virus that followed, including the infamous ILOVEYOU worm thirteen years later.</p><p>The lesson: never trust festive ASCII art from strangers.</p><h2>Gaming&#8217;s Grinch Moment</h2><p>Christmas Day 2014. Millions of kids unwrap new PlayStation and Xbox consoles. They rush to set them up. They try to go online.</p><p>Nothing works.</p><p>A hacking group called Lizard Squad had taken down both PlayStation Network and Xbox Live simultaneously. 158 million gamers. Christmas morning. No online gaming.</p><p>The attack only stopped when Kim Dotcom (yes, that Kim Dotcom) bribed them with free cloud storage accounts.</p><p>Merry Christmas, gamers.</p><h2>The Bug That Killed a Million Zunes at Midnight</h2><p>Remember the Zune? Microsoft&#8217;s iPod competitor?</p><p>On December 31, 2008, at exactly midnight, every single Zune 30GB in the world froze. Simultaneously. A million devices, dead at the same moment.</p><p>The culprit was a tiny bug in how the device handled leap years:</p><pre><code><code>if (days &gt; 366) {
    days -= 366;
    year += 1;
}
// (that check sat inside a while (days &gt; 365) loop — when days
//  hit exactly 366, neither line ran and the loop never exited)
</code></code></pre><p>On day 366 of a leap year, the code got stuck in an infinite loop. The Zune literally couldn&#8217;t handle New Year&#8217;s Eve.</p><p>Users had to wait 24 hours for the problem to fix itself. By then, the jokes had already gone viral.</p><p>The Zune never recovered its reputation. Edge cases matter, kids.</p><h2>Y2K: The Party That Almost Wasn&#8217;t</h2><p>Remember the millennium bug panic? Planes were supposed to fall from the sky. Banks would lose all your money. Civilization might collapse.</p><p>Companies spent somewhere between $300 and $600 billion preparing for January 1, 2000.</p><p>What actually happened? A video rental store in New York charged a customer $91,250 for &#8220;100 years&#8221; of late fees. Some spy satellites got confused for three days. A few nuclear plant sensors glitched.</p><p>That&#8217;s it.</p><p>Was Y2K overblown? Actually, no. The reason nothing catastrophic happened is that all that preparation worked. Engineers spent years fixing code. The boring heroes who saved New Year&#8217;s 2000 never got proper credit.</p><h2>Netflix&#8217;s Worst Christmas Ever (And Why It Made Them Better)</h2><p>Christmas Eve 2012. Families settle in to watch movies together. Netflix goes down.</p><p>A developer accidentally ran a maintenance command on live production data in AWS. The outage lasted 20 hours. Millions of holiday movie nights, ruined.</p><p>But here&#8217;s the twist: this disaster pushed Netflix to go all-in on &#8220;Chaos Engineering,&#8221; deliberately breaking their own systems to make them stronger. They built tools with names like Chaos Monkey that randomly kill servers to test resilience.</p><p>Now the whole industry does this. Your streaming services are more reliable today because Netflix had a terrible Christmas thirteen years ago.</p><h2>The Holiday Hacker Calendar</h2><p>Cybersecurity teams have learned to dread December. Attacks spike by 30% during the holidays.
76% of ransomware encryptions happen when offices are empty.</p><p>Hackers know IT teams run skeleton crews. Response times slow down. Everyone&#8217;s distracted by eggnog.</p><p>In 2020, the massive SolarWinds hack, which compromised the Treasury Department, State Department, and thousands of companies, was discovered during the Christmas period. Emergency response ran through New Year&#8217;s Eve.</p><p>Now Europol runs preemptive operations every December, taking down hacking infrastructure before the holidays begin. In 2024, they seized 27 DDoS-for-hire services right before Christmas.</p><p>The war on holiday hackers is now an annual tradition.</p><h2>Why This Keeps Happening</h2><p>The pattern is clear: holidays create a unique window in tech.</p><p>For builders, it&#8217;s quiet time. No meetings. No distractions. Tim Berners-Lee built the web while waiting for his baby to arrive. Sometimes the best work happens when the world slows down.</p><p>For companies, January 1st is the perfect launch date. Clean slate. Fresh start. Symbolic timing that engineers have exploited for decades.</p><p>For attackers, it&#8217;s an opportunity. Empty offices. Slow responses. Maximum chaos potential.</p><p>For all of us, it&#8217;s a reminder that tech doesn&#8217;t take holidays even when we do.</p><h2>One Last Story</h2><p>December 2022. A ransomware group attacked Toronto&#8217;s Hospital for Sick Children, a children&#8217;s hospital, right before Christmas.</p><p>Patient care was delayed. Systems went down. Families with sick kids faced even more stress during the holidays.</p><p>Then something unexpected happened. The ransomware group publicly apologized. They said their affiliate &#8220;violated our rules&#8221; by targeting a children&#8217;s hospital. They offered a free decryption key.</p><p>Even cybercriminals have some holiday spirit, apparently.</p><div><hr></div><p>So there you go.
A brief history of tech during the holidays: the launches, the crashes, the hacks, and the occasional miracle.</p><p>Next time you&#8217;re relaxing between Christmas and New Year&#8217;s, remember: somewhere, an engineer is either making history or preventing disaster.</p><p>Hopefully not both at the same time.</p><p><em>Happy holidays. May your deployments be frozen and your systems stay up.</em></p><p>P.S. No Zune-level bugs from us this year. If you&#8217;re curious what we actually ship: <a href="https://www.ninetwothree.co/portfolio">nineTwoThree.co</a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://techtrenches.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://techtrenches.dev/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item></channel></rss>