Discussion about this post

Fukitol

My current client is a small company, so although I'm in a senior role, I only have to deal with PRs from a few juniors. I feel the burn more on my end, since I wear a lot of hats on such a small team and (under protest) have worked in Claude Code in hopes of improving throughput (a futile hope, I argue). So probably 50% of the LLM code I review in this ongoing experiment, I prompted myself.

I also use it on pet/personal projects where the stakes are low. At worst, something only I use does something unfortunate. This is a fine use case for LLMs I think, but because I have been at this 20+ years I naturally check and recheck every line anyway. And that's where it gets weird. That's where I stare into the void.

It is very difficult to model the "mind" of a stochastic text generator. With other (human) programmers, I can have conversations with them, get a feel for their strengths and weaknesses, know when to second guess something that looks odd in their code and when to assume that, if the logic checks out, they had a reason for doing it the way they did.

Not so, at all, with chatbots. They will speak with absolute confidence, in great depth and detail, on any subject or specialty, then fuck up the most basic things, then spit out some fully functional if inelegant code, then, when asked for a small change, wrongly rewrite an unrelated half of their own code.

This is what fries my brain. With them it's contexts within contexts all the way down, all of them changing constantly according to some deranged Rube Goldbergian clockwork of tensors and matrices. Our brains are not made to deal with this fractal insanity. Huge portions of our psyche are built around dealing with other people, or people-like things, that exist within some boundary of predictability which can be discovered through observation and familiarity.

We leverage this subconsciously whenever we talk to our pets or plants, or see gods and spirits at work in the impersonal forces of nature, or wonder why our code or gadgets are misbehaving. But *especially* when something presents as human, as chatbots do, this whole evolved subconscious architecture kicks in automatically.

And then the bot breaks it. Over and over. And we get exhausted, as we would in a bad relationship with a person with serious issues. Because no matter how much we tell ourselves, logically and consciously, that the bot isn't human and can't be anticipated like one, that's only a single tiny input to the much larger true neural network inside our heads, which begs to differ, and which finds itself confused and disappointed, moment to moment, by the digital demon with whom we're trying to communicate.

whisperer

Good article, Denis, and I hope you are well and safe.

In my opinion, generated code is by itself worthless. It can easily be created at will, in any desired amount.

The actual "value" comes from someone accurately and precisely expressing intent about how a machine should behave (which is best done in a formal language with unambiguous interpretation; we used to call this programming), and from that intent itself being correct, in the sense of properly and reliably solving whatever task or problem led the programmer to write the code in the first place.

This is why using AI to generate large amounts of code and then attempting to critically review it seems asinine to me. The AI cannot read your mind and figure out what your intent is. You have to express it, and if you can properly do so, then translating that to code was always trivial.

Sorry for the rambling.

