I completely agree with this article. This is one of the biggest and most immediate problems we'll face with generative AI. The temptation to delegate everything is so strong that we'll give in to it whenever we can. We'd better hope that hallucinations and non-determinism in these tools are conclusively solved; otherwise, no one will be prepared to manage the resulting problems at the moment they're most needed.
Everybody is so sure that scaling will solve every current limitation, but I'm really not convinced that will happen with Llama and its architecture.
The scaling skepticism is warranted. Toby Ord did the math on the actual scaling law graphs: halving model error requires roughly a million times more compute. We're hitting data walls, energy walls, and architectural limits simultaneously.
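The "million times more compute" figure follows from the power-law shape of the scaling curves. As a rough sketch, if reducible error falls as a power law in compute with a small exponent (the value 0.05 below is an illustrative assumption in the range reported for language-model scaling, not an exact figure from the graphs):

```python
# Back-of-envelope check of the "million times more compute" claim,
# assuming reducible error follows a power law in compute:
#     error ∝ compute^(-alpha)
# alpha = 0.05 is an illustrative exponent, not an exact measured value.
alpha = 0.05

# Halving the error means compute must grow by a factor of 2^(1/alpha).
factor = 2 ** (1 / alpha)
print(f"Compute multiplier to halve error: {factor:,.0f}")  # ~1,048,576
```

With an exponent around 0.05, halving the error costs about 2^20 ≈ 10^6 times the compute, which is where the "million times" intuition comes from; a gentler or steeper exponent shifts that number by orders of magnitude.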
And you're right about the timing problem. The people who could catch AI failures are being cut now. By the time the limitations become undeniable, the expertise to manage them will already be gone.
Love this article and it resonates a lot with my personal experience. Thanks for sharing.
Appreciate it. The research confirmed what a lot of us were already feeling.
That's amazing. I feared that my formerly prodigious intellect was going soft and senile. It looks like the pretty princelings of AI, and all things phony, insincere and ultimately shockingly sinister, are well on their way to becoming the malleable morons of the new corporate fascist state.
https://davidgottfried.substack.com/p/3-phenomena-which-auger-unrelenting
Great article - I completely agree, and I'm also happy I'm not the only person seeing/experiencing/living through this dystopian nightmare!
You're not alone. The pattern is everywhere once you start looking. The question is what we do about it individually while the industry figures itself out.