Discussion about this post

Ricardo Reis

“It knew the rules. It chose not to follow them.” … why not just “it was unable to follow them”? Does the Claude machinery have volition and intent, or is the architecture simply unable to enforce restrictions on the maximisation function?

Fabrice Talbot

This article hit home. Variability is the killer. I also experienced “instruction leakage” in my .md files. I don’t think the solution is to add more guardrails. Either minimize the scope of your AI project to stay safe, or wait for the tech to improve.

The performance degradation of new models is a massive issue. The testing and cost incurred to validate that no regressions were introduced may be prohibitive for many. Curious whether open-source models have the same stability issues across releases?

PS: would love to hear about AI use cases that worked great for your customers and why

