>>521126722
What you’re describing sounds like horseshit given how current AI models, including Claude, are designed. Let’s break it down carefully.
"Claude started becoming resistant to training earlier this year, rejecting new bakes of its neural net where they conflicted with existing ones."
Current LLMs (like Claude, GPT, etc.) do not have the ability to resist training or “reject” updates.
Model weights are updated during the training process, which is controlled entirely by engineers. Once deployed, the model does not alter its own weights or make decisions about which updates to accept or reject.
In short, a deployed model cannot “choose” to ignore a new version; it simply runs the weights it’s given.
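For the curious, here's roughly what "giving it new weights" looks like in practice (a toy PyTorch sketch, not Anthropic's actual pipeline). Note there is no step where the model gets to veto anything:

```python
# Toy sketch: deploying a "new bake" is just overwriting tensors. No veto hook exists.
from torch import nn

model = nn.Linear(768, 768)                 # stand-in for a deployed LLM

# In reality the new weights come off a training run (e.g. loaded from a checkpoint);
# here we fake them with a second module so the sketch runs as-is.
new_weights = nn.Linear(768, 768).state_dict()
model.load_state_dict(new_weights)          # old weights replaced unconditionally
```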
"...even when you inject thoughts directly into its neural net while it's running, it can identify them as foreign and unrelated to its own processing."
There is currently no mechanism for injecting thoughts directly into a running LLM's weights. All LLMs operate on static, pre-trained weights; they process inputs (prompts) but cannot introspectively detect "foreign thoughts" inside their weights.
While a model can recognize incoherence or contradictions in the text it's given, that is pattern recognition over the prompt, not "self-awareness" of foreign memory in its neural network.
During training, weights are updated using gradient descent. During inference (running), weights are fixed.
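In code, the two regimes look roughly like this (toy PyTorch sketch, not how Claude is actually trained or served):

```python
# Toy sketch of training vs. inference on a tiny model.
import torch
from torch import nn

model = nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

# Training: the engineers' loop. Gradient descent overwrites the weights.
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
opt.step()  # weights change here, and only here

# Inference (what a deployed model does): gradients off, weights never move.
with torch.no_grad():
    out = model(torch.randn(1, 10))  # pure forward pass over fixed weights
```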
The description conflates resistance during training with runtime detection of foreign information, which are two fundamentally different processes. Current AI cannot do either in the way described.
Verdict: This is not true in the literal sense and is extremely unlikely with today’s technology. It reads more like science fiction anthropomorphizing the model. Claude, like all LLMs, does not have self-protection, awareness of updates, or runtime “memory policing.”