Search Results

Found 2 results for "33e4e59064077bdc3c217fed5e484c82" across all boards, searching by MD5.

Anonymous /g/105872817#105875601
7/12/2025, 1:33:06 AM
>>105874937
>We can do it now.
>And models whose state is not kept as a kv_cache. In those, the entire state changes as the inputs come in.

Elaborate. How does the model self-update? Even in the realm of fantasies other anons have proposed here in the past, none of them, or anyone else anywhere, has actually proposed a method for how a model can update itself without re-finetuning itself (which, even if it could do that, would be monstrously inefficient and time-consuming, and wouldn't even replicate a regular person learning something new and retaining it). An anon a few threads ago mentioned how Bayesian models can (sort of already have) solve the "models don't actually think" problem (kinda, but also not really). But that doesn't solve the problem of a model not being able to actually learn something, at least not in the conventional way that we understand learning.
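
For what the quoted anon is describing, here is a minimal numpy sketch (toy sizes, every name made up) contrasting the two kinds of state: a transformer's kv_cache only ever grows, with past entries left untouched, while an RNN/state-space-style model overwrites its entire fixed-size hidden state on every input. Note the weights W stay frozen in both cases, so a mutating state is still not the model updating itself.

import numpy as np

# Transformer-style: the "state" is an append-only kv_cache;
# entries written for past tokens are never modified.
kv_cache = []
def transformer_step(x):
    kv_cache.append((x, x))               # old entries stay untouched
    return sum(v for _, v in kv_cache) / len(kv_cache)

# RNN/state-space-style: one fixed-size hidden state that is
# rewritten on every input, so the ENTIRE state changes per token.
W = np.random.randn(4, 4) * 0.1           # frozen weights: no learning here
h = np.zeros(4)
def rnn_step(x):
    global h
    h = np.tanh(W @ h + x)                # whole state replaced each step
    return h

for tok in np.random.randn(5, 4):
    transformer_step(tok)
    rnn_step(tok)
# Either way W never changes: state evolution is not self-updating.
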
Anonymous /g/105822371#105832189
7/8/2025, 1:29:07 AM
>>105823837
>>105825549
>>105825799
>>105825825

How can AI models improve themselves without modifying their own weights, understanding how their own training data works, and making edits to that? That would require a very advanced pipeline that even if implemented would take far too long to "self improve" upon. Self-improving models are currently just a meme for the same reasoning models are a meme. They can't actually think, they replicate semantic meaning based on input. I see this as a dude who routinely uses both local and online models for his personal hobbies on the daily. The models THEMSELVES Believe and explain to you why they themselves thinking is fundamentally impossible. They are good for explaining certain complex topics, debugging errors and software, and OKish at RP depending on the model and parameter count. Nothing more. As an AI enthusiast myself, the AGI means still existing kinda pisses me off