7/6/2025, 4:44:06 AM
>>96021698
>The calculation methodology doesn't preclude "really thinking"
I meant that it doesn't form abstract rules independent of language: it just searches for the most likely next token to slap down given the history of tokens in its current statement/chat. The AI for a grunt in a vidya or a chess-playing bot doesn't "really think" if you want to get all metaphysical, but it does think in the sense that it has a series of rules based on the task at hand.
Instead of going "I should play Card X to defeat Card Y because Rule 7 says that will happen", it just goes "After doing enough random shit, playing Card X after I see Card Y tends to end in a win". It doesn't have actual planning or objectives the same way something like a chess bot would actually be "thinking" some number of moves ahead or a shooting game enemy "thinks" "I am being attacked; cover protects me; I should move to the nearest 'cover' node on the map".
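To make the contrast concrete, here's a minimal sketch of what chess-bot-style lookahead actually does. The game is a toy (Nim: take 1–3 stones, last stone wins), not any real engine, and all names here are illustrative:

```python
# Toy illustration of explicit lookahead (minimax) -- the "thinking
# some number of moves ahead" kind of AI, as opposed to statistical
# next-token prediction. Game: Nim, players alternately remove 1-3
# stones from a pile; whoever takes the last stone wins.

def minimax(stones, maximizing):
    """Return the best achievable outcome (+1 win, -1 loss) for the
    maximizing player by explicitly searching every line of play."""
    if stones == 0:
        # The previous player just took the last stone and won.
        return -1 if maximizing else 1
    moves = [m for m in (1, 2, 3) if m <= stones]
    results = [minimax(stones - m, not maximizing) for m in moves]
    return max(results) if maximizing else min(results)

def best_move(stones):
    """Pick the move whose searched outcome is best -- 'Rule 7 says
    that will happen', not 'this tends to end in a win'."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    return max(moves, key=lambda m: minimax(stones - m, False))
```

With 7 stones, `best_move(7)` returns 3, leaving the opponent the provably losing pile of 4: the bot can point to the exact line of play that justifies the move, which is the planning an LLM's token-by-token sampling doesn't do.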
Go look at that Claude Plays Pokemon thing if you want an example of the sort of "thinking" LLMs do.