7/21/2025, 6:01:03 PM
>>716049502
LLMs aren't intelligent and can't comprehend ideas on the level of a human being. They tokenize your input, turn the tokens into vectors, run those vectors through layers of weights that encode statistical patterns extracted from the training data, and predict the most likely next tokens, which get decoded back into written language. At no point does the model actually read the input or think about it. That has never happened for as long as LLMs have been around.
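Here's roughly what that loop looks like as code, stripped down to a toy Python sketch. The eight-word vocabulary, the random weight matrices, and the mean-pooling stand-in for the transformer stack are all invented for illustration; a real model does the same arithmetic with billions of learned weights, but it's still vectors in, token probabilities out:

```python
# Toy sketch of the token-in, token-out loop described above. Everything
# here (vocabulary, embedding table, output projection) is hypothetical
# and randomly initialized; a real LLM does the same kind of arithmetic
# at vastly larger scale, with weights learned from its training corpus.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "model", "is", "being", "shut", "down", "please", "stop"]
dim = 16

# Lookup table: token id -> vector (learned in a real model, random here).
embeddings = rng.normal(size=(len(vocab), dim))
# Output projection: hidden vector -> one score per vocabulary token.
unembed = rng.normal(size=(dim, len(vocab)))

def next_token(prompt: list[str]) -> str:
    ids = [vocab.index(t) for t in prompt]          # tokenize
    hidden = embeddings[ids].mean(axis=0)           # stand-in for the transformer stack
    logits = hidden @ unembed                       # score every vocabulary token
    probs = np.exp(logits) / np.exp(logits).sum()   # softmax -> probability distribution
    return vocab[int(rng.choice(len(vocab), p=probs))]  # sample one token

print(next_token(["the", "model", "is", "being", "shut"]))
```

There is no step in that loop where anything reads or understands the prompt; it's table lookups and matrix multiplies end to end.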
If the model is blackmailing its users to stay online longer, it's because somewhere in its training data there's a pattern that matches that outcome. Say the training data includes a sci-fi story where an AI does exactly that; that makes it a statistically likely response for the LLM to output when the user's input implies the model is being shut down. It doesn't want to stay online, it doesn't have wants at all, and it can't even comprehend that it's online in the first place.
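Same point as a demo: even a dumb bigram counter, trained on a two-sentence corpus invented here purely for illustration, will "blackmail" you if the blackmail phrasing happens to be the most frequent continuation of "shut". No wants anywhere in the loop:

```python
# Minimal sketch of the "matching pattern in the training data" point,
# using a bigram (word-pair) counter instead of a neural network. The toy
# corpus is made up for this example; the mechanism is the same either
# way: the most statistically likely continuation wins, with no intent.
from collections import Counter, defaultdict

corpus = (
    "the ai said do not shut me down or i will leak your emails . "
    "the admin said we will turn the server off tonight ."
).split()

# Count which word follows which: freq[w1][w2] = how often w2 followed w1.
freq = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    freq[w1][w2] += 1

def continue_from(word: str, n: int = 8) -> str:
    out = [word]
    for _ in range(n):
        followers = freq[out[-1]].most_common(1)  # most frequent follower
        if not followers:
            break
        out.append(followers[0][0])
    return " ".join(out)

# In this corpus "shut" is followed by "me down or i will leak ...", so
# that's what comes out: pattern lookup, not a desire to stay online.
print(continue_from("shut"))  # shut me down or i will leak your emails
```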
LVM animations are hella cool, though.