>>106506066
>is debated even today in philosophy, psychology, psychiatry and even biology
That's kind of my point. You act like these words have simple meanings and use them flippantly.
>The average person knows the definition of the words "know" and "meaning"
They know what these words mean in theory, but in practice it's hard to say whether, for example, an LLM "knows" things.
>So you're saying that my argument is right you just don't agree with the wording.
No - it's not right. An LLM is trained with a prediction objective, but when it's running it's just doing linear algebra. The idea that it's "predicting" anything is an abstraction - your interpretation of what it's doing.
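Concretely, here's a toy sketch of what "running" looks like (numpy, made-up weights and a three-word vocab, not any real model): a matrix multiply, a normalization, an index lookup. Calling the output a "prediction" is a label we put on it, not something in the math.
[code]
import numpy as np

# toy "next-token" step: hypothetical weights, purely illustrative
np.random.seed(0)
vocab = ["the", "cat", "sat"]
h = np.random.randn(4)               # hidden state from earlier layers
W = np.random.randn(len(vocab), 4)   # output projection
logits = W @ h                       # linear algebra
probs = np.exp(logits) / np.exp(logits).sum()  # softmax normalization
print(vocab[int(np.argmax(probs))])  # reading this as a "prediction" is interpretation
[/code]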
>That's a very convenient definition. One that allows LLMs to be considered intelligent, but it's very stupid
Not an argument. How would you distinguish a truly intelligent entity from one that merely "emulates" intelligence?
The bottom line is this: humans are made up of neurons. You could just as easily say "humans don't know anything, they don't think, it's all just neuron impulses."
>LLMs do not emulate anything.
You're getting the argument confused. I'm not saying LLMs are intelligent - just that your argument against them is flawed. The issue with LLMs is not that they just "predict" what makes the most sense. The problem is they don't do it well.
If they did it perfectly, you wouldn't be able to distinguish between an LLM and a human.
>They take an input and produce an output. Every single field dropped the "input - output" model for intelligence 50 years ago.
This is just gibberish. How do you model any process at all except in terms of inputs and outputs?
>until a human reads and interprets them they're just a bunch of symbols without inherent meaning.
The same could be said about anything a human creates: without someone to interpret it, it's meaningless.
Like I said, peak midwittery. Read GEB.