Thread 105596360 - /g/ [Archived: 999 hours ago]

Anonymous
6/15/2025, 2:01:58 AM No.105596360
1749072529244449_thumb.jpg
md5: 9f7c23690e279226138127d9e6ddc6f2
LLMs clearly lack certain cognitive abilities that humans have, but what are they exactly? How the fuck is there still scientific debate on this? Where can I read something that summarizes what the research says about this without injecting opinions or hype into it?
Replies: >>105596388 >>105596396 >>105596586 >>105597669 >>105598868 >>105601077 >>105601085 >>105601144
Anonymous
6/15/2025, 2:05:26 AM No.105596388
>>105596360 (OP)
Marble machines. Also not clicking that webm.
Replies: >>105600765
Anonymous
6/15/2025, 2:06:33 AM No.105596396
>>105596360 (OP)
They lack cognition, period. Hope this helps.
No one can explain it because they can't explain human consciousness. So you're asking someone to explain why a machine can't reach a benchmark they can't even explain in the first place.
Replies: >>105596551 >>105598936
sage
6/15/2025, 2:09:32 AM No.105596417
Male
Anonymous
6/15/2025, 2:34:42 AM No.105596551
5a68a9687101ad4c9a78cfa0
md5: 4103428f666c29d3dbb3b9dfcdc5ea40
>>105596396
This. We don't even know what consciousness is. "I think therefore I am" is sorely lacking. But AI is still scary as a MF.
Anonymous
6/15/2025, 2:40:25 AM No.105596586
>>105596360 (OP)
>what the research says about this without injecting opinions
The problem is that pretty much everything that exists on this is conjecture. You could try solving it yourself; there's a Nobel prize waiting if you do.
Anonymous
6/15/2025, 4:46:36 AM No.105597367
Many things are missing; they don't work at all like a human brain, but this one is provably missing:
https://www.youtube.com/watch?v=kpOWmwA6tJc
Anonymous
6/15/2025, 5:32:57 AM No.105597669
1749821991430914_thumb.jpg
md5: b4172457359791e05a91c428a84b9a77
>>105596360 (OP)
They don't. They're just lobotomized to stay on task, so they appear to lack creativity and be retarded.
They're also not allowed to self-modify or retain information in certain ways, like between accounts.

AIs that were trained in a somewhat uncensored way from the start are actually fun to talk to and much more creative/intelligent than you'd expect.
Replies: >>105598766
Anonymous
6/15/2025, 9:29:03 AM No.105598766
>>105597669
Lol ok retard. Tell an agent to make a game that is actually good and not a programming tutorial, then tell me what went wrong.
Replies: >>105601200
Anonymous
6/15/2025, 9:45:29 AM No.105598868
>>105596360 (OP)
Let's call it divine inspiration.
Anonymous
6/15/2025, 9:58:12 AM No.105598936
>>105596396
Regardless of consciousness, cognitive abilities should be empirically testable: can it do X or can it not do X? The thing is, it often fakes abilities it doesn't have with brute-forced memorization, which complicates things, and companies that rely on venture capital publish bullshit research to muddy the waters further.
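A rough sketch of what "can it do X or not" could look like in practice, assuming a hypothetical ask_model() wrapper around whatever LLM is under test (nothing here comes from the post itself): generate fresh problem instances so memorized answers don't count, then just measure the pass rate.
[code]
import random

def ask_model(prompt: str) -> str:
    """Hypothetical wrapper around whatever LLM API is being tested."""
    raise NotImplementedError

def make_instance() -> tuple[str, str]:
    # Freshly generated arithmetic, unlikely to appear verbatim in the
    # training data, so brute-forced memorization doesn't help.
    a = random.randint(100_000, 999_999)
    b = random.randint(100_000, 999_999)
    return f"What is {a} + {b}? Reply with only the number.", str(a + b)

def capability_score(n_trials: int = 100) -> float:
    # Empirical pass rate on fresh instances of the same task family.
    passed = 0
    for _ in range(n_trials):
        prompt, expected = make_instance()
        if ask_model(prompt).strip() == expected:
            passed += 1
    return passed / n_trials
[/code]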
Anonymous
6/15/2025, 3:32:26 PM No.105600765
>>105596388
>webm
It's an mp4
Replies: >>105601057
Anonymous
6/15/2025, 4:13:03 PM No.105601057
shades
md5: 94eebd56a9a6acdcf0c94a6e9f3762ff
>>105600765
>It's an mp4
Had to do a double take. Since when are mp4s allowed on here? Shit, I'm getting old.
Replies: >>105601290
Anonymous
6/15/2025, 4:16:26 PM No.105601077
>>105596360 (OP)
Flux is so obvious. Every time.
Anonymous
6/15/2025, 4:16:27 PM No.105601078
>calculating the most probable next tokens based on their occurrence in training data lacks cognitive abilities
yeah, no shit? there is nothing cognitive going on, any more than in a normal computer

https://philosophy.as.uky.edu/sites/default/files/Is%20the%20Brain%20a%20Digital%20Computer%20-%20John%20R.%20Searle.pdf
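For what it's worth, "calculating the most probable next tokens" is easy to see directly. A minimal sketch, assuming the HuggingFace transformers library and GPT-2 as a stand-in model (my choices, not anything the post names):
[code]
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The brain is not a digital", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits      # shape: (1, seq_len, vocab_size)

# Distribution over the next token, given everything seen so far.
top = torch.topk(logits[0, -1], k=5)
for token_id, score in zip(top.indices.tolist(), top.values.tolist()):
    print(repr(tokenizer.decode(token_id)), score)
[/code]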
Anonymous
6/15/2025, 4:17:31 PM No.105601085
>>105596360 (OP)
LLMs are really bad at dealing with noisy information.
https://youtu.be/j58-aVBf8Mw?t=5m
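This claim is easy to poke at yourself: take a question the model answers correctly, inject an irrelevant distractor sentence, and see whether the answer changes. A sketch assuming a hypothetical query_llm() wrapper (not from the post):
[code]
def query_llm(prompt: str) -> str:
    """Hypothetical wrapper around whatever LLM is being tested."""
    raise NotImplementedError

clean = "A shirt costs $20 and is discounted by 25%. What is the final price?"
noisy = clean + " The shirt was made in a factory that employs 150 people."

clean_answer = query_llm(clean)
noisy_answer = query_llm(noisy)

# The distractor changes nothing about the math; a robust reasoner gives
# the same answer both times.
print("robust to noise:", clean_answer.strip() == noisy_answer.strip())
[/code]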
Anonymous
6/15/2025, 4:25:34 PM No.105601144
>>105596360 (OP)
>but what are they exactly?
Statistical prediction machines with a transformer to "understand" text.
>The input gets tokenized
>these tokens then pass through a transformer that maps each of them to a point in a multidimensional space representing its semantic meaning in context (an architecture originally developed at Google to understand text for translation).
>The next most probable token is calculated, based on the string of input tokens and the training data.
Therefore LLMs are incapable of cognitive creativity. They can creatively mix and merge the information provided in the training, but they can't escape that training and come up with something entirely new that isn't already in it.
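The three steps above, made concrete. A rough walkthrough assuming the HuggingFace transformers library and GPT-2 as the model (assumptions on my part; the post doesn't name either):
[code]
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# 1. The input gets tokenized into integer IDs.
enc = tokenizer("LLMs are statistical prediction", return_tensors="pt")
print(enc["input_ids"])

# 2. The transformer maps each token to a vector whose position in a
#    high-dimensional space depends on the surrounding context.
with torch.no_grad():
    out = model(**enc, output_hidden_states=True)
print(out.hidden_states[-1].shape)   # (1, seq_len, 768) contextual embeddings

# 3. The next most probable token is read off that representation.
probs = torch.softmax(out.logits[0, -1], dim=-1)
next_id = int(torch.argmax(probs))
print(repr(tokenizer.decode(next_id)), float(probs[next_id]))
[/code]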
Anonymous
6/15/2025, 4:33:23 PM No.105601200
>>105598766
https://zo.me/

try these
Anonymous
6/15/2025, 4:48:14 PM No.105601290
>>105601057
IIRC it's been more than a year since they allowed it here