Anonymous
8/24/2025, 9:03:11 PM
No.106370829
LLMs cannot think forward. Any counting they appear to do is coincidental: the model produces an answer first and then constructs a "solution" to justify it, not the other way around.
Another problem is that LLMs (especially chat-tuned ones) are heavily RLHFed to sound as confident as possible, so the model never even tries to correct itself after it has generated the nonsense "solution".
That's what's going on there: the AI was never told how many Rs are in Londonderry, so there's nothing in the dataset to work with and it picks 3 more or less at random. Then it spins and "hallucinates" a justification for 3 being correct, because the tuning won't let it walk the answer back. The actual deterministic count is 2, as in the snippet below.
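For comparison, here is a minimal sketch of the deterministic answer the model is being asked to reproduce, just plain Python over the raw string (nothing to do with how the LLM tokenizes the word):

# Case-insensitive count of the letter "r" in the raw string.
word = "Londonderry"
r_count = word.lower().count("r")
print(f"{word} contains {r_count} r's")  # prints: Londonderry contains 2 r's

The point is that this is a trivial lookup over characters, but the model never sees characters directly and has no mechanism to check its own output against one.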
Doesn't really feel like something that can be fixed within the current limitations. This is what LeCun keeps saying, and it makes AI fanatics angry.