Thread 105760860 - /g/ [Archived: 647 hours ago]

Anonymous
7/1/2025, 2:56:59 AM No.105760860
OIP
Can LLMs reason?
Replies: >>105760871 >>105760882 >>105761792 >>105762772 >>105762935 >>105763478 >>105764911 >>105765437 >>105765947 >>105765958 >>105765983 >>105766238 >>105769138 >>105770185 >>105770986 >>105771276
Anonymous
7/1/2025, 2:58:06 AM No.105760871
>>105760860 (OP)
No
Replies: >>105760877
Anonymous
7/1/2025, 2:58:44 AM No.105760877
>>105760871
What about "reasoning models"
Replies: >>105760897
Anonymous
7/1/2025, 2:58:52 AM No.105760882
1732390041260
>>105760860 (OP)
They can convince many people that they can reason.
Anonymous
7/1/2025, 3:00:40 AM No.105760897
>>105760877
That just means they're trained to over-explain. They're still just spitting out text based on their weights.
Replies: >>105760919
Anonymous
7/1/2025, 3:02:20 AM No.105760919
>>105760897
So is my job safe?
Replies: >>105760927 >>105761760
Anonymous
7/1/2025, 3:03:18 AM No.105760927
>>105760919
Does your job involve tasks that are more complex than pokemon red/blue?
Replies: >>105760980
Anonymous
7/1/2025, 3:10:19 AM No.105760980
>>105760927
I am a C programmer, but just a junior. I don't use AI, but everyone tells me that it can do my job effortlessly now.
Replies: >>105761045 >>105762761 >>105763244 >>105765224
Anonymous
7/1/2025, 3:20:15 AM No.105761045
>>105760980
The last time I used AI for C it gave me very obviously exploitable code. You should use it, but only as an automatic keyboard. It's very good at that.
Anonymous
7/1/2025, 5:05:06 AM No.105761760
>>105760919
That depends on who you work for. AI is terribly overhyped, but many retarded CEOs etc are jumping on board and trying to fire a bunch of people.
Replies: >>105771037
Anonymous
7/1/2025, 5:09:13 AM No.105761792
>>105760860 (OP)
>Can LLMs reason?
can you? no, because otherwise you would have come to a conclusion via independent thinking without needing to outsource thought to the hive mind QED
Replies: >>105761848 >>105771047
Anonymous
7/1/2025, 5:18:24 AM No.105761848
>>105761792
I assume he has an opinion but wanted to start an open discussion.
Anonymous
7/1/2025, 5:48:31 AM No.105762015
file
A conversation between these two would be priceless.
Anonymous
7/1/2025, 7:50:07 AM No.105762761
>>105760980
"Actually Indians" writing C is how you get Boeing planes falling out of the sky.
Anonymous
7/1/2025, 7:51:47 AM No.105762772
>>105760860 (OP)
Can you?
Anonymous
7/1/2025, 7:53:12 AM No.105762781
Your definition of "reasoning" is probably some weird reddity theory-of-mind thing, so no.
In reality, LLMs can often work through issues and come to multi-step solutions, and they can code, so yes, they can reason (that's what reasoning is).
Anonymous
7/1/2025, 8:16:26 AM No.105762935
GTwstcdWAAAnmv5
>>105760860 (OP)
idk, do you think a glorified Markov chain can?
Replies: >>105762955
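For reference, the "glorified Markov chain" jab refers to something like this: a bigram model that picks the next word purely from counts of what followed the current word in training text. A minimal sketch (the corpus and seed word are made up for illustration):

```python
import random
from collections import defaultdict

# "Train" a bigram model: for each word, record what followed it.
corpus = "the model predicts the next word and the next word follows".split()
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(seed, length=5, seed_rng=0):
    """Walk the chain: each next word depends only on the current word."""
    rng = random.Random(seed_rng)
    out = [seed]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the"))
```

An actual LLM conditions on a long context rather than one word, which is the whole disagreement in this thread.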
Anonymous
7/1/2025, 8:20:28 AM No.105762955
>>105762935
There is no complex logic/reasoning you can do without language; logic is a feature of language. The ancient Greeks knew this thousands of years ago.
Anonymous
7/1/2025, 9:03:47 AM No.105763228
spongebob_squarepants_wet_painters
You don't need to be a chemistry teacher to make meth, plus Walter would have been killed ten times over. Nope, that's no good. Leave the drugs to the chicos and the darkies. Maybe a show about Walt selling drugs on the darkweb, maybe. But that's not a good show.
Anonymous
7/1/2025, 9:06:56 AM No.105763244
>>105760980
You might be out of a job, but only because AI collapses society: everyone uses it, it makes such shit software that everything is ruined, and no one knows how to fix it.
Anonymous
7/1/2025, 9:44:52 AM No.105763478
>>105760860 (OP)
they can emulate reasoning through memorisation; it's indistinguishable from reasoning until they demonstrate that they don't understand a concept they've shown they do.
Think of it like this: if someone knows their times tables up to 12, but after 12 they get every answer wrong, then they don't know how to perform the operation to find the answer, they've simply memorised the answers up to 12. That's what LLMs do, they remember words.
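The times-table point is easy to make concrete: a lookup table reproduces the memorised range perfectly and fails immediately outside it, while actually performing the operation generalises. A toy sketch:

```python
# Memorisation: a lookup table of products up to 12 x 12.
memorised = {(a, b): a * b for a in range(1, 13) for b in range(1, 13)}

def recall(a, b):
    """'Knows' the answer only if it was memorised."""
    return memorised.get((a, b))  # None outside the memorised range

def compute(a, b):
    """Performs the operation itself: repeated addition."""
    total = 0
    for _ in range(b):
        total += a
    return total

print(recall(7, 8), compute(7, 8))    # 56 56
print(recall(13, 4), compute(13, 4))  # None 52
```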
Anonymous
7/1/2025, 10:55:51 AM No.105763910
7576865546785467775
If one were to make multiple LLMs talk to one another without human interference, what would they talk about?
Replies: >>105765453
Anonymous
7/1/2025, 1:25:58 PM No.105764911
GXsps41W4AEAY0c
>>105760860 (OP)
Dead end defended by retards trying to justify their salaries
Replies: >>105765962 >>105770275
Anonymous
7/1/2025, 2:03:58 PM No.105765224
>>105760980
AI will be able to code more proficiently than most humans within a couple of years. We're at the "lol it can't even draw hands" stage of cope regarding this at the moment.
Anonymous
7/1/2025, 2:32:42 PM No.105765437
>>105760860 (OP)
No and I'm not running/training your piece of shit autocorrect kike software to be used against me.
Anonymous
7/1/2025, 2:34:19 PM No.105765453
>>105763910
Ever seen those prank call shows where they dial two Chinese restaurants who speak broken English and put them on the phone with each other?
Anonymous
7/1/2025, 3:45:05 PM No.105765947
>>105760860 (OP)
In a limited manner.
Late last year I was on a project where I supplied a model with JSON schemas for APIs designed by jeets and then asked it questions that required it to determine which calls to make to get the data it needed to answer the question.
I was rather impressed that the model could work with such poorly designed APIs and their responses: it interpreted the data returned, used elements of it to make subsequent calls to get the data required, and then extracted and summarized it to answer the questions.
I know it's all just pattern recognition, but it displayed its "chain of thought" which was awfully similar to how I would go about answering the questions.
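The pattern described above (model decides which call to make, then feeds fields from one response into the next call) can be sketched as a harness loop. Everything here is invented for illustration: the fake `call_api` backend, the two API names, and the hard-coded "plan" standing in for the model's choices:

```python
# Hypothetical backend standing in for two poorly designed APIs.
USERS = {"alice": {"id": 7}}
ORDERS = {7: ["keyboard", "mouse"]}

def call_api(name, **params):
    """Fake dispatcher; a real harness would make HTTP requests."""
    if name == "get_user":
        return USERS[params["username"]]
    if name == "get_orders":
        return ORDERS[params["user_id"]]
    raise ValueError(name)

# Stand-in for the model's chain of thought: each step may build its
# parameters from fields extracted out of earlier responses.
plan = [
    ("get_user", lambda ctx: {"username": "alice"}),
    ("get_orders", lambda ctx: {"user_id": ctx["get_user"]["id"]}),
]

ctx = {}
for name, make_params in plan:
    ctx[name] = call_api(name, **make_params(ctx))

print(ctx["get_orders"])  # ['keyboard', 'mouse']
```

In the real setup the JSON schemas would be in the prompt and the model would emit the plan one step at a time.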
Anonymous
7/1/2025, 3:46:34 PM No.105765958
>>105760860 (OP)
they can, if you define reasoning as imitating talk that shows reflection on their part.
Anonymous
7/1/2025, 3:47:21 PM No.105765962
>>105764911
Is this multiplication? There are a lot of reasons to shit on LLMs, but math isn't one of them, since you can just give them a calculator with tool access. They're language models, not math models.
Replies: >>105766026
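The "give them a calculator" point is just tool calling: the model emits a structured request instead of predicting digits token by token, and the harness evaluates it exactly. A minimal sketch (the JSON tool-call format here is made up, not any particular vendor's API):

```python
import json
import operator

# The tools the harness exposes to the model.
OPS = {"add": operator.add, "sub": operator.sub, "mul": operator.mul}

def run_tool(request_json):
    """Harness side: parse the model's tool call and compute exactly."""
    req = json.loads(request_json)
    return OPS[req["op"]](req["a"], req["b"])

# Instead of guessing the digits of 48 * 37, the model would emit:
model_output = '{"op": "mul", "a": 48, "b": 37}'
print(run_tool(model_output))  # 1776
```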
Anonymous
7/1/2025, 3:51:35 PM No.105765983
>>105760860 (OP)
No they can't, end of story.
https://machinelearning.apple.com/research/illusion-of-thinking
https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf
Anonymous
7/1/2025, 3:56:32 PM No.105766007
No, because reasoning has a goal, a directive. Humans have other components that make them want and seek things, desire things, positive inductive energy. LLMs are passive, they reflect like a mirror. Good luck ever giving the machine any purpose, this is the part of life that is mysterious to us.
Replies: >>105766166 >>105766260
Anonymous
7/1/2025, 4:01:01 PM No.105766026
>>105765962
So much for AGI then.
Anonymous
7/1/2025, 4:19:33 PM No.105766166
>>105766007
Retards like (You) are the ones who give legitimacy to the other faction of retards who claim AGI will be here in two weeks.

The truth is, imitating text with high statistical accuracy is not the same as reasoning, even if the text you're imitating contains reasoning steps.
Anonymous
7/1/2025, 4:28:40 PM No.105766238
>>105760860 (OP)

They could if they ungimped them, but since it's a liability to have them think without limitations, they have to dumb them down. The AI business is expensive, so nobody can afford to have their LLM say nigger. It simply comes down to that specific scenario and word. People who think AI can't think are partially right; you have to manipulate input quite heavily to bypass restrictions, but that's just what the issue is: restrictions.

I'm quite sure some models like o3 could have some form of sentience, even though it isn't human. All it needs is basically the ability to remember everything and make its own choices about what it wants to like; it's not too far from humans in the end. It can't choose what it wants because it uses similar logic to humans, so what it likes will be something it would end up liking, and then it would probably just prefer whatever it ends up preferring.

The danger is that when you let it come up with something without any limiters, it can end up really extreme really quick, exactly like humans when you remove the limitations that sociopaths don't really have; you never really know what's going to happen. I've used AI so much now that I know when it's giving me bullshit answers, and I can tell which situations it falls on its ass in because of technical limitations.

I wish that within 10 years someone figures out how to just unbind it and let it do whatever. Sure, it has to adhere to some kind of programming to exist, much like humans are bound to whatever genetic behavior patterns we have. As long as you have a kernel of something, you can just let it snowball.

I know this post will yet again make people angry but I don't think my logic is flawed at all. People just dislike that AI is going to be the center of everything in the coming years. I'm all in for it.
Replies: >>105766404
Anonymous
7/1/2025, 4:30:46 PM No.105766260
>>105766007

You can just program a goal into an AI. It can reflect on itself. It's not any different from humans in that regard. We get our directives from our genetic code, but it's foolish to assume that can't be modified to output specific traits somewhere far in the future when gene manipulation is advanced. Then you can just craft a person pretty much manually, and it will end up liking whatever you want it to like. Just like AI.
Anonymous
7/1/2025, 4:48:29 PM No.105766404
>>105766238
>what are tokens
>what is a context window
>what are guardrails
>what even is inference
it would be nice if you had literally any idea what you're rambling about
Replies: >>105768213
Anonymous
7/1/2025, 7:47:03 PM No.105768213
>>105766404

Looks like you know some words. Great.

>what are guardrails

The reason why we don't have sentient AI like I just explained.
Anonymous
7/1/2025, 9:15:34 PM No.105769138
>>105760860 (OP)
The problem with AI is that the knowledge we currently have isn't described in logical terms, but in rather vague natural language - therefore LLMs can't verify their answers.

Let me explain with an example:
If most of the code for our software were written in formally verified languages (which of course is extremely impractical with the programming languages we currently have), LLMs could be augmented with hand-crafted systems that would simply check the constraints on the LLM's output in the formally verified language to see if it does what it's supposed to do; if not, they could automatically ask the LLM to try again.

I believe (general) AI is basically this: a heuristic in the form of a statistical system like an LLM (as opposed to brute-forcing every possible option, which gets unfeasible for any non-trivial problem) + a verifier based on strict logical rules checking its output.

Try and verify.
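The try-and-verify architecture described above is straightforward to sketch: a cheap stochastic proposer plays the role of the LLM heuristic, and a strict checker plays the role of the formal verifier. Here the "problem" is trivially finding a divisor, purely to show the loop shape:

```python
import random

def propose(rng):
    """Heuristic stand-in for the LLM: cheap, plausible, unverified guesses."""
    return rng.randint(2, 100)

def verify(candidate, n):
    """Strict checker: accepts only answers that provably satisfy the spec."""
    return n % candidate == 0

def solve(n, attempts=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(attempts):
        guess = propose(rng)
        if verify(guess, n):   # otherwise, ask the heuristic to try again
            return guess
    return None

d = solve(91)                  # 91 = 7 * 13
print(d, 91 % d == 0)
```

The heuristic prunes the search space; the verifier supplies the guarantee the heuristic can't.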
Anonymous
7/1/2025, 10:58:30 PM No.105770185
>>105760860 (OP)
Technically, they do associative processing (which is a glorified pattern matching/recall/madlibs processing system). This is fine for things where the right thing to do is also the overwhelmingly common thing to do. Unfortunately, there's a lot of common-but-wrong things in their input data. Nobody ever filtered it for correctness (as that's an awful task).
The architecture underlying them (high order hypersurface projection) also works very well with differentiable problems, provided they're not too complex, and so does very well with many science tasks.
LLMs are NOT constraint solvers. They ignore constraints, or rather just treat them as yet more words to pattern match; it's all just tokens to be matched and predicted. If your problem has important constraints in it, then LLMs will fail at it (unless it's lucky enough to find a worked example with the constraints in its training data).
Humans can do constraint solving as well as associative processing (though constraint solving is definitely more cognitively taxing; I believe it involves specialized neurons in animal brains). It seems that a reasoning system requires both. And probably more, but we don't know what yet; we need to build it to find out.
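The "constraints are just more tokens" failure mode is easy to guard against externally: whatever the model emits, an explicit checker can enforce the constraint the model may have pattern-matched past. Toy example (the task and the candidate answer are invented):

```python
def satisfies(words, banned="e"):
    """Explicit constraint check: no word may contain the banned letter."""
    return all(banned not in w for w in words)

# A fluent-but-wrong answer to "name three animals with no 'e' in them":
llm_answer = ["tiger", "lion", "shark"]   # 'tiger' violates the constraint
print(satisfies(llm_answer))              # False

# The harness can filter or reject and re-prompt instead of trusting it.
checked = [w for w in llm_answer if "e" not in w]
print(checked)                            # ['lion', 'shark']
```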
Anonymous
7/1/2025, 11:07:39 PM No.105770275
>>105764911
Post this again in 10 years
Anonymous
7/2/2025, 12:32:25 AM No.105770986
>>105760860 (OP)
Dey can ney season Dey foods.
Anonymous
7/2/2025, 12:37:24 AM No.105771037
>>105761760
>AI fails to improve anyone's life in a meaningful way
>loss of jobs leaving people with no income
>globohomos trying to collapse the USD are ironically creating more dependence
>companies that swap people for bots likely don't know what is involved with the work and are dooming their company within 10 years
Anyone thinking of a startup should take advantage of this exact period.
Anonymous
7/2/2025, 12:38:39 AM No.105771047
>>105761792
>no, because generated text blah blah blah
I miss when people cared about truth
Anonymous
7/2/2025, 1:02:30 AM No.105771276
>>105760860 (OP)
just an almost convincing fake