>>105760860 (OP)
Technically, they do associative processing (a glorified pattern-matching/recall/madlibs system). That's fine when the right thing to do is also the overwhelmingly common thing to do. Unfortunately, there's a lot of common-but-wrong material in their training data, and nobody ever filtered it for correctness (that's an awful task).
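Toy sketch of what I mean by associative recall (my own cooked-up data, obviously a cartoon of the real thing, which runs over embeddings rather than literal counts): the most frequent continuation wins, correct or not.
[code]
# Toy bigram "recall": predict the next word by frequency alone.
# A common-but-wrong pattern in the data beats a rare-but-right one.
from collections import Counter, defaultdict

corpus = [
    "lightning never strikes twice",   # common-but-wrong folk claim
    "lightning never strikes twice",
    "lightning never strikes twice",
    "lightning often strikes twice",   # rare-but-right correction
]

bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

def predict(word):
    # Return the most common continuation ever seen after `word`.
    return bigrams[word].most_common(1)[0][0]

print(predict("lightning"))  # "never" -- the popular wrong answer
[/code]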
The architecture underlying them (high-order hypersurface projection, i.e. fitting a smooth surface through the training points) also handles differentiable problems well, provided they're not too complex, which is why it does well on many science tasks.
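If "hypersurface projection" sounds abstract, here's a toy version of that fitting regime (plain numpy, one hidden layer, all details mine): gradient descent bends a parametrized surface onto a smooth target like sin(x) with no fuss.
[code]
# Tiny one-hidden-layer net fitting a smooth function by gradient descent.
# This is the regime where these architectures shine: differentiable, not too complex.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(x)

W1 = rng.normal(0, 0.5, (1, 32)); b1 = np.zeros(32)   # 32 tanh hidden units
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    h = np.tanh(x @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # the fitted hypersurface
    err = pred - y
    # Backprop through both layers.
    dW2 = h.T @ err / len(x); db2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    dW1 = x.T @ dh / len(x); db1 = dh.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("final mse:", float((err**2).mean()))  # small: smooth targets are easy
[/code]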
LLMs are NOT constraint solvers. They ignore constraints, or rather just treat them as yet more words to pattern match; it's all just tokens to be matched and predicted. If your problem has important constraints in it, then LLMs will fail at it (unless the model is lucky enough to have a worked example with those constraints in its training data).
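For contrast, here's the minimal shape of an actual constraint solver (toy backtracking graph-coloring, my own example): constraints are hard checks that prune the search outright, not tokens to be imitated.
[code]
# What real constraint solving looks like: check every constraint,
# backtrack on violation. Nothing here is frequency-based recall.
# Toy CSP: color a 4-node graph with 3 colors; adjacent nodes must differ.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
colors = ["red", "green", "blue"]

def solve(assignment):
    if len(assignment) == 4:               # all nodes colored: done
        return assignment
    node = len(assignment)                 # nodes assigned in order 0..3
    for c in colors:
        # Hard constraint: no already-colored neighbor shares this color.
        # A violation kills the branch -- it isn't "just more tokens".
        if all(assignment[a] != c for a, b in edges
               if b == node and a in assignment):
            result = solve({**assignment, node: c})
            if result:
                return result
    return None                            # dead end: backtrack

print(solve({}))  # e.g. {0: 'red', 1: 'green', 2: 'blue', 3: 'red'}
[/code]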
Humans can do constraint solving as well as associative processing (though constraint solving is definitely more cognitively taxing; I believe it involves specialized neurons in animal brains). It seems a reasoning system requires both. And probably more, but we don't know what yet; we'll need to build it to find out.