>>16732163This could easily turn into nitpicking about what it means to "do" something, so it kinda is what it is.
What these LLMs are "doing" is predicting the next word to put in a sentence, given an input as context. They can (attempt to) identify an object in an arbitrary image. They can summarize a list of source material. They can regurgitate code they found on tech forums based on what you ask them to do.
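The "predict the next word" part can be sketched with a toy bigram lookup table (this is just the shape of the idea; a real LLM uses learned neural network weights over a huge token vocabulary, not a hardcoded dict):

```python
# Toy sketch of greedy next-word prediction.
# The "model" here is a hypothetical hand-written bigram table,
# not anything resembling an actual trained LLM.
model = {
    "the": {"cat": 0.5, "dog": 0.3, "matrix": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def next_word(words):
    """Pick the most likely next word given the last word of the context."""
    dist = model.get(words[-1], {})
    if not dist:
        return None
    # Greedy decoding: take the argmax of the probability distribution.
    return max(dist, key=dist.get)

def generate(seed, steps=3):
    """Repeatedly append the predicted next word, like an LLM sampling loop."""
    words = [seed]
    for _ in range(steps):
        w = next_word(words)
        if w is None:
            break
        words.append(w)
    return " ".join(words)

print(generate("the"))  # → "the cat sat down"
```

Real models condition on the whole context window and sample from the distribution instead of always taking the argmax, but the loop is the same: score candidates for the next token, pick one, repeat.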
It's about as "intelligent" as your average outsourced tier-one tech support script.