>>941152843
>it could, in principle at least, create new false information.
Naturally, but you're too cynical about it. The goal of GPT (not ChatGPT, but GPT, the actual backbone technology) is to simulate how the human brain forms mental schemas and processes them. For example, a chemist handed a new compound for the first time can, strictly speaking, only "create new false information" about it, but he can use what he knows to infer its properties from similar compounds. The more he knows, the better the guess, but it's still made up. Generative AI is the same. You see this readily in AI art. That's why you can say "Draw a spaceship in the style of X." X may never have drawn a spaceship before, but using cues from X's style and its understanding of what a spaceship is, the model can "hallucinate"/"falsely"/"make up" what one would look like. I personally care more about text than art, but the visual case makes obvious something that is equally true of text.

>what value does deep learning add?
I said it back toward the beginning: assistance. Even expert programmers make accidental errors, obvious or hidden, and code-checking AI has saved people tremendous time by narrowing down the problem, especially in a huge code block (see the sketch below). A researcher with a dataset might not know the best way to explain it in a paper, and an AI can assist with that. A writer might have a scene in mind, but having an art model spit out a dozen sketches of it, letting the physical eye see it, can help him decide what to describe and spark new trains of thought. An expert in one thing isn't an expert in everything, and having a second voice to bounce ideas off or consult is useful in a professional setting.
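
To make the code-review case concrete, here's a minimal sketch assuming the openai Python SDK with an API key already in the environment; the model name, prompt, and planted bug are all just illustrative:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    buggy = '''
    def mean(xs):
        total = 0
        for x in xs:
            total += x
        return total / (len(xs) - 1)  # subtle off-by-one in the divisor
    '''

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are a code reviewer. Point out bugs concisely."},
            {"role": "user", "content": buggy},
        ],
    )
    print(resp.choices[0].message.content)

The point isn't that the model is always right, it's that a second pass like this narrows down where to look.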

I shouldn't have to argue for the merits of a second opinion or perspective. Right now you can uncharitably call it a retard's second opinion, but you cannot claim that this tool won't continue improving and increasing in value.