>>941152828
Indeed, but to correct you, it's not "hallucinated from nothing," it's "vomited back from its training data." One of its inherent faults is that training data can be contradictory.

Take for example a recent stealth /an/ thread on /v/, where someone asked if a venomous snake is immune to venom from its own bites. I was curious too and searched it in a browser. The articles I found were contradictory. Big, long articles you have to sift through about the history of snakes, what venom is, and why snakes have venom, to finally find
>No, a snake is only safe from its own venom while it's stored in specialized venom glands. If its venom enters its bloodstream, the snake will be affected the same as its prey...
and in another article
>Due to the prominence of accidental bites, snakes have evolved to be unaffected by their own brand of venom...
A dozen pages, a dozen conflicting answers with long justifications. Now when that junk gets fed into an AI, which set of data is it giving back, and how accurate was that training data in the first place? This is just a real example of "AI answers aren't perfect, but today, search engines are less perfect."

For the purpose of this post, I threw the question into a small local model, and I find it funny that it gives a reasonable take on both sides and points out that different snake species handle it differently, unlike trying to answer the question from a browser search.
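
If anyone wants to try the same thing with their own local model, here's a minimal sketch; it assumes an Ollama server running on the default port, and the model name is just a placeholder for whatever you have pulled, not what I actually ran:

import requests

# Ask a locally hosted model the same question (Ollama's /api/generate endpoint).
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",  # example model name; substitute your own
        "prompt": "Is a venomous snake immune to venom from its own bites?",
        "stream": False,  # return one complete answer instead of streaming tokens
    },
)
resp.raise_for_status()
print(resp.json()["response"])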