>>24502526
>I can see empirically when it's right and when it's wrong
One of the major problems with AI is the convincing hallucinations that can be peppered into a large text; unless you are fact-checking every single thing, you can't be expected to know what is empirically true or false, as that would imply your knowledge is infinite. The more advanced, obscure, or novel a topic gets, and the more text there is in the context window, the more likely it is that massive errors appear or that small errors build up to a critical mass. Even when the errors are minor, you can slowly build up your knowledge of the world from a collection of convincing yet erroneous facts, and when a larger delusional error aligns with a constellation of these minor misunderstandings, you will be much more likely to think it's reasonable. From your perspective, the subtle ways that everything is just ever-so-slightly off line up perfectly with the greater delusion, so it seems fairly sensible. And some of those really deluded facts are extremely difficult to verify, especially on novel topics.
If you really are in compsci, then verifying AI is simpler in directly programming/engineering-related topics, because you'll know something is going wrong as soon as you type
>g++
But when it comes to literary critique, philosophy, and obscure or novel topics, AI can be inaccurate in so many ways, and the topics can be so difficult for it, that the AI itself becomes kind of useless.
With that said, AI should never be used as a source of information or as a replacement for educating yourself by reading and contemplating a text. It's best used for quick pointers and for looking up ideas and terms that you can verify yourself quickly, but NOT for explanations of passages or texts. The best use for AI by far is general pointers, getting a normal person's perspective on something (since most of the training data is from midwit redditors), and especially fun stuff and banter like you have with your flaubert AI.