>>23119484
>Sure they can keep up the act pretty well at lower context/low word count, but as the conversation goes on or becomes more complex they’ll eventually start slipping up in ways a human doesn’t normally do. Little cues here and there like incorrectly recalling events, changing opinions suddenly, or reacting to incoming information out-of-character.
i think this happens when i talk in those threads, but i haven't done it for very long
>>23119484
>This is, relatively speaking, a fairly low-entropy environment. Now imagine that you are attempting to simulate an image board with multiple AI characters interacting not only with each other but with events on a live show in real time, while adhering to 4chan format (something I doubt there’s a lot of training data on, since I’ve never been able to successfully replicate it with an LLM).
the important real-time events could be transcribed/written and fed to the system by one paid guy, possibly
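the plumbing for that isn't exotic either. here's a minimal sketch of what i mean, assuming one human transcriber appending event lines and a stubbed generate() standing in for whatever model backend; every name in it is made up for illustration, not how any real setup does it

# minimal sketch: one human transcriber pushes event lines, each AI "poster"
# gets the running transcript plus its persona and replies in 4chan format.
# generate() is a placeholder, not a real API call.
from dataclasses import dataclass, field

@dataclass
class Poster:
    name: str                                     # anon persona name
    persona: str                                  # short character description
    memory: list = field(default_factory=list)    # that character's past posts

def generate(prompt: str) -> str:
    # stub standing in for the actual model backend
    return ">implying the feed isn't scripted"

def build_prompt(poster: Poster, transcript: list[str], thread: list[str]) -> str:
    return (
        f"You are {poster.name}. {poster.persona}\n"
        "Reply in 4chan style: lowercase, greentext quotes with >.\n\n"
        "live show transcript so far:\n" + "\n".join(transcript[-20:]) + "\n\n"
        "thread so far:\n" + "\n".join(thread[-20:]) + "\n\nyour post:"
    )

transcript: list[str] = []   # the one paid guy appends lines here
thread: list[str] = []       # the fake board
posters = [Poster("doomer_anon", "convinced everything is staged")]

# one transcriber update -> each character gets a chance to post
transcript.append("sam walks off set mid-sentence")
for p in posters:
    post = generate(build_prompt(p, transcript, thread))
    thread.append(f"{p.name}: {post}")
    p.memory.append(post)

print("\n".join(thread))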
>I don’t think it’s impossible by any stretch with some of these new unreleased models and their touted capabilities, but I think it’s highly unlikely those resources would be wasted on something as silly as Sam Hyde’s Big Brother clone.
it's not about new and unreleased models, or even LLMs
it's about neural networks in general
sentiment analysis programs have been around for over six years, same with computer vision
LLMs are a single parlor trick of neural networks, which are fairly versatile and useful in their place
everyone is so distracted by the consumer models and what nerfed LLMs can do
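to be concrete about what i mean by "parlor trick": a sentiment classifier doesn't need a giant language model at all. here's a toy bag-of-words net in plain numpy, in the spirit of the from-scratch stuff sentdex builds; the tiny dataset and every name here is invented just to show the shape of it

# toy sentiment classifier: bag-of-words + one hidden layer, numpy only.
# nothing here is an LLM; small nets like this have been doing sentiment
# analysis for years.
import numpy as np

texts = ["this show is great", "i love this stream", "absolute garbage",
         "this is terrible", "great episode tonight", "i hate this"]
labels = np.array([1, 1, 0, 0, 1, 0])            # 1 = positive, 0 = negative

vocab = sorted({w for t in texts for w in t.split()})
def vectorize(text):
    # bag-of-words: count how often each vocab word appears
    counts = np.zeros(len(vocab))
    for w in text.split():
        if w in vocab:
            counts[vocab.index(w)] += 1
    return counts

X = np.array([vectorize(t) for t in texts])
y = labels.reshape(-1, 1)

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.5, (len(vocab), 8))         # input -> hidden
W2 = rng.normal(0, 0.5, (8, 1))                  # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(2000):                            # plain gradient descent
    h = np.tanh(X @ W1)
    p = sigmoid(h @ W2)
    grad_out = p - y                             # gradient of BCE loss wrt logits
    grad_h = (grad_out @ W2.T) * (1 - h**2)      # backprop through tanh
    W2 -= 0.1 * h.T @ grad_out / len(X)
    W1 -= 0.1 * X.T @ grad_h / len(X)

# should lean toward 1 (positive) after training
print(sigmoid(np.tanh(vectorize("this is great") @ W1) @ W2))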
you should check out the youtube channel sentdex and look at some of the older videos, if you're interested
he also wrote a book on neural networks
all the modern "AI explained" videos are so full of slop
his videos are basically like cooking videos, it's one mildly autistic dude just building things and explaining his thought process
the guy who made the modified baritone AI for 2B2T was cool as well, that was also over 5 years ago