>>23119409
IDK if I believe that, AI is VERY hard to train to act realistically human for a long period of time. Maybe it's just me, but I've been playing with LLMs since they started becoming public, and while I'm no AI expert by any means I have a very good feel for them due to the amount of hours I've spent playing with them. The hardest human factors to replicate with an AI are consistency and memory. Think of it like you're an investigator doing a police interrogation, but the person you're talking with is lying and can't really keep their story straight. It's the same sort of feeling talking with an LLM that's attempting to replicate a character/person. Sure, they can keep up the act pretty well at lower context/low word count, but as the conversation goes on or becomes more complex they'll eventually start slipping up in ways a human normally doesn't. Little cues here and there, like incorrectly recalling events, changing opinions suddenly, or reacting to incoming information out of character. LLMs seem to struggle in particular with numbers and human biology. These are all errors I've noticed with chatbot/story-writing LLMs, usually involving only myself, the AI character, and sometimes another character. That is, relatively speaking, a fairly low-entropy environment.
Now imagine you're attempting to simulate an image board with multiple AI characters reacting not only to each other but to events on a live show in real time, all while adhering to 4chan format (something I doubt there's a lot of training data on, since I've never been able to successfully replicate it with an LLM).
I don't think it's impossible by any stretch with some of these new unreleased models and their touted capabilities, but I think it's highly unlikely those resources would be wasted on something as silly as Sam Hyde's Big Brother clone.