>>106582612
Funny seeing you post exactly what I was thinking today. I burned 10 bucks on Opus this weekend to see what the hype was all about and came away thoroughly disappointed. The best I can say about Opus is that it tones down a lot of the common AI-isms (em dashes, comma qualifiers).
It makes sense though. Even the best LLMs still suck ass at simple logic puzzles like figuring out table seats from clues. LLMs are very good at spitting out smart-sounding text that fits the prompt, but they don't do any actual logical "model-based" thinking. (Smart) readers and (good) authors, on the other hand, think about plot and characters logically: cause and effect, high and low probability, nature and nurture, etc.
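To be clear about what I mean by "model-based": a seating puzzle like that is trivial if you actually build the model, i.e. enumerate the arrangements and check each one against the clues. Quick toy sketch (puzzle, names and clues are made up by me, just to show the idea):

from itertools import permutations

# Made-up puzzle: three people in a row of three seats.
# Clue 1: Alice is not in the leftmost seat.
# Clue 2: Bob sits immediately to the right of Carol.
people = ["Alice", "Bob", "Carol"]
for seats in permutations(people):
    alice_ok = seats[0] != "Alice"                           # clue 1
    bob_ok = seats.index("Bob") == seats.index("Carol") + 1  # clue 2
    if alice_ok and bob_ok:
        print(seats)  # prints every arrangement consistent with all clues

Six possible arrangements, one survives the clues. That's the whole "reasoning" step, and an LLM predicting the next token never actually does it; it just pattern-matches to what a solution usually sounds like.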
This is why any roleplay with LLMs eventually falls apart without significant handholding. LLMs are like midwits who can parrot smart-sounding responses to most questions but fall apart as soon as they encounter a problem that requires actual understanding.
Until a new generation of AIs that combines LLMs with causal/world models comes out, this won't change. That said, they are very good at mimicking. Just don't expect them to think independently.