>>107100705
the larger the context, the more trouble LLMs have following all your instructions with pristine precision
in a way the chat history IS the prompt, so if there are constant mentions of user doing something, it will unavoidably seep into the llm's behavior
even if you tell it to "not say anything for {{user}}", it may still speak for you, because it's following its instruction to write a *story*, and a story includes all its characters

tl;dr deal with it and/or stamp it out with extreme prejudice because if it happens once it will likely happen again (and again and again...)
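if you want to automate the stamping-out, the crude option is a post-filter that chops the reply at the first line where the model starts speaking or acting as you. minimal sketch, not from any particular frontend, function name and name-matching heuristic are my own assumptions:

```python
import re

def trim_impersonation(reply: str, user_name: str) -> str:
    """Cut the model's reply at the first line that opens with the
    user's name (e.g. "Anon:" or "*Anon walks in"). Hypothetical
    helper, not any frontend's built-in API."""
    # Match a line starting with the user's name, optionally preceded
    # by an asterisk (action markup) and followed by ':' or ','
    pattern = re.compile(
        rf"^\s*\*?{re.escape(user_name)}\b[:,]?", re.MULTILINE
    )
    m = pattern.search(reply)
    # Keep everything before the impersonating line, or the whole
    # reply if the model behaved
    return reply[: m.start()].rstrip() if m else reply
```

it's blunt (it'll also eat legit narration that happens to open a line with your name), but blunt is kind of the point here.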