>>106331368
every time you prompt an LLM, it sharts an output and then fades into the void
the next time you prompt it, an entirely new instance of it spawns into existence and has to read everything from scratch: your profile, its profile, its directives, its preset, its example dialogue, its stat sheets, any lore entries triggered by keywords in the recent chat, and however much history fits in your context allotment - and *then* it gives you your output
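
in frontend terms it's something like this every single turn (rough sketch, all the names here are made up, not any actual frontend's api):

[code]
# every turn the frontend rebuilds the ENTIRE prompt from scratch,
# because the model remembers nothing between calls
def build_prompt(directives: str,
                 char_profile: str,
                 user_profile: str,
                 example_dialogue: str,
                 lore_entries: list[str],
                 chat_history: list[str]) -> str:
    parts = [directives, char_profile, user_profile, example_dialogue]
    parts += lore_entries          # keyword-triggered world info
    parts += chat_history          # as much as the context allotment allows
    return "\n\n".join(p for p in parts if p)
[/code]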

you don't want to shitblast 15k tokens of prompt just to get the next two paragraphs about abe lincoln's butthole, because your narrator is stuck rereading irrelevant gooch diameters from arkansas every single turn
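
which is the whole reason keyword scans and history trimming exist - only inject lore whose keywords actually appear in recent messages, then evict the oldest turns until the prompt fits the budget. roughly (again, made-up names, and real frontends count with the model's actual tokenizer, not word count):

[code]
def approx_tokens(text: str) -> int:
    return len(text.split())       # stand-in for a real tokenizer

def trim_to_budget(fixed: str,                 # profiles/directives/preset
                   lore: dict[str, str],       # keyword -> entry text
                   history: list[str],
                   budget: int = 15_000) -> str:
    recent = " ".join(history[-4:]).lower()    # scan window: last 4 messages
    relevant = [txt for kw, txt in lore.items() if kw.lower() in recent]
    kept = list(history)
    def total() -> int:
        return approx_tokens("\n\n".join([fixed, *relevant, *kept]))
    while kept and total() > budget:
        kept.pop(0)                            # evict oldest turns first
    return "\n\n".join([fixed, *relevant, *kept])
[/code]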