>>106867348
I'm assuming this is done already; the model would be hella retarded if you trained it on lots of samples sharing near-identical initial token sequences
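Something like this minimal sketch of prefix dedup, assuming samples come pre-tokenized as lists of token ids (function and parameter names are hypothetical):

def dedup_by_prefix(samples, prefix_len=64):
    """Keep only the first sample seen for each distinct token prefix."""
    seen, kept = set(), []
    for toks in samples:
        key = tuple(toks[:prefix_len])  # first prefix_len token ids as dedup key
        if key not in seen:
            seen.add(key)
            kept.append(toks)
    return kept

In practice you'd want fuzzy matching (minhash etc.) rather than exact prefixes, but exact dedup already kills the worst offenders.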
Don't get the desire for long context beyond coding/agent shiz, 16K is enough (with thinking enabled, GLM-4.6 btw)
>>106867364
Maybe time to properly consider a GTFO plan
Poast your oldest lmg memes