>>544702492
First off, no, they are not training their own model. They are creating a finetune, as their own devs state, which is like training a LoRA for imagegen: it won't be anywhere near what they could have done by training their own model from scratch, and they simply won't do that given how long the last one took and the costs associated with it. How about you grow a brain cell and get your own story straight.
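For the peanut gallery, here is roughly what "a finetune" means in practice: you freeze the base weights and bolt small adapter matrices on top. A minimal sketch, assuming the Hugging Face transformers + peft stack; the base model name, rank, and target modules are placeholders, not anything their devs have published.

```python
# Minimal LoRA-style finetune setup. Model name and hyperparameters are
# illustrative placeholders, not the actual config anyone is shipping.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

lora_cfg = LoraConfig(
    r=16,                                  # low-rank adapter size, tiny next to the base weights
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],   # only attention projections get adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()         # typically well under 1% of the base model's parameters
```

Point being: the base model's knowledge and limits stay what they were; you're nudging a sliver of extra weights, not building a new model.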
>Every model performance go to shit after 28k. The devs choose this optimal size for a reason.
No it doesn't, you cope baby. That was a limitation on earlier models, but it's been years. Even back when novelai had the magnum opus of their own trained models, finetunes of Mistral existed for self-hosting that could handle context sizes up to 84k tokens out of the box, pass the needle-in-a-haystack test, and recall exactly where the needle was dropped in their context pool. You are genuinely full of shit. The best model they provide isn't even theirs, and they aren't even giving it to you at max context. Why? Because they literally can't afford to. After years of bitching they finally increased the context size, and only just now, and it isn't even additional context: it's just 28,672 tokens with an 8,192-token rollover, because giving you actually more is just something they can't afford.
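And since some of you clearly have never run one: a needle-in-a-haystack test is just burying a known fact at a known depth inside a long filler context, asking the model to retrieve it, and checking the answer. A minimal sketch below, assuming an OpenAI-compatible completion endpoint on localhost (the URL, model name, and filler size are placeholders, adjust for whatever you self-host).

```python
# Bare-bones needle-in-a-haystack check against a local OpenAI-compatible
# endpoint. URL, model name, and haystack size are assumptions/placeholders.
import requests

NEEDLE = "The secret passphrase is 'violet-kumquat-42'."
FILLER = "The sky was grey and nothing of note happened that day. " * 4000  # tens of thousands of tokens of haystack

def build_prompt(depth: float) -> str:
    """Insert the needle at a fractional depth (0.0 = start, 1.0 = end) of the filler."""
    cut = int(len(FILLER) * depth)
    return (FILLER[:cut] + "\n" + NEEDLE + "\n" + FILLER[cut:]
            + "\n\nQuestion: What is the secret passphrase? Answer with the passphrase only.")

for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
    resp = requests.post(
        "http://localhost:5000/v1/completions",
        json={"model": "local-model", "prompt": build_prompt(depth),
              "max_tokens": 32, "temperature": 0.0},
        timeout=300,
    )
    answer = resp.json()["choices"][0]["text"]
    print(f"depth={depth:.2f} recalled={'violet-kumquat-42' in answer} -> {answer.strip()!r}")
```

Run that against a decent self-hosted long-context finetune and it recalls the needle at every depth. That's the bar, and it was cleared ages ago.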