>>106869842
done, it shat the bed at 4k but i saw that it spilled out of vram so that's to be expected
well, at least i learned that my batch sizes were suboptimal because i forgot to set them in my regular llama-server script, so thanks for the tip
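for anyone else who forgot: a rough sketch of what i'm setting in my launch script now. flags are the standard llama-server ones, but the model path and values are just example placeholders, tune them to your own vram.

# example llama-server launch (hypothetical model path and sizes)
llama-server \
  -m /models/your-model.gguf \
  -c 8192 \
  -ngl 99 \
  -b 2048 \
  -ub 512
# -c    context size
# -ngl  layers offloaded to gpu
# -b    logical batch size (default 2048)
# -ub   physical batch size (default 512), drop it if you're tight on vram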