>>106125354
I think there's no harm in trying? FWIW I'm running into VRAM limits even with quantized models on 24GB.
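For reference, here's the napkin math on why 24GB gets tight — the function and all the model numbers are my own assumptions (70B-class, ~4.8 bits/weight quant, 8k context, Llama-2-70B-ish layer/KV shapes), just to show the scale:

[code]
# Back-of-envelope VRAM estimate for a quantized LLM.
# All numbers below are rough assumptions, not measured values.

def vram_gb(params_b: float, bits_per_weight: float,
            ctx: int, n_layers: int, kv_dim: int) -> float:
    """Weights + fp16 KV cache, ignoring activation/runtime overhead."""
    weights = params_b * 1e9 * bits_per_weight / 8   # bytes for quantized weights
    kv_cache = 2 * n_layers * ctx * kv_dim * 2       # K and V, fp16 (2 bytes each)
    return (weights + kv_cache) / 1e9

# Hypothetical 70B model at ~4.8 effective bits/weight, 8192 context,
# 80 layers, 8 KV heads x 128 head dim (GQA), so kv_dim = 1024.
print(f"{vram_gb(70, 4.8, 8192, 80, 8 * 128):.1f} GB")  # ~44.7 GB, way over 24 GB
[/code]

Even before runtime overhead, the weights alone at that quant are ~42GB, so you'd need to offload or drop to a much smaller model.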