Can you use quantized models for LoRA training? I use Wan 2.2 fp8 for inference, so is training a LoRA directly on that fp8 checkpoint okay, or not?
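
To be concrete about what I mean, here's a rough sketch of the setup I'm picturing: the base weights sit frozen in fp8 while the trainable LoRA matrices stay in bf16. This is just a toy PyTorch illustration with made-up names, shapes, and hyperparameters, not an actual Wan 2.2 training script.

```python
# Toy sketch (not a real Wan 2.2 training script): the frozen base weight is
# stored in fp8, the trainable LoRA A/B matrices stay in bf16. All names,
# shapes, and hyperparameters here are made up for illustration.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base_weight_fp8: torch.Tensor, rank: int = 16, alpha: float = 16.0):
        super().__init__()
        # Frozen quantized base weight -- registered as a buffer, so no gradients.
        self.register_buffer("base_weight", base_weight_fp8)
        out_f, in_f = base_weight_fp8.shape
        # Trainable low-rank adapters, kept in bf16.
        self.lora_a = nn.Parameter(torch.randn(rank, in_f, dtype=torch.bfloat16) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_f, rank, dtype=torch.bfloat16))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Dequantize the fp8 base weight for the matmul; only the LoRA path trains.
        w = self.base_weight.to(x.dtype)
        return x @ w.T + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

# Example: a 64x64 weight cast to fp8 (requires a PyTorch build with float8 support).
w_fp8 = torch.randn(64, 64).to(torch.float8_e4m3fn)
layer = LoRALinear(w_fp8)
out = layer(torch.randn(2, 64, dtype=torch.bfloat16))
print(out.shape)  # torch.Size([2, 64])
```

That's the gist of what I'm asking about, in case it changes the answer: the fp8 weights would only ever be used in the frozen forward pass, never updated directly.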