Anonymous
8/14/2025, 1:05:30 AM
No.8690534
>>8690522
Just looked at VRAM usage, it spiked at 17 GB. Seems like Comfy caches all three models for me, which is odd. It shouldn't really take any longer than usual or need more VRAM, though. The LoRA is reapplied each step during training, so this operation shouldn't take long at all.
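The "reapplied each step" point can be sketched roughly like this: a LoRA is just a low-rank delta added onto a frozen base weight, so rebuilding the merged weight each training step is one small matmul plus an add, which is cheap next to the forward/backward pass. A minimal NumPy sketch (all names and shapes here are illustrative, not ComfyUI internals):

```python
import numpy as np

def apply_lora(weight, lora_a, lora_b, alpha=1.0):
    # W' = W + alpha * (B @ A): add the low-rank delta onto the frozen base.
    return weight + alpha * (lora_b @ lora_a)

rng = np.random.default_rng(0)
base = np.zeros((8, 8))          # frozen base weight (hypothetical 8x8 layer)
A = rng.standard_normal((2, 8))  # rank-2 LoRA factors (the trainable part)
B = rng.standard_normal((8, 2))

# "Reapplied each step": after every optimizer update to A and B, the merged
# weight is recomputed from the unchanged base plus the new delta.
for step in range(3):
    merged = apply_lora(base, A, B)
```

Since only A and B change between steps, the base model never has to be reloaded, which is consistent with the reapply step adding negligible time.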