Search results for "1f108d20a6cc6d8574124d2d30b5266e" in md5 (8)

/g/ - /ldg/ - Local Diffusion General
Anonymous No.106360321
>>106360308
>Nunchaku
>better quality than gguf
lmao, what gguf? anything below q8 is a meme, svdquant included
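[For context: SVDQuant, the method behind Nunchaku, keeps a small low-rank slice of each weight matrix in 16-bit and quantizes the residual to 4-bit, which is why it counts as "below q8" here. A minimal sketch of that decomposition; the rank and the naive symmetric int4 quantizer are illustrative choices, not the actual Nunchaku kernels:]

```python
import torch

def svdquant_split(W: torch.Tensor, rank: int = 32):
    """Split a weight matrix into a 16-bit low-rank part plus a
    4-bit-quantized residual, in the spirit of SVDQuant."""
    U, S, Vh = torch.linalg.svd(W.float(), full_matrices=False)
    # Low-rank branch absorbs the largest singular directions (outliers).
    L = (U[:, :rank] * S[:rank]) @ Vh[:rank, :]
    R = W.float() - L
    # Naive per-tensor symmetric 4-bit quantization of the residual.
    scale = R.abs().max() / 7.0              # int4 symmetric range: [-7, 7]
    q = torch.clamp((R / scale).round(), -7, 7)
    return L.half(), q.to(torch.int8), scale  # int8 as carrier for 4-bit codes

W = torch.randn(512, 512)
L, q, s = svdquant_split(W)
W_hat = L.float() + q.float() * s
print((W - W_hat).abs().mean())  # reconstruction error of the split
```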
/g/ - /ldg/ - Local Diffusion General
Anonymous No.106320654
>>106320624
Because 40 and 50 series copers need to delude themselves that the $2-8k they spent on a single GPU was worth it for that fp8 support and thus 2x speed at "minimal quality loss"
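["fp8 scaled" throughout this thread means the checkpoint stores weights in fp8 plus a separate scale so values use the full e4m3 range, rather than a straight cast. A minimal per-tensor sketch, assuming PyTorch 2.1+ for the float8_e4m3fn dtype; real checkpoints may scale per-channel instead:]

```python
import torch

def to_scaled_fp8(w: torch.Tensor):
    """Per-tensor scaled fp8 cast: rescale so the max magnitude maps to
    the e4m3fn max (448), store fp8 weights plus one fp32 scale."""
    scale = w.abs().max().float() / 448.0
    w_fp8 = (w.float() / scale).to(torch.float8_e4m3fn)
    return w_fp8, scale

def from_scaled_fp8(w_fp8: torch.Tensor, scale: torch.Tensor):
    # Dequantize back to fp16 at load/compute time.
    return (w_fp8.float() * scale).half()

w = torch.randn(4096, 4096, dtype=torch.float16)
w_fp8, s = to_scaled_fp8(w)
err = (w - from_scaled_fp8(w_fp8, s)).abs().mean()
print(f"mean abs error: {err:.5f}")
```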
/g/ - /ldg/ - Local Diffusion General
Anonymous No.106310355
>>106310330
No reason not to use the fp16 encoder since it's not held in VRAM, and there is a difference.

Picrel are model quants, not text encoder quants, but they show you the difference: minimal between Q8 and fp16, and everything else is noticeably worse.
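[Back-of-envelope memory arithmetic behind the Q8-vs-fp16 tradeoff. GGUF Q8_0 stores blocks of 32 int8 weights plus one fp16 scale (~8.5 bits/weight); the 14B parameter count is illustrative:]

```python
# Weights-only VRAM estimate, ignoring activations and overhead.
params = 14e9  # illustrative model size

bytes_per_weight = {
    "fp16": 2.0,
    "fp8 (scaled)": 1.0,    # plus negligible per-tensor scales
    "gguf Q8_0": 34 / 32,   # 32 int8 weights + 2-byte fp16 scale per block
    "gguf Q4_0": 18 / 32,   # 16 bytes of 4-bit weights + 2-byte scale per block
}

for name, bpw in bytes_per_weight.items():
    print(f"{name:14s} ~{params * bpw / 2**30:6.1f} GiB")
```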
/g/ - /ldg/ - Local Diffusion General
Anonymous No.106275577
>>106273510
>gguf was always shit, just use regular quantization that is scaled, just as good quality and both faster and supported natively in Comfy
oh no no no
/g/ - /ldg/ - Local Diffusion General
Anonymous No.106249212
>>106249016
/g/ - /ldg/ - Local Diffusion General
Anonymous No.106227524
>wan2.2 rentry guide
>fp8 scaled instead of q8
Yeah, into the trash it goes
/g/ - /ldg/ - Local Diffusion General
Anonymous No.106211052
>>106211026
>buying a 4x or even more expensive GPU to get shittier results than Q8

https://www.reddit.com/r/StableDiffusion/comments/1gc0wj8/sd35_large_fp8_scaled_vs_sd_35_large_q8_0_running/
/g/ - /ldg/ - Local Diffusion General
Anonymous No.106195630
>>106195486
>>106195509
>>106195455
Seems like scaled still takes a hit, but might be ok depending on what you want to gen

https://www.reddit.com/r/StableDiffusion/comments/1gc0wj8/sd35_large_fp8_scaled_vs_sd_35_large_q8_0_running/
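[A quick way to check whether scaled fp8 "takes a hit" for your own use case is to gen the same prompt with both quants and diff the outputs. A minimal sketch, assuming you already rendered both images with identical prompt, seed, sampler, and steps; the file names are placeholders:]

```python
import numpy as np
from PIL import Image

# Hypothetical file names: same workflow, only the model quant changed.
a = np.asarray(Image.open("gen_q8.png"), dtype=np.float64)
b = np.asarray(Image.open("gen_fp8_scaled.png"), dtype=np.float64)

mse = ((a - b) ** 2).mean()
psnr = 10 * np.log10(255.0**2 / mse) if mse > 0 else float("inf")
print(f"MSE: {mse:.2f}  PSNR: {psnr:.2f} dB")
```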