Anonymous
8/27/2025, 12:08:11 PM
No.106400027
So I did a comparison of the FP16 and the Kijai FP8 scaled wan2.2 models. Beyond knowing that FP16 and FP8 refer to the number of bits used to store each float, and therefore the precision of the weights, I don't really know what effect that has on the resulting generation. Some minor details in the FP16 output do look better imo, but my only other takeaway from the comparison is that the FP16 run took almost 3 times as long.
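For anyone curious what the bit counts actually mean: here's a rough sketch of the precision difference, ignoring exponent range, overflow, and the per-tensor scale factor that "scaled" FP8 checkpoints apply. FP16 stores 10 mantissa bits, FP8 (e4m3) stores 3, and the `quantize` helper below just rounds a value to that many mantissa bits; the weight value is made up for illustration.

```python
import math

def quantize(x: float, mantissa_bits: int) -> float:
    """Round x to the nearest value representable with the given
    number of stored mantissa bits (10 for FP16, 3 for FP8 e4m3).
    Exponent range and overflow are ignored -- this shows precision only."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)            # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 1 << mantissa_bits      # 2**mantissa_bits steps per binade
    return round(m * 2 * scale) / scale * 2.0 ** (e - 1)

w = 0.123456789                     # hypothetical model weight
print(quantize(w, 10))  # FP16-like: ~0.1234741, off by ~2e-5
print(quantize(w, 3))   # FP8-like:   0.125,     off by ~1.5e-3
```

So per weight the FP8 rounding error here is roughly 100x larger, which is why you mostly see it as small detail differences rather than a broken image: the network averages over millions of slightly-wrong weights, and the scaled variants pick a per-tensor scale to keep those errors small relative to each tensor's actual range.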