>>8681484
dude, read what it says about sdxl:
>I found how to train and run SDXL in my hardware, so started a SDXL section here.
>There seem to be some limitations about training the text encoder. At the moment training will not start if learning rate for TE is not equal to zero.
>Training on SDXL 1.0 and using an anime model to generate gave better results than generating from the same anime model and I cannot explain why.
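(fwiw all that "lr for TE not equal to zero" clause means is that the text encoders have to stay frozen and only the unet lora params ever reach the optimizer. toy sketch with dummy modules standing in for the real TEs/unet, not kohya's actual code:)

import torch
from torch import nn

# dummy stand-ins for the two sdxl text encoders and the unet lora params
text_encoders = nn.ModuleList([nn.Linear(8, 8), nn.Linear(8, 8)])
unet_lora_params = nn.Linear(8, 8)

# "TE lr = 0" in practice: freeze the text encoders...
text_encoders.requires_grad_(False)
# ...and only hand the unet(-lora) params to the optimizer
optimizer = torch.optim.AdamW(unet_lora_params.parameters(), lr=1e-4)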
and the rest was written for the nai leak. it explains nothing about vpred, and most of it is bullshit that isn't correct even for 1.5.
>AnyLoRA is a new checkpoint designed to train from.
>The effects of min_snr_gamma seem a bit more interesting when using Prodigy, it seems to become a sort of multiplicative scale for the dynamic learning rate. Lower values will have lower loss (and therefore learn more aggressively), while higher values have higher loss (less resemblance to training set).
>flip_aug Randomly flips images horizontally. Useful anytime, unless your character is heavily asymmetrical. Like that guy from Street Fighter III.
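and that min_snr_gamma bit is exactly the kind of wrong advice i mean: it's just min-snr loss weighting (hang et al. 2023), a per-timestep clamp on the snr weight of the loss, it has nothing to do with prodigy's learning rate. rough sketch of the weighting (not kohya's exact code, and note the v-pred variant differs):

import torch

def min_snr_gamma_weight(snr: torch.Tensor, gamma: float, v_prediction: bool = False) -> torch.Tensor:
    # clamp the per-timestep SNR at gamma, then normalize; this reweights the loss
    # per noise level, it is not a multiplier on the optimizer's learning rate
    clamped = torch.minimum(snr, torch.full_like(snr, gamma))
    if v_prediction:
        # v-prediction variant divides by snr + 1 instead of snr
        return clamped / (snr + 1.0)
    return clamped / snr

# high-snr (low-noise) timesteps get downweighted, low-snr ones keep weight ~1
snr = torch.tensor([0.05, 1.0, 5.0, 50.0])
print(min_snr_gamma_weight(snr, gamma=5.0))  # tensor([1.0000, 1.0000, 1.0000, 0.1000])

lower gamma just clamps more of the schedule harder, which is why people see lower loss with it, not because prodigy "scales its lr".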
go rewrite it if you will. like i said, it doesn't deserve to be in the op. it's completely outdated and full of wrong advice, and its only distinctive quality is that it indeed works "just as well" as it did in 2023, i.e. like shit. it wastes everyone's time, and the people pushing to keep it in the op are actively harming any productive discussion (likely trolls hiding behind the "it worked back in the day!" banner, or genuine brainlets).