Search Results

Found 1 result for "741afa9f23a4d8e61a7561dfa4545d1a" across all boards, searching by MD5.

Anonymous /g/106127425#106131072
8/3/2025, 11:51:39 PM
>>106131008
lora rank (the network dimension, paired with alpha) roughly sets how much capacity the lora has (usually how much detail it learns)
most people who make loras base their rank on advice carried over from sd15/sdxl, which basically said "use rank 32 minimum, better 64/128 or higher!" because sd15/sdxl are "small" models (that's why you see those 400mb-1gb+ loras for xl)
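to put a number on that size claim, here's a quick back-of-envelope in python. it's a sketch with made-up layer counts and dims (not real sd15/sdxl/chroma shapes); the point is just that lora size scales linearly with rank:

```python
# rough lora size estimate: each adapted linear layer of shape
# (out_dim, in_dim) adds rank * (in_dim + out_dim) parameters
# (the two low-rank factor matrices). dims below are stand-ins.

def lora_megabytes(rank, layers, bytes_per_param=2):  # 2 bytes = fp16/bf16
    params = sum(rank * (i + o) for i, o in layers)
    return params * bytes_per_param / 1e6

layers = [(4096, 4096)] * 100  # pretend model: 100 adapted 4096x4096 layers

for rank in (1, 2, 4, 8, 32, 64, 128):
    print(f"rank {rank:>3}: ~{lora_megabytes(rank, layers):.0f} MB")
```

halve the rank and the file halves too, which is why dropping from 64/128 down to 1-8 shrinks things so dramatically.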
flux and chroma (and other current models) are huge in comparison, so some people have been experimenting with super small ranks (8, 4, 2 or even 1) on chroma for the low cfg/low step variants. these tiny loras have turned out nearly as good (or just as good) as higher rank, larger loras in terms of likeness, detail and flexibility, and they work on both the low cfg/step variants and the large/base/etc variants of chroma.
so bottom line - you can make super small loras from almost any number of sources (and sources of almost any size, since you can train at 512), use them across the chroma variants (and some flux models too), and still end up with a good lora
i used my regular sources and settings except for the rank, bumped up the learning rate, ran 2k steps, and compared it to my last lora, which was rank 64. it's just as good, only needing some tweaks in the nodes, but i do that for all of them anyway.
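for reference, a minimal sketch of that kind of recipe as a python dict. the key names loosely follow kohya-style trainers, and the learning rate value is an assumption, not the poster's actual config:

```python
# hypothetical low-rank chroma lora recipe based on the post above;
# key names are kohya-style stand-ins and the lr is an assumed value
config = {
    "network_dim": 4,         # the "super small" rank being discussed
    "network_alpha": 4,       # matching alpha to rank is a common default
    "learning_rate": 3e-4,    # raised vs. a typical rank-64 run (assumption)
    "max_train_steps": 2000,  # the 2k steps from the post
    "resolution": 512,        # low-res training the post says works fine
}
```

the only settings the post actually pins down are the 2k steps, the 512 training res, and that the rank dropped from 64; everything else here is a plausible stand-in.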