>>936013877
>nta, but them thangs lookin pretty stable to me

It isn't stable. I test my LoRAs at different strengths and with different models, and there was overfitting, probably caused by some images differing too much in resolution or by missing captions. Or maybe some hyperparameters needed adjusting.
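For the missing-caption issue, a quick sanity check is easy to script. This is a minimal sketch assuming a kohya-style dataset layout where each image sits next to a same-name .txt caption file (the folder convention and extension list are assumptions, adjust for your trainer):

```python
from pathlib import Path

# Common image extensions in LoRA training sets (adjust as needed)
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def find_missing_captions(dataset_dir):
    """Return image paths that have no same-name .txt caption next to them."""
    root = Path(dataset_dir)
    missing = []
    for img in sorted(root.rglob("*")):
        if img.suffix.lower() in IMAGE_EXTS and not img.with_suffix(".txt").exists():
            missing.append(img)
    return missing
```

Run it over the dataset folder before training; any path it prints is an image the trainer would see with an empty or default caption. Checking resolution spread works the same way but needs an image library like Pillow to read each file's size.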

The LoRA was failing at a strength of 0.9 or 0.8 but working well at 1.0, which isn't normal; a healthy LoRA usually degrades gracefully as you lower the strength instead of breaking.