>>106373187
>Yeah, but that's not really what you're looking for
Like it or not, you're literally memorizing the (latent) dataset: each training step minimizes the difference between the model's prediction and the training latents (or the noise that was added to them). That's why lora sameface happens.
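Rough sketch of what I mean, assuming a diffusers-style scheduler with add_noise (unet, latents, timesteps are stand-ins for whatever your trainer actually uses, not any specific trainer's API):
[code]
import torch
import torch.nn.functional as F

def training_step(unet, latents, timesteps, noise_scheduler):
    # Corrupt the cached training latents with fresh Gaussian noise
    noise = torch.randn_like(latents)
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
    # Ask the model to predict the noise that was added
    noise_pred = unet(noisy_latents, timesteps)
    # Per-step objective: MSE between prediction and the actual noise.
    # Driving this toward zero on a small dataset means reproducing those
    # exact latents, i.e. memorization -> sameface.
    return F.mse_loss(noise_pred, noise)
[/code]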
>it's better than nothing if you don't have access to image training samples, like with Diffusion-Pipe, but that's about it
The loss is the only objective measure you've got of how hard the model is memorizing the training set.
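If you want to actually watch it happen, just log the mean loss per epoch. Same hypothetical pieces as above (num_epochs, dataloader, optimizer are yours):
[code]
import torch

for epoch in range(num_epochs):
    losses = []
    for latents in dataloader:
        # Random timestep per sample, standard diffusion training
        timesteps = torch.randint(
            0, noise_scheduler.config.num_train_timesteps,
            (latents.size(0),), device=latents.device,
        )
        loss = training_step(unet, latents, timesteps, noise_scheduler)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        losses.append(loss.item())
    # A mean loss that keeps shrinking epoch over epoch is the model
    # fitting those exact latents tighter and tighter
    print(f"epoch {epoch}: mean loss {sum(losses) / len(losses):.4f}")
[/code]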
Do you though, anon. I'm gonna go game.