>>3081651

<3<3<3

There is such a thing as overtraining a checkpoint.

At the end of the day, all of the checkpoints we are talking about are based on a few "base" models created by corporate/research entities with large amounts of money to spend and a great deal of computing power.

In our case, the Illustrious model started with LAION-5B (5.85 billion images, according to Google) and added something like 8 million more images from Danbooru.

Small-scale operations like HomoSimile don't have anywhere near the resources to make a new model. So what they are doing is, in a way, sort of training a LoRA INTO the base model to give it a specific style with a small number of images. For example, improving male anatomy, which gets overlooked in favor of the more popular tits and vaginas.
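
To make the "LoRA into the base" idea concrete, here's a minimal numpy sketch of the low-rank update trick behind LoRA. Every name and size here is made up for illustration, this is not anyone's actual training code:

import numpy as np

# Pretend this is one frozen weight matrix from the base model.
d = 512
W_base = np.random.randn(d, d) * 0.02

# LoRA learns two small matrices A (r x d) and B (d x r) with rank r << d,
# so only ~2*d*r numbers get trained instead of d*d.
r = 8
A = np.random.randn(r, d) * 0.01
B = np.random.randn(d, r) * 0.01  # real LoRA inits B to zero; nonzero here so the demo isn't trivial

def forward(x, merged=False):
    if merged:
        # "Baking" the LoRA into the checkpoint: the base weights themselves move.
        W = W_base + B @ A
        return x @ W.T
    # Classic LoRA: base stays frozen, the low-rank path is added on the side.
    return x @ W_base.T + x @ (B @ A).T

x = np.random.randn(1, d)
print(np.allclose(forward(x), forward(x, merged=True)))  # True: same math either way

The point being: the low-rank update can ride alongside the frozen base weights as a separate LoRA file, or get merged straight into them, and the math comes out the same. Merging it in is effectively what these small checkpoints are doing.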

The (relatively) small number of images (a limit of time and money) will then start to outweigh the base, which got its flexibility from the much larger training set. This means the model starts to only output images that look like the smaller set, constricting the whole thing. There is a lot of art in the science of training, and the amount of time a run takes makes it hard to pivot if something goes wrong.
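
A toy way to see the "small set outweighs the base" effect, using a plain linear model instead of a diffusion model (purely illustrative numbers, not a claim about any real checkpoint):

import numpy as np

rng = np.random.default_rng(0)

# "Base" weights: fit on a big, varied dataset.
X_big = rng.normal(size=(5000, 16))
y_big = X_big @ rng.normal(size=16) + rng.normal(scale=0.5, size=5000)
w = np.linalg.lstsq(X_big, y_big, rcond=None)[0]

# Fine-tune on a tiny set drawn from a *different* target (the new "style").
X_small = rng.normal(size=(20, 16))
y_small = X_small @ rng.normal(size=16)

def mse(X, y, w):
    return float(np.mean((X @ w - y) ** 2))

lr = 0.01
for step in range(2001):
    grad = 2 * X_small.T @ (X_small @ w - y_small) / len(y_small)
    w -= lr * grad
    if step % 500 == 0:
        # Small-set loss keeps falling; big-set loss climbs as the base gets forgotten.
        print(step, "small:", round(mse(X_small, y_small, w), 4),
              "big:", round(mse(X_big, y_big, w), 4))

Watch the printout: the fine-tuning loss keeps dropping while the loss on the original big dataset climbs. That is the overtraining/forgetting I'm describing, just in miniature.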

You will notice that as the popular, but small-scale, checkpoints keep trying to improve, they often become less nimble: each round of fine-tuning erodes what the 8 million images taught the base in order to chase the small set they think they want to aim toward.