>>106526797
>since models are being trained on the same datasets and synthslop, even downloading "new" models isn't going to bring back the magic because you're just recycling the same shit
I don't think it's that. After reading the unet creativity paper and going back to occasionally using Coldcut, I think something has genuinely been lost in current models, in both t2t and t2i