Ok, so
I trained the 2000s digital camera LoRA on Qwen Image
https://files.catbox.moe/5ppur2.7z
The results were mixed, hit or miss; you have to do a bit of gacha rolling to get unslopped results, especially if you aren't using negative prompts.
If you prompt even the mildest trigger for slop (like mentioning anime characters, lighting conditions, or anything reminiscent of studio photography), it produces slop.
If anything, this experiment made me value Chroma even more. Yes, Qwen is more detailed, and yes, Qwen has better prompt alignment, but in my opinion it's simply not worth it: it's still strongly biased toward slop depending on the prompt, and in the time it takes to generate one Qwen image you can generate around 4 on Chroma Flash with similar, and at times superior, results.
Please generate images at 50 steps; Qwen produces bad outputs at anything lower.
I can try training again at rank 32 to check whether it unslops further, but this is likely not a model I'd use on a daily basis.
I hope Chroma haters can rethink their positions. Trust me as someone who trains LoRAs (and who can also run beefier models like Qwen): Chroma learns styles easily, trains fast, and is a good all-rounder.