>>29397912
>>29398210
Inspired by WalkAnon, I figured out how to use a refiner step in ComfyUI a few weeks ago.
>pick your models - in this case illustrious for its built-in characters and then hyphoria for its realism
>set up a stable diffusion t2i workflow as usual with an illustrious model
>next to it, set up a second SD workflow with your refiner model, minus its own conditioning (pos/neg) - pipe those over from the first workflow
>from the first workflow, instead of sending the ksampler output to vae decode, send it straight to the second ksampler as latent
>specify total steps to match in both ksamplers
>set the first ksampler to stop partway (end_at_step, return_with_leftover_noise on) and the second to start at the same step (start_at_step, add_noise off) - you'll need KSampler (Advanced) for these options
Adjusting where one stops and the other starts works as a slider for, in this case, how much realism you want.
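If you'd rather script it than wire nodes, here's roughly the same two-sampler latent handoff in Python using the diffusers library (not ComfyUI's own API - checkpoint filenames and the prompt are placeholders, and diffusers expresses the split as a fraction of total denoising rather than a raw step index):

import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

# first "workflow": the character model
base = StableDiffusionXLPipeline.from_single_file(
    "illustrious.safetensors", torch_dtype=torch.float16
).to("cuda")
# second "workflow": the realism refiner
refiner = StableDiffusionXLImg2ImgPipeline.from_single_file(
    "hyphoria.safetensors", torch_dtype=torch.float16
).to("cuda")

prompt = "1girl, ..."  # same conditioning feeds both stages
steps = 30             # total steps, matched in both samplers
split = 0.6            # the slider: base does 60%, refiner finishes the last 40%

# first sampler stops partway and hands off raw latents (no VAE decode)
latents = base(
    prompt=prompt,
    num_inference_steps=steps,
    denoising_end=split,
    output_type="latent",
).images

# second sampler picks up at the same point and finishes the image
image = refiner(
    prompt=prompt,
    num_inference_steps=steps,
    denoising_start=split,
    image=latents,
).images[0]
image.save("refined.png")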
Alternatively, if you just want 100% realism:
>fully run the first half of the workflow and vae decode to image
>feed that finished image into vae encode in the second workflow
The result will take advantage of the first model's character embeds and design choices, but be 100% what the second model is built for.
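Same idea scripted with diffusers, for reference - here the handoff is a finished image instead of a latent, and strength plays the role of ComfyUI's denoise knob (filenames and the prompt are placeholders):

import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

refiner = StableDiffusionXLImg2ImgPipeline.from_single_file(
    "hyphoria.safetensors", torch_dtype=torch.float16
).to("cuda")

finished = load_image("illustrious_gen.png")  # fully decoded first-stage image

image = refiner(
    prompt="1girl, ...",   # restate the character details here
    image=finished,        # VAE-encoded back to a latent internally
    strength=0.5,          # lower = closer to the source composition
    num_inference_steps=30,
).images[0]
image.save("realized.png")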
The second ksampler is pretty good about not ruining character details from the first, but it helps a lot if you add those details to the prompt so they're in the conditioning. Piping an actual character lora to both sides works well, too.
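In diffusers terms, piping the lora to both sides is just loading the same weights into both pipelines (assumes the base/refiner pipelines from the first snippet; the lora filename is a placeholder):

# load_lora_weights patches each pipeline's unet/text encoders,
# so both samplers see the same character
base.load_lora_weights("my_character_lora.safetensors")
refiner.load_lora_weights("my_character_lora.safetensors")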
You can encode any finished image into SD as a latent and it will reinterpret it with the specified model. Sometimes useful to shoop gens together and then let SD sort it out (say, multiple characters genned separately and pasted onto a background).
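Rough sketch of that shoop-then-reinterpret trick with PIL plus the img2img pipeline from the snippet above (paths and paste coordinates are made up):

from PIL import Image

# paste separately genned characters onto a background
canvas = Image.open("background.png").convert("RGB")
canvas.paste(Image.open("char_a.png"), (100, 200))
canvas.paste(Image.open("char_b.png"), (600, 180))

# let the model blend the seams without redrawing the layout
blended = refiner(
    prompt="2girls, ...",  # describe the whole composited scene
    image=canvas,
    strength=0.4,          # low enough to keep your paste positions
    num_inference_steps=30,
).images[0]
blended.save("composite.png")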