Is there a way to feed it data so that the AI combines it? Say I want it to generate a character. Would it be possible to feed it pictures of models/actors and have it generate a new image based on those? I know how the people look in my head, but the only way I could describe them is with pictures of people who look like them, as I can't draw for shit.
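What's being described here is essentially IP-Adapter: it conditions the diffusion model on reference images instead of (or alongside) a text prompt, so you can steer a new character toward the look of people in your photos. A minimal sketch with the diffusers library, assuming an SD 1.5 checkpoint and a placeholder reference file:

```python
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

pipe = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# IP-Adapter weights let a reference image steer generation
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
)
pipe.set_ip_adapter_scale(0.6)  # 0 = ignore the reference, 1 = follow it closely

ref = load_image("reference_actor.jpg")  # placeholder: a photo of the look-alike

image = pipe(
    prompt="portrait of an original fantasy character, photorealistic",
    negative_prompt="lowres, bad anatomy",
    ip_adapter_image=ref,
    num_inference_steps=30,
).images[0]
image.save("character.png")
```

In ComfyUI the equivalent is an IPAdapter node fed by a Load Image node; the ip-adapter-plus-face variants are tuned specifically for faces, and blending several references is typically done by batching the images into the adapter.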
I get the selling points of ComfyUI: flexibility, control, experimentation, and so on. But honestly, the output images are barely usable; they almost always look "AI-generated." Sure, I can run them through customized generative upscalers, but it's still not enough. And yes, I know about ControlNet, LoRA, pixel-level inpainting/outpainting, prompt automation, etc., but the overall image quality and realism still just aren't top notch.
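For context, the "generative upscaler" pass mentioned above is usually a hires-fix style step: enlarge the image, then run img2img at low denoise so the model re-synthesizes fine texture without disturbing the composition. A rough diffusers sketch, with the file names and settings as assumptions:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

base = Image.open("render.png")  # placeholder: the ComfyUI output
# Plain Lanczos resize stands in for an ESRGAN-style model upscale
big = base.resize((base.width * 2, base.height * 2), Image.LANCZOS)

# Low strength keeps the composition; the model only re-synthesizes detail
refined = pipe(
    prompt="photorealistic, sharp detail, natural skin texture",
    image=big,
    strength=0.25,
    num_inference_steps=30,
).images[0]
refined.save("render_refined.png")
```

The strength value is the main lever here: too low and nothing sharpens, too high and the pass starts repainting the image, which reintroduces the composition-drift problem.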
Is using IPAdapter + FaceDetailer old hat? In this case I have issues with it following the original composition: even with low denoise it seems to modify the subject a lot. I can mitigate this by copying portions of the prompt into the detailer, which helps with the bird, but then if I try to upscale the image it essentially reverts the FaceDetailer. My guess is this is because the upscale pass is re-applying my original prompt and ControlNet?
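One common workaround is to reorder the graph so FaceDetailer runs after the upscale, since an img2img-style upscale re-denoises the whole frame with the base prompt and ControlNet and so overwrites the detailer's work. If reordering isn't possible, the detailed region can be composited back over the upscaled frame. A rough sketch of that compositing step in plain PIL, with every file name and the bounding box as assumptions:

```python
from PIL import Image, ImageDraw, ImageFilter

SCALE = 2  # factor used by the upscale pass

upscaled = Image.open("upscaled.png")      # placeholder: frame after upscaling
detailed = Image.open("detailer_out.png")  # placeholder: FaceDetailer result, base resolution
box = (412, 96, 612, 296)                  # placeholder: detail bbox at base resolution

# Cut out the fixed region and scale it to match the upscaled frame.
# (A plain resize stands in for running the crop through the upscaler itself.)
crop = detailed.crop(box)
left, top, right, bottom = (v * SCALE for v in box)
crop = crop.resize((right - left, bottom - top), Image.LANCZOS)

# Feathered mask so the paste blends in instead of leaving a hard seam
margin = 12
mask = Image.new("L", crop.size, 0)
ImageDraw.Draw(mask).rectangle(
    (margin, margin, crop.width - margin, crop.height - margin), fill=255
)
mask = mask.filter(ImageFilter.GaussianBlur(margin // 2))

upscaled.paste(crop, (left, top), mask)
upscaled.save("final.png")
```

In practice the bbox would come from the same detector the FaceDetailer node used, rather than being hard-coded.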