Search Results

Found 3 results for "d29a535261a3d1b1353f040fd49a695c" (MD5 search) across all boards.

Anonymous /b/937240959#937243629
7/18/2025, 1:08:48 AM
Is there a way to feed it data so that the AI combines it? Say I want it to generate a character: would it be possible to feed it pictures of models/actors and have it generate a new image based on those? I know how the people look in my head, but the only way I could describe them is by using pictures of people who look like them, as I can't draw for shit
Anonymous /b/937191965#937202069
7/17/2025, 1:47:43 AM
I get the selling points of ComfyUI: flexibility, control, experimentation, etc. But honestly, the output images are barely usable. They almost always look "AI-generated." Sure, I can run them through customized smart generative upscalers, but it's still not enough. And yes, I know about ControlNet, LoRA, inpainting/outpainting at the pixel level, prompt automation, etc., but the overall image quality and realism still just isn't top notch.
Anonymous /b/936966989#936967734
7/11/2025, 9:58:00 PM
Is using IPAdapter + FaceDetailer old hat? In this case I have issues with it following the original composition: even with low denoise it seems to modify the subject a lot. I can mitigate this by copying portions of the prompt into the detailer, but then if I try to upscale the image it essentially reverts the FaceDetailer. My guess is this is because the upscale is trying to apply my original prompt and ControlNet?
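For reference, a low-denoise IPAdapter pass like the one described above can be sketched as a ComfyUI API-format workflow fragment. This is an untested sketch, not a known-good graph: the `IPAdapterUnifiedLoader`/`IPAdapter` class names and the `"PLUS FACE (portraits)"` preset come from the ComfyUI_IPAdapter_plus extension, and all filenames, prompts, and parameter values here are placeholder assumptions.

```json
{
  "1": { "class_type": "CheckpointLoaderSimple",
         "inputs": { "ckpt_name": "sd15.safetensors" } },
  "2": { "class_type": "LoadImage",
         "inputs": { "image": "composition.png" } },
  "3": { "class_type": "IPAdapterUnifiedLoader",
         "inputs": { "model": ["1", 0], "preset": "PLUS FACE (portraits)" } },
  "4": { "class_type": "IPAdapter",
         "inputs": { "model": ["3", 0], "ipadapter": ["3", 1],
                     "image": ["2", 0], "weight": 0.7,
                     "start_at": 0.0, "end_at": 1.0 } },
  "5": { "class_type": "CLIPTextEncode",
         "inputs": { "clip": ["1", 1], "text": "portrait, detailed face" } },
  "6": { "class_type": "CLIPTextEncode",
         "inputs": { "clip": ["1", 1], "text": "blurry, deformed" } },
  "7": { "class_type": "VAEEncode",
         "inputs": { "pixels": ["2", 0], "vae": ["1", 2] } },
  "8": { "class_type": "KSampler",
         "inputs": { "model": ["4", 0], "positive": ["5", 0],
                     "negative": ["6", 0], "latent_image": ["7", 0],
                     "seed": 0, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 0.4 } },
  "9": { "class_type": "VAEDecode",
         "inputs": { "samples": ["8", 0], "vae": ["1", 2] } },
  "10": { "class_type": "SaveImage",
          "inputs": { "images": ["9", 0], "filename_prefix": "detail_pass" } }
}
```

The low `denoise` on the KSampler is what preserves the original composition; lowering the IPAdapter `weight` or stopping it early (`end_at` below 1.0) are the usual knobs when the subject still drifts. As the post suggests, an upscale pass run afterwards with the original prompt and ControlNet will re-impose them over the detail pass, so either upscale before the FaceDetailer or feed the detailer's prompt into the upscale stage as well.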