Search Results
7/12/2025, 2:22:15 AM
>>936978061
Now I'm going to go over my Auto-Resize group. This uses a few nodes from the ComfyMath extension, so you might need to enable that in the custom nodes manager. When the image is first loaded in, I pass it to a Get Image Size node, which reads the original resolution, and to NearestSDXLResolution, which finds the closest resolution the stable diffusion model can actually generate while maintaining the original aspect ratio. The image then goes to ResizeAndPadImage along with the output from NearestSDXLResolution, which turns the original image into something the AI can actually handle. If you skip all that and try to edit a large image, the model gets confused once it goes beyond those boundaries and winds up repeating the prompt across multiple areas. If you've gotten results that wouldn't look out of place in a Cronenberg film, that's probably why. Of course we don't want the image to stay at that smaller resolution, which is where the upscaler comes in after the processor finishes but before the image saves. I personally find that the bicubic setting works best in general, but feel free to try the others.
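For anyone who wants to see the idea outside of the node graph, here's a rough Python sketch of what the group does conceptually: pick the SDXL-friendly resolution closest to the source aspect ratio, scale and pad the image to fit it, then bicubic-resize back up at the end. The bucket list and the Pillow-based helpers are my own illustration, not the actual ComfyMath/ComfyUI node code.

```python
# Illustrative sketch only (assumed logic, not the real node implementations):
# mirror Get Image Size -> NearestSDXLResolution -> ResizeAndPadImage,
# then upscale back toward the original size with bicubic filtering.
from PIL import Image

# Common SDXL training resolutions; the actual node may use a different set.
SDXL_RESOLUTIONS = [
    (1024, 1024), (1152, 896), (896, 1152), (1216, 832), (832, 1216),
    (1344, 768), (768, 1344), (1536, 640), (640, 1536),
]

def nearest_sdxl_resolution(width, height):
    """Pick the bucket whose aspect ratio is closest to the source image's."""
    aspect = width / height
    return min(SDXL_RESOLUTIONS, key=lambda wh: abs(wh[0] / wh[1] - aspect))

def resize_and_pad(img, target_w, target_h):
    """Scale the image to fit inside the target, then pad the leftover area with black."""
    scale = min(target_w / img.width, target_h / img.height)
    new_w, new_h = round(img.width * scale), round(img.height * scale)
    resized = img.resize((new_w, new_h), Image.Resampling.BICUBIC)
    canvas = Image.new("RGB", (target_w, target_h), (0, 0, 0))
    canvas.paste(resized, ((target_w - new_w) // 2, (target_h - new_h) // 2))
    return canvas

def auto_resize(path):
    img = Image.open(path).convert("RGB")
    orig_size = img.size                            # "Get Image Size" step
    tw, th = nearest_sdxl_resolution(*orig_size)    # "NearestSDXLResolution" step
    prepared = resize_and_pad(img, tw, th)          # "ResizeAndPadImage" step
    # ... the editing/diffusion pass would run on `prepared` here ...
    # then scale back toward the original resolution before saving:
    return prepared.resize(orig_size, Image.Resampling.BICUBIC)
```

The point of padding instead of cropping is that the whole composition survives the trip down to a model-friendly size; the final bicubic resize just brings the result back up to the resolution you started with.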