
Thread 19987607

302 posts 136 images /r/
1st.AId.bag !!GFB1W2jC9WS No.19987607 [Report] >>19988186
ComfyUI portable for newbies (vol.3)
::::Apprentice pack (v0.3.60):
::::::::https://mega.nz/file/3h4y2KgC#ZQsoH7JHyzb5StA64kiUTVByMkhkc5Byro24DP5M548

Updated workflows since initial pack release:

::::WAN 2.2 v1.12 workflow (blockswap added):
::::::::https://mega.nz/file/fsZVAapK#T1g8SUlgELHpSitWr6iVqfLXTZpjIadmsX32ExL8aHo

::::WAN 2.2 v1.13 workflow (MOE/Auto-split sigmas sampler):
::::::::https://mega.nz/file/zoQWGJ5B#zcQojnO1IDUOB2xWSvn0fdf_MdAtJce91yaohMbhX2w

::::WAN 2.2 v1.14 workflow (StartFrame/Endframe option, NOT VACE):
::::::::https://mega.nz/file/GsgD0Krb#400s829w2xrjK5KpjI0w4mYTFDscc57nAvLJf_nDwKE

::::WAN 2.1 v1.13 workflow (based on 2.2 one but single sampler):
::::::::https://mega.nz/file/ilhmARTL#ZvV93hRQA7XojDhpcTBSkC70BClHI2ARd8auQTe3GNE


Old thread(s)
vol1 https://archived.moe/r/thread/19910536/
vol2 https://archived.moe/r/thread/19938024/
1st.AId.bag !!GFB1W2jC9WS No.19987608 [Report] >>19994729
>What is it?
Modified version of comfy portable
Aimed at new apprentices who wanna jump into the AI world. In the package:
----Kijai's WAN-Wrapper nodes pre-installed
----Pre-installed nodes needed for my wan2.2/wan2.1 workflows (wf included)
----Pre-installed nodes needed for pisswiz's wan2.2 workflow (wf included)
----ReActor preinstalled, NSFW patched (note that updating the node might fuck it up)
----Sage & Triton installation
----Simple batch file by me that downloads all needed files (vae/clip/lightx-lora/ggufs) to get you started
----Incl. program called LosslessCut to join your clips together
----Couple upscale models included

>How
----Download the package
----Extract it to some fast drive that has a good amount of free space (100gb at least)
----Run the run_nvidia_gpu and see if you get your comfy running
----....if so, good, close it up. It won't do anything as you don't have models yet
----Install VC_redist.x64.exe and reboot (most likely you have this installed already, needed for triton)
----Run 1stAIdBag.bat
----....Download options 1&2 and pick one of the GGUF models (Q8, Q5 or Q2)
----....Downloading these models takes time as they are several gigs each
----....Install sage and triton (also updates flash attention)
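If you want to check the speed-up stack before launching, a quick import probe works; this is a generic sketch (not part of the pack's bat files), and the package names are the usual pip ones:

```python
import importlib

def check_backends(names=("torch", "triton", "sageattention", "flash_attn")):
    """Report which optional speed-up packages import cleanly.
    A missing 'sageattention' here matches the 'sage related error'
    troubleshooting case described in the thread."""
    status = {}
    for name in names:
        try:
            importlib.import_module(name)
            status[name] = "ok"
        except Exception as exc:
            # record the failure type so you can tell a missing package
            # from a broken install (ImportError vs anything else)
            status[name] = f"missing ({type(exc).__name__})"
    return status
```

Run it with the portable's own python (`python_embeded\python.exe`) so you probe the environment comfy actually uses.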

>Already got old install?
----Backup (cut) models folder somewhere (pref same drive's root where your comfy is)
----del ComfyUI_windows_portable folder
----read how section above
----paste models folders into your fresh comfyui portable
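The backup-and-paste dance above can also be scripted; a minimal sketch, where all three paths are hypothetical placeholders for wherever your installs actually live:

```python
import shutil
from pathlib import Path

# Hypothetical paths -- adjust to your own drive layout.
OLD = Path(r"D:\ComfyUI_windows_portable\ComfyUI\models")
BACKUP = Path(r"D:\models_backup")  # must not already exist
NEW = Path(r"D:\fresh\ComfyUI_windows_portable\ComfyUI\models")

def migrate(old=OLD, backup=BACKUP, new=NEW):
    """Cut the models folder aside, then merge it into the fresh
    install; dirs_exist_ok keeps files the new pack already shipped."""
    shutil.move(str(old), str(backup))
    shutil.copytree(backup, new, dirs_exist_ok=True)
    return new
```

Same effect as cut/paste in Explorer, just repeatable next time you reinstall.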

Once everything is downloaded, launch ComfyUI using run_nvidia_gpu_sageattention.bat and RESELECT ALL MODELS to make sure
they are all actually on your computer. Load an image, type something in the prompt, run
1st.AId.bag !!GFB1W2jC9WS No.19987611 [Report] >>19994636 >>19994729
If getting path/string (NoneType) errors ---> you didn't reselect the models
If sage related error = you didn't install it correctly, or at all --> install sage, or disable the nodes
If out of memory / allocation to device --> lower output resolution, shorten vid length, lower steps, enable blockswap

If/when you need help....
A) take a screenshot of your whole workflow (and log console window if possible)
B) state which workflow you are using
C) list the specs of your rig

>Where to get loras?
https://civitai.com/search/models?baseModel=Wan%20Video&baseModel=Wan%20Video%202.2%20I2V-A14B&baseModel=Wan%20Video%202.2%20T2V-A14B&baseModel=Wan%20Video%2014B%20t2v&modelType=LORA&sortBy=models_v9%3AcreatedAt%3Adesc
(you need to make an account and enable showing adult content)
https://civarchive.com/search?type=LORA&base_model=Wan+Video+14B+i2v+480p&is_nsfw=true&sort=newest&page=2
(old 2.1 loras that got removed from civitai due to their nsfw bullshit policy)
Anonymous No.19987748 [Report] >>19987754 >>19987943
is there a way to add a reference image so the face stays consistent? I'm chaining together clips using the last frame, but after 3 or 4 clips the subject sometimes looks completely different
AiWeaver !g2tzMwrI3g No.19987754 [Report] >>19996022
>>19987748
I personally would use an external tool like FaceFusion
1st.AId.bag !!GFB1W2jC9WS No.19987943 [Report]
>>19987748
i wouldn't try more than 3 clips....and try to make sure that in the last frame the subject's face is clear and sharp, eyes open, facing the camera.
Anonymous No.19987951 [Report]
Hi bag. I haven't been here since the 2nd thread since I've figured it out. We're all still learning with each release though so I like to drop in and troubleshoot.
Just wanted to say thanks. Carry on.
Anonymous No.19987960 [Report] >>19988022
Is there anything i can use that's decent with an AMD Ryzen 5 4500U? I am trying to figure it out lol. CPU is 2.38 Ghz, 8GB RAM. Probably not, but i figure i'd ask. Thanks
1st.AId.bag !!GFB1W2jC9WS No.19988022 [Report] >>19988031
>>19987960
Well, the cpu plays a pretty small part in AI....speed comes from the GPU; more ram allows faster loading of bigger models

I'd say minimum is 8gb rtx card, 16gb ram.... But on those specs you would have to use pretty low quality models, not really worth the time.

I have 4070super (12gb) and 64gb ram... It's not superfast but does the job. RAM is rather cheap. I got 2 x 32gb ddr5 under 200bucks
Anonymous No.19988031 [Report]
>>19988022
Thank you sir, i appreciate the information. I gotta upgrade. Some of these request vids are insanely real.
Anonymous No.19988171 [Report] >>19988172 >>19989323
Comfy taking longer than usual for others after the last update? I swear it's taking 5 minutes longer out of nowhere. Maybe I'm just a faggot.
Anonymous No.19988172 [Report]
>>19988171
Patience is a virtue.. I'm probably a fag as well
Anonymous No.19988186 [Report] >>19988203
>>19987607 (OP)
Has anyone recently gotten an AMD GPU to work?

Could you talk about how it went incorporating it into this modified apprentice pack?
1st.AId.bag !!GFB1W2jC9WS No.19988203 [Report] >>19988532 >>19995956
>>19988186
there was one anon who installed zluda comfyui in the last thread. Dunno about the speeds on that....models, nodes, etc are the same on amd and nvidia, so there is no difference....(the apprentice pack just comes with nodes/dependencies preinstalled), nothing you couldn't do manually on your zluda install

basically once zluda comfyui is installed, the first thing is to install comfy manager, then open the workflow and install the missing nodes via the manager. I dunno if sage works on amd cards. Maybe someone wiser can help you with that
Anonymous No.19988279 [Report] >>19988282 >>19988349 >>19988641
I got "Power Lora Loader (rgthree)
Error while deserializing header: header too small" repeatedly when running the workflow with the sample image and sample prompt. I'm using 1stAIdbag_WAN2.2_(v1.4) with a 3070.
Is there anything I'm doing wrong?
Anonymous No.19988282 [Report] >>19988561
>>19988279
that's the wrong screenshot
Anonymous No.19988349 [Report]
>>19988279
Same for me. Had to download the lora manually on huggingface
Anonymous No.19988532 [Report]
>>19988203
yes im slowly getting this working..
having problems installing TRITON right now
Anonymous No.19988542 [Report] >>19988620
Anyone have a good nudify website for a simpleton?
1st.AId.bag !!GFB1W2jC9WS No.19988561 [Report] >>19988656
>>19988282
>Error while deserializing header: header too small"

did you try downloading via my bat?....if so there might be something wrong with it.

try downloading the loras manually into the /models/loras/ folder

https://civitai.com/api/download/models/2090458?type=Model&format=SafeTensor

https://civitai.com/api/download/models/2090481?type=Model&format=SafeTensor
Anonymous No.19988620 [Report] >>19988637 >>19988686
>>19988542
I always wonder what the low end for simple image generation is.
I was getting sub 1 minute times for SD1.5 on a 6gb GPU, though SDXL was problematic.
If you forget video gens, and go pure single image, say 1024x1024, what are the low end requirements?
Anonymous No.19988631 [Report]
almost glad the update bonked my workflow, I was getting kinda addicted lmao, made like every girl I know that I thought was vaguely attractive suck a dick lmao
Anonymous No.19988637 [Report]
>>19988620
I'm more so referring to websites where you can upload images and have it done for you?
Anonymous No.19988641 [Report] >>19988656
>>19988279
i am also getting this error, does anyone have a fix?
Anonymous No.19988656 [Report] >>19988737
>>19988561
>>19988641
Anonymous No.19988686 [Report] >>19988691 >>19990481
>>19988620
I've got an 8GB GPU and 32GB ram with an i5.
I can do image stuff no problem.

Currently using qwen-image-edit-2509 with ggufs (Q5 works fine) and it's doing wonders.

there's a workflow in comfyui for it already. Just swap the unet loader in and grab the gguf you need
Anonymous No.19988691 [Report] >>19988715
>>19988686
1.5 to 2 minute generation times I presume?
Anonymous No.19988715 [Report]
>>19988691
seems to greatly depend on the image, the edits, and how many images I'm feeding the workflow. Anywhere from 1-3 mins I'd say
Anonymous No.19988737 [Report]
>>19988656
download the file from here

https://huggingface.co/Kijai/WanVideo_comfy/blob/d4c3006fda29c47a51d07b7ea77495642cf9359f/Wan22-Lightning/Wan2.2-Lightning_I2V-A14B-4steps-lora_LOW_fp16.safetensors

https://huggingface.co/Aitrepreneur/FLX/blob/main/Wan2.2-Lightning_I2V-A14B-4steps-lora_HIGH_fp16.safetensors
and place in

ComfyUI_Apprentice_portable\ComfyUI_Windows_portable\ComfyUI\models\loras
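If manual downloads keep producing "header too small" files, a few lines of Python fetch a lora by hand; a hedged sketch where LORA_DIR is a placeholder, and note that huggingface /blob/ page links must become /resolve/ to get the raw file:

```python
import urllib.request
from pathlib import Path

# Hypothetical default -- point this at your install's loras folder.
LORA_DIR = Path("ComfyUI/models/loras")

def fetch_lora(url, dest_dir=LORA_DIR, filename=None):
    """Download one lora into the loras folder. Pass filename= for
    API-style URLs that don't end in a real file name."""
    # huggingface /blob/ URLs serve an HTML page; /resolve/ serves the file
    url = url.replace("/blob/", "/resolve/")
    dest_dir.mkdir(parents=True, exist_ok=True)
    name = filename or url.rsplit("/", 1)[-1].split("?")[0]
    path = dest_dir / name
    urllib.request.urlretrieve(url, path)
    # a valid .safetensors starts with an 8-byte header length; a tiny
    # file here is the "header too small" failure mode from this thread
    if path.stat().st_size < 8:
        raise ValueError(f"{name}: truncated download, delete and retry")
    return path
```

If the check trips, delete the file and retry; a half-downloaded safetensors is exactly what rgthree's loader chokes on.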
Anonymous No.19989002 [Report] >>19989286
I'm just starting out. Can anyone list out their negative prompts? Just looking for a baseline to go by. Thanks.
Anonymous No.19989220 [Report] >>19989241 >>19989252 >>19989286
videos are coming out blurry as fuck and it literally looks like the person turns into an alien. Anything you notice bag?
1st.AId.bag !!GFB1W2jC9WS No.19989241 [Report] >>19989251 >>19989252
>>19989220
turn off teacache......not meant to be used with the lightx lora

also: you need to load that deepthroat lora into the low noise lora loader as well (if it is a wan2.1 lora). If it is a wan2.2 lora....high lora part in the high loader, low lora part in the low loader
1st.AId.bag !!GFB1W2jC9WS No.19989251 [Report] >>19992420
>>19989241
so it looks like....
Anonymous No.19989252 [Report] >>19989263
>>19989241
>>19989220
It's the 2.1 lora, it needs to be replaced in both high and low with the 2.2 version
1st.AId.bag !!GFB1W2jC9WS No.19989263 [Report]
>>19989252
you can use wan2.1 loras but then you need to load the same lora in both loaders, high and low. But there are good oral loras for wan2.2, i suggest you prefer those

example
https://civitai.com/models/1874811?modelVersionId=2122049
https://civitai.com/models/1874153/oral-insertion-wan-22
Anonymous No.19989286 [Report] >>19989297
>>19989002
>>19989220

bag is it necessary to have a bunch of negative prompts to smooth out the final product? I'm just not sure what to put in unless it's something specific I don't want in the video
1st.AId.bag !!GFB1W2jC9WS No.19989297 [Report]
>>19989286
no.....i keep negatives kinda minimal. "speaking, talking"....positives and loras weight more.

For example, in the oral insertion lora there is loads of camera action, zooms, pans.....you really can't get rid of them via negative prompts
1st.AId.bag !!GFB1W2jC9WS No.19989323 [Report]
>>19988171
on my workflow....i've set the high noise lightx to 0.85 and the high sampler cfg to 1.5, this helps a bit with the movement but also makes the high noise sampling a bit slower.....you can set them back to 1.0 values for a faster 1st pass
Anonymous No.19989870 [Report]
up
Anonymous No.19989931 [Report] >>19989949 >>19989965 >>20001223 >>20001223
On an AMD GPU, which nodes can replace the sageattention one? I had to disable this node because it seems like triton can only work on NVIDIA... I don't understand everything in this part...

Actually i made this work and it's generating very good quality content, but it's taking 6 minutes to generate 5 seconds.

Config: AMD 7800x3d + Radeon 7900xtx + 64gb ddr5
1st.AId.bag !!GFB1W2jC9WS No.19989949 [Report] >>19989953
>>19989931
That's not a bad speed..... On the sage node, rather than setting it to auto.... select the Triton option and give it a try.
1st.AId.bag !!GFB1W2jC9WS No.19989953 [Report]
>>19989949
*Not Triton but the other options. Can't remember if there is a rocm option in it
1st.AId.bag !!GFB1W2jC9WS No.19989965 [Report] >>19990263
>>19989931
Reading some github pages....some suggest installing sage attention 1 with an amd gpu

>pip install sageattention
Anonymous No.19990263 [Report] >>20001223
>>19989965
Thx man, ill try this tonight.
Would like to thank you for making this, im starting to enjoy making some AI content!
Anonymous No.19990481 [Report]
>>19988686

what kind of content are you making.
Anonymous No.19990572 [Report] >>19991168
Does anyone have the issue where slider nodes show up as blank in the workflows? I can edit them via the properties panel, but the actual slider and content of the node does not appear on the node itself
1st.AId.bag !!GFB1W2jC9WS No.19991168 [Report] >>19991946
>>19990572
>slider nodes show up as blank in the workflows

i dunno if you have the custom node called mixlab installed.....but it has caused some issues in the past (and seems like it's not fixed to this date)

https://github.com/Smirnov75/ComfyUI-mxToolkit/issues/28#issuecomment-2603091317
Anonymous No.19991201 [Report] >>19991214
Generally speaking, do wan 2.1 loras work with Wan 2.2? Curious about trying to mix and add a few together. Thank you kind sirs
1st.AId.bag !!GFB1W2jC9WS No.19991214 [Report] >>19991306
>>19991201
generally short answer; yes, they do.....you need to load 2.1 lora in both lora-loaders (high & low).

Some loras work better than others and you might want to use them at a bit higher weights than 2.2 loras. It's trial and error
Anonymous No.19991306 [Report]
>>19991214
Sweet, thanks man. I've noticed NSFW Wan 2.2 loras are still being made here and there; is it recommended to try and train your own loras, or does it take a lot of effort to tweak the results to what you want?
Anonymous No.19991778 [Report] >>19992411 >>19992520
>>19991079
hey bag how did this anon get sound into their gen?
Anonymous No.19991946 [Report]
>>19991168
Dude, you are awesome. That fixed it. I searched all around google and couldn't get this solution to pop up in the results. If there is ever a way to support your efforts, let me know!
1st.AId.bag !!GFB1W2jC9WS No.19992411 [Report] >>19992520
>>19991778
....how about, ask him?

could be wan animate or s2v model
Anonymous No.19992420 [Report] >>19992428
>>19989251
How did you come up with those weight numbers?
Also sometimes my gens makes their iris shine or flicker or eye wink. Will changing the weights fix that? Or increasing the steps?
1st.AId.bag !!GFB1W2jC9WS No.19992428 [Report] >>19992505
>>19992420
>How did you come up with those weight numbers?
no particular reason, i just lower them by mouse-slide, those values just happened to be on when the ss was taken

>Also sometimes my gens makes their iris shine or flicker or eye wink. Will changing the weights fix that? Or increasing the steps?

example? and what loras in use, also what sampler
Anonymous No.19992505 [Report] >>19992510 >>19992517 >>19994158
>>19992428
just the lightning lora and the pretzel pose lora set at 1.0 for this example, but the eye shining effect sometimes comes up with other loras and prompts too. I didn't change anything from the default settings from your v1.3 workflow, I think it's euler
Anonymous No.19992510 [Report]
>>19992505
One way to mitigate the eye flicker is increase the resolution, zooming out can impact fine details.
1st.AId.bag !!GFB1W2jC9WS No.19992517 [Report]
>>19992505
to my eyes...that seems like a pretty minor defect.

But yeah, you could try to increase the steps and output resolution a little bit. On zoom outs the face (and eyes) gets smaller and smaller == less detail for the ai to sample from
Anonymous No.19992520 [Report] >>19992532 >>19992539 >>19992571
>>19991778
>>19992411
he didn't. He uses one of those credit websites. And those just plaster a random sound layer on the video.
Caнeк No.19992531 [Report]
Guys, here's the thing, I got scammed on OLX. It was only pennies but it stung, is there any way to look up a person's phone number knowing his bank card number and name?
Anonymous No.19992532 [Report]
>>19992520
Curious, how did you come to this conclusion?
Anonymous No.19992539 [Report] >>19992549 >>19993913 >>19993925 >>19994651 >>19997492
>>19992520
Spot on, mate.

I have unlimited credits at this website: http://127.0.0.1:8188.

For the sound, I just pick a random audio file and cross my fingers hoping the audio and lips sync.
1st.AId.bag !!GFB1W2jC9WS No.19992549 [Report] >>19992571
>>19992539
are you using gguf model of s2v?
Anonymous No.19992571 [Report] >>19992574
>>19992549
wan2.2-animate

>>19992520
r/confidentlyincorrect
1st.AId.bag !!GFB1W2jC9WS No.19992574 [Report] >>19992590
>>19992571
so you use input image + reference video (with audio)
Anonymous No.19992590 [Report] >>19992660 >>19992896
>>19992574
Yes, it's conceptually VACE + audio
Anonymous No.19992603 [Report] >>19992608
Question please how can I go about having a girl stripped then put her legs behind her head?
Anonymous No.19992608 [Report] >>19992615 >>19995775
>>19992603
pretzel lora on civit, it's perhaps the easiest to use and most consistent lora I've come across, so you can expect to see a lot of it around here
Anonymous No.19992615 [Report] >>19992632
>>19992608
Can't seem to find it, do u know how i can?
Anonymous No.19992632 [Report]
>>19992615
Make sure you are logged in and enabled nsfw loras
1st.AId.bag !!GFB1W2jC9WS No.19992660 [Report] >>19992729
>>19992590
have to say it keeps pretty good face consistency, even on your pov bjs
1st.AId.bag !!GFB1W2jC9WS No.19992681 [Report] >>19992729
s2v model is funny too =)
Anonymous No.19992729 [Report]
>>19992660
>>19992681

Agreed. S2V looks like it works well enough. I had been impatiently waiting for VACE 2.2, the audio is just a bonus and works far better than I expected.
Anonymous No.19992853 [Report] >>19993399
So if I want to make a text to video how do I add the image of the person I want to make the video of?
Anonymous No.19992896 [Report]
>>19992590
>VACE + audio
Well, that's out of my reach, that kind of rig costs real money.
Anonymous No.19992942 [Report]
Question can you do nude on comfyui online. Or is there a way to do comfyui image to video offline?
Anonymous No.19993399 [Report]
>>19992853
you don't. that's why it's called text to video, not image to video
Anonymous No.19993913 [Report] >>19993925 >>19994546
>>19992539

what is ur setup??
Anonymous No.19993925 [Report] >>19994546
>>19992539
>>19993913
Yeah I'd also be interested in a workflow
Anonymous No.19993943 [Report] >>19994266
How can you get templates and workflows that don't need to connect online? Are there free ones we can find?
Anonymous No.19994059 [Report] >>19994168 >>19994271 >>19994287
How do we lengthen the clip so whatever we write into the prompt that it will fully play out
Anonymous No.19994158 [Report]
>>19992505
What did you write out as your prompts?
Anonymous No.19994168 [Report] >>19994271
>>19994059
impossible right now due to the power needed. You will run into oom or the video will be so deranged you can't recognize the person anymore. The only way is to wait for future generations of models
Anonymous No.19994266 [Report]
>>19993943
If I understand you correctly, you're wondering where to get workflows and how people share them.

Every time you gen in Comfy, the workflow is embedded in the image, BUT 4chan strips metadata. That is why some people share on catbox, which preserves metadata. Download the image, drop it into comfy, the workflow appears.

The other way is sharing .json files which can be opened in Comfy.

Generally, googling "TYPE workflow Comfy" will help find examples (image2video workflow, inpainting workflow).

Sites like reddit and civitai also share workflows.

Specific huggingface and github repos host workflows as well. For example, the IPAdapter github hosts the node and has sample workflows.
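Since the embedded workflow is just JSON in the PNG's text chunks, you can check whether an image still carries one. A stdlib-only sketch (ComfyUI writes 'workflow' and 'prompt' keys; an image with stripped metadata returns None):

```python
import json
import struct

def extract_workflow(png_path):
    """Walk the PNG chunk stream and return the parsed JSON stored in a
    tEXt chunk named 'workflow' or 'prompt' (where ComfyUI embeds the
    graph), or None if the metadata was stripped."""
    with open(png_path, "rb") as f:
        if f.read(8) != b"\x89PNG\r\n\x1a\n":
            return None  # not a PNG at all
        while True:
            header = f.read(8)
            if len(header) < 8:
                return None  # end of file, no metadata found
            length, ctype = struct.unpack(">I4s", header)
            data = f.read(length)
            f.read(4)  # skip the chunk CRC
            if ctype == b"tEXt":
                key, _, value = data.partition(b"\x00")
                if key in (b"workflow", b"prompt"):
                    return json.loads(value)
            if ctype == b"IEND":
                return None
```

Handy for checking whether a catbox download actually kept its workflow before you bother dragging it into Comfy.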
Anonymous No.19994271 [Report]
>>19994059
>>19994168
Presently, people generate 5-7 second chunks and link them together (hence why FAid's workflow gens a final frame in addition to the video, the final frame being your source image for part 2).
But still, each chunk will have greater facial degradation. It could possibly be reduced by running the final frame through face fusion or face reactor, but each gen will make it less and less faithful to the source.
1st.AId.bag !!GFB1W2jC9WS No.19994287 [Report] >>19994324
>>19994059
You need to type a prompt, not a novel. Think of it this way: what can happen in 5-6 sec?

if you want to make a longer vid, segment it into parts. IE:
man walks into the frame next to the girl = clip one, save last frame of it

girl kneels down in front of the man, man pulls his penis out from his crotch area = clip two, save last frame of it

girl sucks the penis = third clip

generally wan tries to do everything you prompt, but if there is too much action in so little time, movement might look a bit unnatural and things happen before they should and/or simultaneously
Anonymous No.19994324 [Report]
>>19994287
Thank you I have been trying around with it today and somewhat figuring it out. Seriously thank you for helping
Anonymous No.19994363 [Report]
What loras do you guys recommend? for blowjob, cumshots, sex, doggystyle etc.

And does anyone have a good prompt list they use for good results?
Anonymous No.19994546 [Report] >>19994651
>>19993913
>>19993925
local comfy, 4090, 32GB RAM, 5800X3D
wan2.2-animate, workflow is in templates of updated comfy
Anonymous No.19994622 [Report] >>19994636
getting this constantly

SamplerCustom PassManager::run failed
1st.AId.bag !!GFB1W2jC9WS No.19994636 [Report]
>>19994622
sounds like sage attention problem....

>>19987611
the more info you give, the better. Did you do a fresh install of the pack or did you just download the workflows? what rig, what workflow, where does the error happen....
Anonymous No.19994651 [Report] >>19994693
>>19994546

how did you make this?

>>19992539
Anonymous No.19994663 [Report] >>19994669
fresh install, 2080ti
happens at the highnoisesampler, nothing solves it. I enabled blockswap in case it's memory related but it doesn't fix it
1st.AId.bag !!GFB1W2jC9WS No.19994669 [Report] >>19994700
>>19994663
tried disabling patch sage nodes?
Anonymous No.19994693 [Report] >>19994749
>>19994651
The answer is in the post you replied to. Google is your friend.
Anonymous No.19994700 [Report] >>19994711
>>19994669
That solved the initial problem, however now im getting this instead
UnetLoaderGGUF
expected str, bytes or os.PathLike object, not NoneType
1st.AId.bag !!GFB1W2jC9WS No.19994711 [Report] >>19994725
>>19994700
RE-SELECT EVERY MODEL/VAE/CLIP/LORA EVERYTIME YOU OPEN SOME NEW WORKFLOW FIRST TIME, SO THAT THE FILES POINT TO YOUR COMPUTER
Anonymous No.19994725 [Report] >>19994729 >>19994741 >>19994759
>>19994711
Lets assume I'm retarded how would you reselect every model?
1st.AId.bag !!GFB1W2jC9WS No.19994729 [Report] >>19994734
>>19994725
first of all

have you downloaded the models/vae/clips/lightx loras needed?

Did you read these post?
>>19987608
>>19987611
Anonymous No.19994734 [Report] >>19994741
>>19994729
yeah I followed the instructions on those posts, downloaded everything the instructions laid out
Anonymous No.19994741 [Report]
>>19994734
>>19994725

Go through each and every node containing a dropdown selector, take note of what is already selected, then open the dropdown and choose the same option again.
Anonymous No.19994749 [Report]
>>19994693

no you gigantic faggot. i want you to spoonfeed me you retard
1st.AId.bag !!GFB1W2jC9WS No.19994759 [Report] >>19994762
>>19994725
https://screenrec.com/share/saYTP2OLBk
Anonymous No.19994762 [Report] >>19994766
>>19994759
Yeah kk, seems to be somewhat solved for now. cheers,
on a side note sage seems to want me to download pytouch. is that necessary?
1st.AId.bag !!GFB1W2jC9WS No.19994766 [Report]
>>19994762
i assume you mean pytorch.....it is installed. when installing sage it autodownloads and installs the needed files.....im not sure if it works with a 2080ti.
Anonymous No.19994808 [Report]
Just curious why does the video look so blurry when using pisswizards workflow?
Anonymous No.19995442 [Report] >>19995759 >>19995775 >>19995909
Anyone know the lora that's used here?
Anonymous No.19995759 [Report] >>19995775
>>19995442
ive been looking for this LORA as well but with no success, please help wizards!!
Anonymous No.19995775 [Report] >>19996032
>>19995442
>>19995759

>>19992608
Anonymous No.19995862 [Report] >>19996145
How do we just upscale a image only
Anonymous No.19995909 [Report]
>>19995442
Gott in himmel...
Anonymous No.19995956 [Report] >>19995968 >>19996005
>>19988203
I think I managed to install Zluda.

I'm a TOTAL noob. Before I try other steps, is there a folder I should copy, something I should do?

The version I installed is not portable.

Random image to get more visibility.
Anonymous No.19995968 [Report] >>19996004
>>19995956
As in. I have a fresh install.

I can see in the instructions that

>Already got old install?
>Backup (cut) models folder somewhere (pref same drive's root where your comfy is)
>del ComfyUI_windows_portable folder
>read how section above
>paste models folders into your fresh comfyui portable

But certainly that won't apply to me.

I just want to make lewd pictures and vids. Both generation prompts as well as from existing images.

Help me frens.
>Replaced ReActor node with NSFW one

wat?
Anonymous No.19996004 [Report]
>>19995968
Depends if you used First Aid's install files to help.

If it is a fresh install, you have nothing downloaded anyway; download the files and move them into the appropriate folders in your non-portable install.

If you were moving from portable, and had downloaded models and loras already, instead of redownloading them, you could just cut your models folder and paste it into your new install.

You MIGHT be able to do this with custom nodes, but eh, just reinstall all your nodes through the manager, better that way.

If that doesn't apply to you, just download the files and place them in the appropriate folders.
1st.AId.bag !!GFB1W2jC9WS No.19996005 [Report] >>19996846
>>19995994
a 5060 should not crash/oom with those resos....post an image of your whole workflow, maybe there is some other issue

>>19995956
if you got zluda installed.....don't download the pack. start with:

https://github.com/Comfy-Org/ComfyUI-Manager

download it as a zip and unpack it into your comfyui-zluda-dir/custom_nodes/ directory. Next time you open comfyui, you'll have the manager at the top right. With it you can install all the missing nodes needed by the workflow you are using
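The download-and-unzip step can also be scripted; a sketch, where the GitHub archive URL pattern is an assumption (a plain `git clone` into custom_nodes works just as well):

```python
import io
import urllib.request
import zipfile
from pathlib import Path

# Assumed GitHub zip-archive URL for the repo's default branch.
MANAGER_ZIP = "https://github.com/Comfy-Org/ComfyUI-Manager/archive/refs/heads/main.zip"

def install_from_zip(custom_nodes_dir, url=MANAGER_ZIP):
    """Download a node pack as a zip and unpack it into custom_nodes,
    mirroring the manual download-and-extract step."""
    dest = Path(custom_nodes_dir)
    dest.mkdir(parents=True, exist_ok=True)
    data = urllib.request.urlopen(url).read()
    zipfile.ZipFile(io.BytesIO(data)).extractall(dest)
    return dest
```

Note the extracted folder will carry a `-main` suffix; comfy loads it fine, but you can rename it to match the repo name.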
Anonymous No.19996022 [Report]
>>19987754
thank you for your help.
Anonymous No.19996032 [Report] >>19996216
>>19995775
Is it really? This gen always has the subject strip consistently and their nipples and groin always look good too. And they always end with only their knees up and feet down. (Although I realise the clip could be cut short before the subject puts their feet above their head too.) The pretzel pose lora doesn't always strip them for me or bugs out and fuses their panties with their groin.
Anonymous No.19996033 [Report] >>19996139
Friends, I need help. Do you have any information about which lora we use to do this? Can you write what a prompt is?
1st.AId.bag !!GFB1W2jC9WS No.19996139 [Report] >>19996141 >>19996159
>>19996033
there are multiple ways to do this.....but if the woman spreads her ass in the video...searching "ass spread" on civitai would be a nice first step to try. As for prompting...unless there are specific keywords that must be used to activate the lora, just type what you want to see.

let's not turn this thread into "name this lora for me"
1st.AId.bag !!GFB1W2jC9WS No.19996141 [Report] >>19996200
>>19996139
1st.AId.bag !!GFB1W2jC9WS No.19996145 [Report] >>19997867
>>19995862
this would be the easiest and fastest way.....the alternative is to use a big sdxl/flux/wan model and "re-create" the whole image
Anonymous No.19996159 [Report] >>19996162
>>19996139
Thanks for that, are you using 12 and 6 steps?
1st.AId.bag !!GFB1W2jC9WS No.19996162 [Report] >>19996226
>>19996159
just 6 (1+5)....lightx in use
Anonymous No.19996200 [Report] >>19996202
>>19996141
I am getting an error. Could you please share your workflow with me if possible?
1st.AId.bag !!GFB1W2jC9WS No.19996202 [Report] >>19996211 >>19996524
>>19996200
...
rNdm No.19996211 [Report] >>19996214
>>19996202
you should really get money for this =D
1st.AId.bag !!GFB1W2jC9WS No.19996214 [Report] >>19996223
>>19996211
......i know. This seemed a good idea once. Now i get people standing in a pool asking where they can swim =)
Anonymous No.19996216 [Report]
>>19996032
I realized you can just turn down the weight. I am dumb
rNdm No.19996223 [Report]
>>19996214
over and over again the same questions people apparently can't read
Anonymous No.19996226 [Report] >>19996233 >>19996305 >>19996315
>>19996162
Cool, thanks again. I'm using the original app install and the 1.11 workflow. Just tried the new 1.13 with split sigmas, running my 1st gen. Question: how do people download the new app from mega when it is 6.5gb? I've tried before to dl more than 5gb per day and it just won't work for me.
1st.AId.bag !!GFB1W2jC9WS No.19996233 [Report]
>>19996226
there is a download quota.....
https://mega.io/desktop
you can paste the link in the app....it gets 5gb, and resumes as soon as the cooldown is off
Anonymous No.19996305 [Report] >>19996314 >>19996315
>>19996226
last thread First Aid provided an alt download, if it is still valid go find it, it didn't have that limit.
1st.AId.bag !!GFB1W2jC9WS No.19996314 [Report]
>>19996305
thats for the old pack, and i think that anon still has it.

dunno if it's even needed to get the new pack if you got the old one and sage installed, just update comfyui and then update all custom nodes, should work ok.......but then again, nothing works that easily in comfy, most likely something breaks...thats why i do a fresh install from time to time
Anonymous No.19996315 [Report]
>>19996226
>>19996305
https://archived.moe/r/thread/19938024/#q19951862
Anonymous No.19996395 [Report] >>19996459
Can you add an input image crop or expand to 480 or 720 like in the kj guide on rentry https://files.catbox.moe/00boca.json
This is the wf from it.
Anonymous No.19996398 [Report]
This is the old 2.1 guide mentioned and updated with lightx loras.
https://rentry.org/wan21kjguide
I will check out your 2.1 wf and see if it's better because my 8gb 3050 is giving poor gens
1st.AId.bag !!GFB1W2jC9WS No.19996459 [Report]
>>19996395
seems like some old wan 2.1 workflow at a fast glance....i dunno what you think it would do? My workflow resizes the input as well before sampling (so no one goes and tries to make a video from an 8k image and then comes here to ask why it gave them errors)

lowering the resolution resolves memory issues and runs faster, sure. You can move the slider in my workflow to 480....it does a resize (not cropping), changing the other dimension to the nearest multiple of 16 while keeping proportions.
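That resize-to-nearest-multiple-of-16 behavior can be sketched in a few lines; a minimal sketch, and the actual node's rounding rule may differ slightly:

```python
def resize_for_sampling(width, height, target_short=480, multiple=16):
    """Scale so the shorter side lands near target_short, then snap
    both dimensions to the nearest multiple of 16, keeping proportions
    (samplers want dimensions divisible by 16)."""
    scale = target_short / min(width, height)

    def snap(x):
        return max(multiple, round(x * scale / multiple) * multiple)

    return snap(width), snap(height)

# e.g. a 1920x1080 input becomes 848x480
```

This is also why the "other dimension" isn't exactly proportional: it gets nudged to the closest 16-pixel step.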
Anonymous No.19996524 [Report] >>19996579
>>19996202
Can you please share your workflow... what do you think they are currently using
Anonymous No.19996579 [Report] >>19996628
>>19996524
Anonymous No.19996628 [Report]
>>19996579
I worded it incorrectly, I meant what do they think they are already using
Anonymous No.19996641 [Report] >>19996644
So whats the best face swapper we're using for video? Or are we just face swapping images and then animating the results with LORAs?
Anonymous No.19996644 [Report]
>>19996641
VACE if you got the power.
>just face swapping images and then animating the results with LORAs
Pretty much yeah.

You could try with Face Reactor/Face Fusion and a sample video, but results are middling at best
Anonymous No.19996846 [Report] >>19997723
>>19996005
Alright. I'll do this and report back.

Do I need to do something to "remove" the NSFW filters or something like that?
Anonymous No.19997492 [Report] >>19997508
>>19992539

what prompt or lora is being used for this?? i have that workflow loaded, it's kind of confusing with the green circle thing
Anonymous No.19997508 [Report] >>19997900 >>19997975
>>19997492
You don't need a prompt or lora, you just need to provide a reference video.
1st.AId.bag !!GFB1W2jC9WS No.19997723 [Report] >>20001214
>>19996846
not really....that is for reactor node (aka faceswap). Not used in my wan workflows
Anonymous No.19997776 [Report] >>19997867
What type of workflow would be needed in order to upscale video
Anonymous No.19997867 [Report] >>19997903
>>19997776
basically same as this >>19996145
but you load video as input and use video combine node as output
Anonymous No.19997900 [Report]
>>19997508
How do i add a reference video
1st.AId.bag !!GFB1W2jC9WS No.19997903 [Report] >>20009046
>>19997867
Anonymous No.19997975 [Report]
>>19997508

ohhh. that makes more sense. thank you
Anonymous No.19998199 [Report] >>19998843
https://github.com/kijai/ComfyUI-WanVideoWrapper/issues/998

This says to use the 2.2 lightning loras at 1.0 strength instead of .75 like the new wf, also I've just enabled sage attn for high noise to test it (default off in the new wf).
1st.AId.bag !!GFB1W2jC9WS No.19998843 [Report] >>19999454
>>19998199
What workflow are you talking about
Anonymous No.19999014 [Report]
1.13
this workflow is from 1977 on page 34 of this article, not really related but looks like 1970s comfyUI
Anonymous No.19999018 [Report]
Forgot link, it mentions a character and video generator https://vintageapple.org/byte/pdf/197705_Byte_Magazine_Vol_02-05_Interfacing.pdf
Anonymous No.19999454 [Report] >>20000309
>>19998843

hey bag..do you know how to convert anime/cartoon into realistic? i used to do it in Forge UI, but have since been using comfyui, so i forget the process.
Anonymous No.19999964 [Report]
What's the problem with non fp16 loras? Do you think they're contributing to crashes? I'm thinking it's because I'm overclocking my GPU
1st.AId.bag !!GFB1W2jC9WS No.20000309 [Report] >>20003044
>>19999454
something like this?....its just controlnet plus prompt
Anonymous No.20001214 [Report] >>20001228 >>20001230 >>20001255 >>20001296 >>20001300 >>20001305
>>19997723
Okay! Managed to "start" ComfyUI and I create absolute shit images. But it's a start.

I'll try to download the essentials.
Do I even try to install Sage 1.0, Sage 2.0? What are those?

I understand a bit more. What are Checkpoints, what is a loras and so on.

I want to start with images, for the time being.

I just downloaded the first MEGA. I'll try a bit and then continue.

Thank you SO Much man. You're the absolute real MVP around here.
Anonymous No.20001223 [Report] >>20001228 >>20001230
>>19989931
>>19990263
Did you manage to have something working?

I want to start with getting an image. Giving it a sample image, a portrait and being able to modify it into something lewd.

I kinda sorta already managed something out of what 1stAIdbag and here (https://rentry.co/RealisticAI) did, but I need more training.

I guess little by little.
Anonymous No.20001228 [Report] >>20001230
>>20001214
>>20001223

Finally. My main goal is to make fucking images, fucking vids. Penetration.

Blowjobs, moneyshot/bukkake and other simply is not my objective.

Given this, is there something I can "focus" on?
Anonymous No.20001230 [Report]
>>20001214
>>20001223
>>20001228
Holy shit...
I do type like a reddit imbecile
Anonymous No.20001255 [Report]
>>20001214
Keep the thread informed. I am just starting out and learning too. Figuring out key words, saving images and combining them and such, it's a fun process learning it all.
Anonymous No.20001296 [Report] >>20001302
>>20001214
Sage is an accelerator that makes generation go faster. If your goal is strictly images, it's not needed at this point, because images are small. Also, if you really did read that Rentry, there are alternatives, like the DMD lora.

Your message is unclear. You understand a bit more or WANT to understand a bit more?

For transforming an existing image to a lewd image, you are looking at img2img (possibly txt2img with controlnet or ipadapter or other img based modifier.)
Anonymous No.20001300 [Report]
>>20001214
The only person I have seen do consistent img modification is here.

https://archived.moe/r/search/text/generic/

They switch between inpainting, where you paint over a portion of a picture, write a prompt, and the AI tries to make the changes, and ipadapter, which generates an image based on a text prompt and an image prompt (because an image is worth a thousand words)

I believe Comfy has sample inpainting workflows, and that guy has shared workflows in the past if you search the archives.

But this is where models come in. The model is the dictionary your AI uses. Different models react differently to different inputs. If you are doing inpainting, there are specific inpainting models. For image generation, normal models.

For image generation, the size of the latent image/the size of image you want to create is key. The AIs are trained on specific dimensions, so read about your chosen model. A good guide is just using SDXL latent sizes, google that phrase.

Which is also a point, different models mean different setups. SD1.5 vs SDXL vs QWEN vs Flux, all have slightly different settings.
Anonymous No.20001302 [Report] >>20001309
>>20001296
Sorry If I'm not making sense.

At the moment I'm only interested in images.
I'm able to create lewd images from tutorials I patched together here and there (see image attached), but I'm very interested in taking a base image and keeping the face of the person from the base image in the generated output.

Is there a workflow or specific models/lora/checkpoint i should use for this end?
Anonymous No.20001305 [Report]
>>20001214
the easiest source of models, workflows, and loras is civitai.com, but there are other places. Make an account to access nsfw content.

Loras are specific dictionaries that work on top of a model. Some models might not have enough reference material on what you want, for example, tentacles. By applying a lora, you are adding pages to a dictionary to help it gen. They are activated by key words, read up on the specific lora you are using. Ensure your lora is designed to be used with your model (there are flux loras, sdxl models, etc...)

Loras can also be weighted, like prompts. Higher weight means more emphasis on the lora vs your prompt. Again, the documentation with a lora will generally suggest a weight.
Anonymous No.20001309 [Report] >>20001322
>>20001302
You probably want this
https://archived.moe/r/thread/19988780/

moreso than this
>>20000008

I assume?
AMD.newb No.20001322 [Report] >>20001341 >>20001342
>>20001309
Kinda sorta.

I want to create images with the face I provide.
Later on as I'm more experienced, videos of those images.

Is there a good tutorial for porn image generation? What should I include, which loras and so on?

Thanks for all of this btw.
1st.AId.bag !!GFB1W2jC9WS No.20001341 [Report] >>20001440
>>20001322
couple options;
1) Use some ipadapter workflow. The good thing about this is that it will transfer more than just a face, like hair, clothing, and style in general into your nsfw image. The downside is that the actual faceswap is pretty meh
2) use reActor node on your nsfw image. Faceswap looks better but has some limitations in blowjob images.
3) use external ai program like facefusion, works on pics and vids

reactor and facefusion are sfw, but those limitations can be bypassed by editing some files.

looking at your images, you might have the wrong output resolution (latent) or wrong sampler as the faces/eyes look kinda fucked up. Every model (checkpoint) has its sweet spot latent sizes and sampler settings.
Anonymous No.20001342 [Report] >>20001440
>>20001322
if that's your gen, you're obviously literate, way further along than some and not a retard. I guess you did get AMD working.

I am commuting to work and am of little practical aid and have to go be a productive member of society. But different facial swap aids mentioned on /r/ (I don't know what those Realistic Parody AI blokes use):

1) Making your own model of a person (never fiddled with that, heard it takes real processing power, but again, no idea. The rentry or thread has a link to a repository)

2) Facereactor (requires nsfw patch and for some can be problematic to install, read the github)
https://github.com/Gourieff/ComfyUI-ReActor

3) Facefusion (never used, no data)
https://github.com/facefusion/facefusion

4) Ipadapter (read the github, they have sample workflows)
https://github.com/comfyorg/comfyui-ipadapter
AMD.newb No.20001440 [Report] >>20001513 >>20001533
>>20001341
>>20001342

Thanks for your answer.

I'm doing very basic stuff, figuring it out as I go.
I understand that there's way more I can do and I'm finally getting the hang of it all.

Making videos would be pie in the sky ideal, but I know the limitations of my time and my GPU.
I have a 12GB Radeon Rx6700 XT.

But yeah, faces is my goal even if just pictures. Imma look at the options you've both given me.

And yeah, those are my generations.

About my workflow.

I've gotten some models already and I'm running some generations with the same caption. I know I can adjust things here and there.

For the thing I'm doing, expecting the generation to actually try to do the caption. What am I missing? What else could I adjust?
Anonymous No.20001513 [Report] >>20001758
>>20001440
1) I do not recognize that model, so I don't know what specific type of image generation you are doing. I use SD1.5 and SDXL, so my advice may not be fully applicable.

2) Reference prompting, it is a skill.
https://www.comflowy.com/basics/prompt
and it can vary depending on the model. It takes practice to see what works and what doesn't. A helpful tool would be WD14 Tagger or another tagger. It uses AI to read an image and extract tags so you can better write prompts or see what the AI sees when looking at an image. There are different taggers that follow different styles (like plain language), but I had trouble getting anything beyond WD14 working.

3) Latent image size matters. 512x512 is a basic start, but add 256 increments and see what happens. For example, a doggystyle photo might gen better in landscape than portrait or square.
https://www.reddit.com/r/StableDiffusion/comments/15c3rf6/sdxl_resolution_cheat_sheet/

4) Reference your sampler options...I don't know.
GENERALLY, more steps, better outcome, but longer gen. I top out at 30 for arbitrary reasons.
Your CFG might be a little high. It is the amount you tell the AI to listen to your prompt (I think that is what you mean when you say caption). Trying a lower value, 3-5, might give the AI more freedom to interpret your prompt.
Beyond that, you can play with some. DMD lora takes CFG 1, 5-8 steps and lcm. A lot of people use Dpmpp2 and 3 with karras. They all have their own flair.
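To put the latent-size point in concrete terms, here's a tiny sketch that picks the closest standard SDXL bucket for a source aspect ratio (the bucket list is the commonly cited ~1 megapixel set; treat it as a guide rather than gospel):

```python
# commonly cited ~1 megapixel SDXL training resolutions
SDXL_BUCKETS = [
    (1024, 1024), (1152, 896), (896, 1152), (1216, 832), (832, 1216),
    (1344, 768), (768, 1344), (1536, 640), (640, 1536),
]

def nearest_bucket(w, h):
    # choose the bucket whose aspect ratio best matches the source image
    ar = w / h
    return min(SDXL_BUCKETS, key=lambda b: abs(b[0] / b[1] - ar))

# a 16:9 landscape source maps to the wide bucket
print(nearest_bucket(1920, 1080))  # → (1344, 768)
```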
1st.AId.bag !!GFB1W2jC9WS No.20001533 [Report] >>20001758 >>20005976
>>20001440
If I remember right, the previous zluda guy got pretty good speed with a card similar to yours, even without sage attention on Wan vids

I don't recognize those models... Are they sdxl or flux? I prefer sdxl over flux. Get biglove v4 checkpoint, my sdxl workflows use that
Anonymous No.20001541 [Report] >>20001553
Anyway to integrate the CPU? it doesn't seem to do much
1st.AId.bag !!GFB1W2jC9WS No.20001553 [Report]
>>20001541
No it won't....cpu plays a very little part in comfy, or in AI gen in general. You can force comfy to use cpu instead of GPU, but gens take several hours instead of minutes
Anonymous No.20001706 [Report] >>20001795
Tried a memory dumper app, still getting crashes, gonna switch from regular SSD to nvme SSD like this thread mentions https://www.reddit.com/r/comfyui/comments/1j249lw/is_there_a_trick_using_wan_with_a_3060_and_not/
AMD.newb No.20001758 [Report] >>20001787 >>20001792 >>20001839
>>20001513
>>20001533
I'm beginning to understand the importance of loras.

What are the "checkpoint" golden standards? The ones I should give it a check?

Can I "use" several loras at the same time?
As previously stated, I'm interested in fucking (not blowjobs, only when penetration is involved), but also in orgy, so I want to explore doggy on one side, reverse cowgirl on the other.
Anonymous No.20001787 [Report] >>20001798
>>20001758
I think if you look at civitai archive and sort by most popular they are all nsfw sd 1.5 checkpoints w examples
Anonymous No.20001792 [Report]
>>20001758
You can use several loras, but their interactions can vary. Again, not at home right now, so I cannot show examples, but for example, when I wanted to make a shemale, all the transsexual/futanari loras I used sucked, but when I paired them with a penis lora, cash money. Many of the vid wizzes see this as well.
An example of a bad combo is when I use the tentacle lora I like with a cum lora; it creates quite the abomination.
Loras will help with sex acts, but what will probably help more is looking into controlnet. Controlnet can take the pose or shape of an image, and then you can gen over it. Like ipadapter. But there are many different types of controlnet.

For checkpoints, like FirstAid says, a lot of porn genners use BigLove, BigAsp. Cyberrealistic is a name I have heard as well.
Anonymous No.20001795 [Report] >>20002333
>>20001706
>memory dumper app
wat
>crashes
show error.
>not working
what workflow.


I mean, it is like we can't see your screen, and therefore can't troubleshoot. If only we were on an image board so you could show pictures and we could understand, or at the very least specs.

I am trying to passive-aggressively tell you to read the thread and be more helpful so we can be more helpful, because I gen fine with my 3060 on a regular hdd
Anonymous No.20001798 [Report]
>>20001787
You should also filter by desired model. Personally, 1.5 works better for inpainting, sdxl better for image generation.
Anonymous No.20001827 [Report] >>20001844
I keep crashing at HighNoiseSampler, I've installed everything step by step and reselected all nodes
1st.AId.bag !!GFB1W2jC9WS No.20001839 [Report] >>20003607 >>20004336
>>20001758
I made this for you. You can use an existing nsfw image as reference and input your girl pic for the faceswap. Dunno if this is what you were after. Files you need

Checkpoint (save in ComfyUI/models/checkpoints)
https://civitai.com/models/897413?modelVersionId=1990969

DMD2 speedup LORA (save in ComfyUI/models/loras)
https://civitai.com/models/1608870?modelVersionId=1820946

Controlnet model for SDXL (save in ComfyUI/models/controlnet)
https://huggingface.co/xinsir/controlnet-union-sdxl-1.0/blob/main/diffusion_pytorch_model_promax.safetensors

ReActor thing....
Go to manager in Comfyui
-install reactor 0.6.1-b2 (once installed shut down comfyui)
-download the .py file from https://mega.nz/file/ap4GRRZS#IrFVxSFBeg8d24_8-lwYcmSXgTtVWlGOB4LzJcqq4FY
-paste and replace it in ComfyUI\custom_nodes\comfyui-reactor\scripts

Workflow
https://mega.nz/file/HkRl2JSI#pazk7rUvH3OaKJ02oPFxIf2i4HpeWD80-gIU0HJNlSw
1st.AId.bag !!GFB1W2jC9WS No.20001844 [Report] >>20001853
>>20001827
disable sage nodes and try again
1st.AId.bag !!GFB1W2jC9WS No.20001853 [Report]
>>20001844
also, i dunno your rig specs, but if on low-specs, you could try;
-lowering the output reso
-enable the blockswaps
fraz No.20001866 [Report] >>20001880
How should I set up the weight values of the Loras and total steps? I'm always getting trash results. When I see the Loras on civitai they look fantastic but when I try it sucks.
Here I was trying a lesbian kissing lora and that's the result.
1st.AId.bag !!GFB1W2jC9WS No.20001880 [Report] >>20001900
>>20001866
>If/when you need help....
>A) take a screenshot of your whole workflow (and log console window if possible)
>B) state what workflow are you using
>C) what specs on your rig
Anonymous No.20001884 [Report] >>20001914
i5-10600K CPU @ 4.10GH NVIDIA GeForce RTX 2080 SUPER Total VRAM 8192 MB, total RAM 32670 MB
fraz No.20001900 [Report] >>20001911 >>20001917 >>20002619
>>20001880

Workflow I'm using: 1stAIdbag_WAN2.2_(v1.4)

Ultra 7 265KF (3.90 GHz) - RAM 32,0 GB - RTX 5070Ti 16Gb
1st.AId.bag !!GFB1W2jC9WS No.20001911 [Report]
>>20001900
well....couple issues here
when you are using lightx loras, CFG MUST BE 1.0. On high you can lower the lightx high a bit and add a little to cfg for better movement at the price of quality

BUT IN GENERAL LIGHTX LORAS AT 1.0
CFG VALUE IN SAMPLERS AT 1.0

you are using text to video loras in an image to video workflow.....they work sometimes, but if there is an alternative for i2v, use that

if you randomly input some values into places not knowing what they do.....yes you will get shitty outputs
1st.AId.bag !!GFB1W2jC9WS No.20001914 [Report]
>>20001884
is there any more info in the console window, i just wonder why it shows reconnecting in your screencaps
1st.AId.bag !!GFB1W2jC9WS No.20001917 [Report] >>20001935
>>20001900
also that stepcount......with lightx, 6-8 steps is fine
fraz No.20001935 [Report] >>20001938 >>20001945 >>20001952
>>20001917
I changed the values but that's the result.
fraz No.20001938 [Report] >>20001964
>>20001935
I used the same Lora on TensorArt website and the result is perfect.
1st.AId.bag !!GFB1W2jC9WS No.20001945 [Report]
>>20001935
disable those T2V loras and give it a go.

also try
https://civitai.com/models/1881060?modelVersionId=2186130 (i2v version)
Anonymous No.20001952 [Report]
>>20001935
You may want to also try adjusting the Shift values from 5 to 8 as specific lightning loras require either or. It appears to be a shift issue.
1st.AId.bag !!GFB1W2jC9WS No.20001964 [Report] >>20001969 >>20002003
>>20001938
i found no issues...just use i2v loras, and leave the settings alone if you don't know what you are doing
1st.AId.bag !!GFB1W2jC9WS No.20001969 [Report]
>>20001964
Anonymous No.20001979 [Report] >>20001985
Here's the console logs bro...I think I might just have to delete and start everything again
1st.AId.bag !!GFB1W2jC9WS No.20001985 [Report] >>20001996
>>20001979
might be good idea....backup your models folder before you do.....but try enabling blockswaps before you do ^^
1st.AId.bag !!GFB1W2jC9WS No.20001996 [Report]
>>20001985
although i don't think you would even need blockswap with that ram and card
fraz No.20002003 [Report] >>20002008 >>20002020
>>20001964
I downloaded the I2V versions (i didn't notice there were 2 different versions) but I get the same bad result. Could the issue be related to image size or fps?
fraz No.20002008 [Report] >>20002017 >>20002023
>>20002003
Anonymous No.20002017 [Report] >>20002023
>>20002008
I'm even more convinced it's a shift issue. You really should try changing the shift from 5 to 8 in both spots and update here to confirm I am correct.
1st.AId.bag !!GFB1W2jC9WS No.20002020 [Report]
>>20002003
try 1.3 workflow (same wf but without the last frame option).....i dunno...but maybe it tries to sample that bypassed last frame into the mix?
Anonymous No.20002023 [Report] >>20002039
>>20002008
>>20002017
fraz No.20002039 [Report] >>20002047 >>20002055
>>20002023
I put them on 8 and that's the result.
Anonymous No.20002047 [Report]
>>20002039
Damn. Thanks for trying, sorry it didn't help
Anonymous No.20002052 [Report] >>20002137 >>20002442 >>20003607 >>20005608 >>20005837
annotated workflow
https://www.mediafire.com/file/q6cpsau5tw04wmn/Generic_Annotated_Ipadapter_and_Reactor.json/file

demo
https://files.catbox.moe/xzwk7r.mp4
1st.AId.bag !!GFB1W2jC9WS No.20002055 [Report] >>20002069 >>20002086 >>20002327
>>20002039
https://huggingface.co/Aitrepreneur/FLX/blob/main/Wan2.2-Lightning_I2V-A14B-4steps-lora_HIGH_fp16.safetensors

https://huggingface.co/Kijai/WanVideo_comfy/blob/d4c3006fda29c47a51d07b7ea77495642cf9359f/Wan22-Lightning/Wan2.2-Lightning_I2V-A14B-4steps-lora_LOW_fp16.safetensors

download the lightxs again into the loras folder (maybe one or both are corrupted)
fraz No.20002069 [Report]
>>20002055
I'm trying workflow 1.3. If it doesn't work i'll try downloading the files you sent
Anonymous No.20002084 [Report] >>20002101
after
[ComfyUI-Manager] All startup tasks have been completed.
(after extracting and running nvidia_gpu) is it done and I have to close it and install vcredist + run? or do I need to wait for something else?
fraz No.20002086 [Report] >>20002107
>>20002055
Using workflow 1.3 gives better results but I can still see some grainy artifacts
1st.AId.bag !!GFB1W2jC9WS No.20002101 [Report] >>20002106
>>20002084
pretty sure you got vcredist installed already. install sage via 1staidbag.bat
Anonymous No.20002106 [Report] >>20002119
>>20002101
I'm sure but if I skipped that part i'm sure you'd've been pissed XD

I also have installed comfy normally and qwen edit models. Is it possible that I have any of the models it's asking here?
1st.AId.bag !!GFB1W2jC9WS No.20002107 [Report] >>20002205
>>20002086
well try getting them lightx loras again......another thing to try is to change sampler/scheduler to euler/simple for a try
1st.AId.bag !!GFB1W2jC9WS No.20002119 [Report] >>20002309 >>20002314
>>20002106
after sage install....get the essentials and one of the Q models.....the ones you backed up, paste them back into the models folder

>installed comfy

are you talking about the desktop version? My pack is portable, you don't need to install anything. Note that the sage installer won't work on desktop comfy
Anonymous No.20002137 [Report] >>20002157 >>20002505 >>20003607 >>20005837
>>20002052
annotated workflow
https://www.mediafire.com/file/z3a8v15tx8f4fdn/Generic_ControlNetUnionWIthNotes.json/file

demo
https://files.catbox.moe/ol4xtw.mp4
Anonymous No.20002157 [Report] >>20003607 >>20005837
>>20002137
alternate, non annotated workflow. Does not include Ipadapter. Less influence from source face.

https://www.mediafire.com/file/h1t2os4lz9riqd6/Generic_Control_Faceswap_Only_no_notes.json/file

no demo
fraz No.20002205 [Report] >>20002271
>>20002107
Downloaded and put them into the models folder. Looks much cleaner now. Thank you.
Just to know, so in general the lightning lora weights should never be changed, right? What is "weight" basically, what does modifying it imply?
Should total steps always be between 8-10? In every case?
If i add multiple loras, should all of their weights remain 1? Can multiple loras be "stacked" and used in the same rendering or could there be some conflicts?
How do i choose the right values?
1st.AId.bag !!GFB1W2jC9WS No.20002271 [Report] >>20003795
>>20002205
On lightX loras......higher weight value = faster generation speed, the downside is that the movement is somewhat limited because cfg is set to "1"; if you lower the weight of the high noise lightx lora, you can add a little bit of cfg in the high pass at the cost of quality but gain a little bit of movement.

To dumb the dual-lora explanation down...HIGHs = the action / movement LOWs = details, textures etc

On lightX you can get a good gen with 6 steps....you don't have to use 6 steps, generally speaking more sampling steps = better quality and precision. It comes down to how much time you wanna put into your gen. Do a test run with 6 steps, then using the same seed do a 26 step run, see if it's worth it quality <-> time

Loras don't "stack" if you mean that their effect is somehow multiplied when using many different loras. Of loras that do the same thing, the one with higher weight usually wins, although they are both loaded and in use in your gen. Example prompt "man enters frame, girl sucks the penis. blowjob deepthroat": you could use oral_insertion @ 1 and jfj-deepthroat @ 0.7 to get a better looking penis on insertion while the actual blowjob/deepthroat action comes from the jfj-deepthroat lora

i rarely use loras at 1.0.....as some of them tend to change faces at full weight, i hover around 0.7-0.95. Choosing the right values.....well, testing and learning
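To picture the high/low split described above, here's a purely illustrative sketch (the field names are invented for the sketch, not actual ComfyUI node properties):

```python
# illustrative only: how the WAN 2.2 dual pass divides the work when
# lightx loras are in play (field names are made up for this sketch)
wan22_passes = {
    "high_noise": {            # drives overall motion / "the action"
        "lightx_weight": 1.0,  # lower this + raise cfg to trade quality for movement
        "cfg": 1.0,            # must stay 1.0 at full lightx weight
    },
    "low_noise": {             # refines details and textures
        "lightx_weight": 1.0,
        "cfg": 1.0,
    },
}

# steps are shared across the two passes; 6 is a solid lightx baseline
total_steps = 6
```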
Anonymous No.20002309 [Report]
>>20002119
>are you talking about desktop version?
yes but I did that before I saw this pack
Anonymous No.20002314 [Report] >>20002327
>>20002119
I've now realized that the numbers were options and not steps, and that I could just send 6 and install sage+triton kekw i'm stupid tired
1st.AId.bag !!GFB1W2jC9WS No.20002327 [Report]
>>20002314
reading the instructions helps sometimes......note that the lightx download is broken, get them manually and save them in the lora folder. AND the sage installer won't work on the desktop version, it's only for portable (you can run the workflows without sage, that's not a game-ending issue)

links here
>>20002055
Anonymous No.20002328 [Report]
>reading the instructions helps sometimes

I read them, I'm just stupid and didn't realize that "install sage" meant "there's gonna be many options, be sure to select the one that says install sage"

alas, i'm dumb kek
Anonymous No.20002333 [Report]
>>20001795
Screen goes black, num lock still works on the keyboard, it might be the issue the 5060ti 16gb is known for, so there are no errors to read even in the event logs. The app is mem reduct, I'm gonna keep it. Wow, just tried on the nvme SSD instead of the sata SSD, it seems 30 percent faster. I'm using 5 shift, 12 fps, 4 seconds (it expands to 14 seconds at normal speed, maybe a bit slow mo sometimes), 1.0 lightx2.2, 1.5 and 1.0 cfg, sage on both. Workflow is the last one in OP, the 1.13 wf

See the 4th comment about mem reduct in this thread linked https://www.reddit.com/r/StableDiffusion/comments/1mqh3eb/crashing_on_low_noise_wan_22/
1st.AId.bag !!GFB1W2jC9WS No.20002336 [Report] >>20002360
well you should have all the needed files (and some extras) now and ready to go
Anonymous No.20002343 [Report]
This 24GB card was sitting around since 2023 when I started on SD, and I finally got it running. Though it's so old (the Tesla M40 is CUDA compute capability 5.2, we're on toolkit 13.1 now) that it won't work with torch and sage and stuff.
ImJohnNow No.20002360 [Report]
>>20002336
I mean it's still downloading the essential stuff....
ImJohnNow No.20002437 [Report]
btw do you know of a qwen edit model that doesn't take forever?

I have a 12gb 3060 and via comfy desktop I tried using it and it took like 20mins just to load the model
ImJohnNow No.20002442 [Report]
>>20002052
kek i thought it came with preinstalled nodes
Anonymous No.20002505 [Report] >>20002515 >>20002585
>>20002137
hey quick question, it appears as if the reactor node were missing, and when I press install it says it's already installed. Then again, if I want to add it manually, it doesn't let me. Any way to fix it? Or what am I missing
Anonymous No.20002515 [Report] >>20002548
>>20002505
Find it in /custom-nodes/, delete the folder, then reinstall it. This typically occurs if an install was incomplete. Check the log while installing to confirm completion
Anonymous No.20002523 [Report]
The easier alternative (for me) is using reactor on reforge SD (like auto1111) and parts of deepface labs to split and reintegrate videos with the completed swapped face. I've tried the same thing on comfy a few times and on the apprentice app with no luck. Probably gonna try again with da bag's links and wf above a few posts
Anonymous No.20002535 [Report]
site-packages\torch\cuda\__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
Anonymous No.20002548 [Report] >>20002564 >>20002569 >>20002577 >>20002585
>>20002515

this keeps popping up for me
Anonymous No.20002564 [Report] >>20002577
>>20002548
yeah same here, even after deleting it and reinstalling
Anonymous No.20002569 [Report] >>20002572
>>20002548
If this is after reinstalling, ensure the install was successful in the logs, then follow the prompt, Restart Required. That's not a reload, you'll need to restart the entire instance.
Anonymous No.20002572 [Report] >>20002582 >>20002585
>>20002569

i did that. restarted the whole thing but still giving error after reinstall
1st.AId.bag !!GFB1W2jC9WS No.20002577 [Report] >>20005837
>>20002548
>>20002564
ReActor thing....
Go to manager in Comfyui
-install reactor 0.6.1-b2 (once installed shut down comfyui)
-download the .py file from https://mega.nz/file/ap4GRRZS#IrFVxSFBeg8d24_8-lwYcmSXgTtVWlGOB4LzJcqq4FY
-paste and replace it in ComfyUI\custom_nodes\comfyui-reactor\scripts
Anonymous No.20002582 [Report]
>>20002572
1. Stop the instance
2. Delete any reactor folder in custom-nodes
3. Start instance
4. Install Reactor again through Manager, watch logs for completion
5. Restart instance

That should do it.
Anonymous No.20002585 [Report] >>20002824 >>20005744 >>20005837
>>20002505
>>20002548
>>20002572

Reactor can be a bitch to install. I thought I put it in the notes, but I guess I didn't (maybe it was in another workflow.)

https://github.com/Gourieff/ComfyUI-ReActor?tab=readme-ov-file#installation

follow the steps as written in the github.

I believe I needed to install visual studio AND
follow the steps for "OR if you don't want to install VS or VS C++ BT - follow this steps (sec. I)" to build insightface properly and get it working.
Anonymous No.20002619 [Report]
>>20001900
6 to 8 steps not 20 ffs lmao
Anonymous No.20002824 [Report] >>20002835
>>20002585
pip._vendor.pyproject_hooks._impl.BackendUnavailable: Cannot import 'mesonpy'
ffs
Anonymous No.20002834 [Report]
>>20002831
No worries I just fucking hate that it can never be easy. It's never plug and play. It's always complicated with python
Anonymous No.20002835 [Report] >>20002837
>>20002824

yeah, it's a bitch.

Sorry, it has been a month since my last reinstall, I forget the specific troubleshooting I had to do.

Do you have a 50xx graphics card?

https://www.reddit.com/r/comfyui/comments/1jqxsox/comfyui_manager_cannot_import_mesonpy_and_other/
Anonymous No.20002837 [Report]
>>20002835
nono 3060
Anonymous No.20003004 [Report]
absolute newb to this stuff, was able to get comfyui running but i dont have a nvidia graphics card so i had to do it manually through the celeb AI thread on /b/. can i move the files from this apprentice pack to the folders on my installed comfyui and still have them work? will the workflows still work?
Anonymous No.20003044 [Report] >>20004350
>>20000309

yeah. it was similar to that. how do you make it not so plasticy looking? possible?
Anonymous No.20003607 [Report]
>>20001839

Thanks!
It's asking me some missing stuff, but I'll install them next time I'm working on the computer.

>>20002052
>>20002137
>>20002157

Also thank you for these. I got them all.
I understand I have a lot to learn, but looking at other people's workflows gives me an idea.

---
Is there a workflow I could find to use a base picture and "pornify it"?
For example the way the woman in pic related is leaning and looks already seems to be good for a genration, just put a man behind her, something, something.

Moreover. Is there a good workflow to have an orgy? Two women, several men. It can be animated or something, I can later find ways to make it real (or maybe animated is good enough)
fraz No.20003795 [Report]
>>20002271
Thanks a lot for the explanation. Tell me if i got it right.

Lightx lora weights must be set to 1
All other loras usually set between 0.7-0.95.

If High noise affects action/movement and Low noise the visual quality, it implies that modifying the High weight tells Wan how much to take into consideration the "actions" embedded in the Model, whilst modifying the Low weight tells Wan how much to consider the "appearance/details" of the Model.

What if there were multiple models doing the same thing and with the same weight? Could that generate artifacts/conflicts or does wan simply do its own thing?

Must the weight of the Loras be set the same in both High and Low?

Let's suppose a blowjob scene where I have 2 different Models that do the same job:
1. oral insertion
2. deepthroat
I like the "action" of Model 1 but not the penis details. I don't like action of Model 2 but i like penis details. In this case should I set those values:
1. oral insertion - High (0.95) Low (0.7)
2. deepthroat - High (0.7) Low (0.95)

Is that right?
Anonymous No.20003826 [Report] >>20004316 >>20004350
What's the difference between run_nvidia_gpu_fast
and
run_nvidia_gpu_fast_fp16_accumulation
Anonymous No.20004316 [Report] >>20004350
>>20003826
This was surprisingly annoying to research.

https://blog.comfy.org/p/comfyui-v0-1-x-release-devil-in-the-details-2

Fast is for 40xx GPUs and activates a thing like fp16, but is it fp8?
Anonymous No.20004326 [Report] >>20004350 >>20004502
F:\D\ComfyUI_Apprentice_portable_0360\ComfyUI_windows_portable\python_embeded\Lib\site-packages\sageattention\attn_qk_int8_per_block.py:40:0: error: Failures have been detected while processing an MLIR pass pipeline
F:\D\ComfyUI_Apprentice_portable_0360\ComfyUI_windows_portable\python_embeded\Lib\site-packages\sageattention\attn_qk_int8_per_block.py:40:0: note: Pipeline failed while executing [`TritonGPUAccelerateMatmul` on 'builtin.module' operation]: reproducer generated at `std::errs, please share the reproducer above with Triton project.`
Error running sage attention: PassManager::run failed, using pytorch attention instead. How do I fix this?
AMD.newb No.20004336 [Report] >>20004350 >>20004354
>>20001839
okay, tried running that but I encountered issues with LayerFilter: Sharp Soft.

So I removed it.

The results are... progress. Not quite what I want because the image still could be improved.

How could I do this?
is it the filter I "replaced" ?
1st.AId.bag !!GFB1W2jC9WS No.20004350 [Report]
>>20004336
yes, that is way too sharp, remove the filter

>>20003826
>>20004316
fast_fp16 is a little faster on models that use floating point 16....like WAN and SDXL models, but it also tends to eat some quality away. I haven't noticed any major speed boost

>>20004326
is sage installed? if so, set it to "auto"; if that doesn't help --> disable the node

>>20003044
get the workflow above ^^, it uses an SDXL model so it doesn't have so much of that "plastic flux" look
1st.AId.bag !!GFB1W2jC9WS No.20004354 [Report]
>>20004336
next step is to animate those pics ;)
1st.AId.bag !!GFB1W2jC9WS No.20004441 [Report]
Using same seed, 8-steps;
"Nude black man enter the frame from the left side. Woman sucks the penis. Blowjob, deepthroat, man's upper body stays out of the frame. A beautiful woman is performing oral sex on a huge black penis"

1. Oral insertion 1.0 high&low
2. jfj-deepthroat 1.0 high&low
3. Oral insertion 0.5 & jfj-deepthroat 0.5 high&low
4. Oral insertion 0.5 & BBC blowjob 1.0 high&low

as you can see the difference between the actions and cock quality in 1-2. 1-3 completely ignore the black cock part, as those oral insertion/jfj-deepthroat loras don't have that many BBC in the dataset they were trained on. Therefore I added the BBC blowjob lora (which doesn't have good insertion) into the mix and kept its weight above the oral insertion one (otherwise insertion would win --> white cock)

I'm not going to give any golden values to use, nor "the best prompt to do X", as there are no such things. Lora settings, like the other settings, the seed and the image itself, will dictate how the gen turns out. The faster you accept that you will get more not-so-great outputs than good ones, the better. Most of the fun comes from trying stuff out, mixing and matching shit together.
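The high/low weighting idea from the example above can be sketched as plain data. This is just an illustration of the concept, not ComfyUI's or the WAN-Wrapper's actual API, and the names/values are made-up examples.

```python
# Illustrative sketch only: give an "action" LoRA more weight on the
# high-noise expert and a "detail" LoRA more weight on the low-noise
# expert. LoRA names and weights are made-up examples.

def build_lora_stack(loras):
    """Split (name, high_weight, low_weight) specs into the two weight
    maps used by the high-noise (motion) and low-noise (detail) passes."""
    high = {name: hw for name, hw, lw in loras}
    low = {name: lw for name, hw, lw in loras}
    return high, low

stack = [
    ("oral_insertion", 0.95, 0.70),  # like its action, not its detail
    ("deepthroat",     0.70, 0.95),  # like its detail, not its action
    ("lightx",         1.00, 1.00),  # speed LoRAs stay at 1.0 on both
]
high, low = build_lora_stack(stack)
```

The point is only that each LoRA carries an independent weight per expert, so you can trade action against detail per LoRA.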
Anonymous No.20004502 [Report] >>20004534
>>20004326
>is sage installed?, if so, set it to "auto", if no help --> disable the node
Tried all these steps and still getting that error
1st.AId.bag !!GFB1W2jC9WS No.20004534 [Report]
>>20004502
Anonymous No.20004937 [Report] >>20004939
qwen edit 4 steps lora on a 3060 is taking 300s to generate; is that normal? what can I do to make it faster?

I changed to gguf-5_s but it seems like it's not compatible with 4 steps? even if it is, it isn't faster than 200s
Anonymous No.20004939 [Report]
>>20004937
qwen is big and takes time; those times seem to match my experience when I dabbled. Also 3060.

Nature of the beast it is I figure.
Anonymous No.20004956 [Report]
Hi wizards, would a 3060 12 gb be better than a 4060 8gb for generation in general or not? Thank you.
Anonymous No.20005518 [Report]
Up
Anonymous No.20005601 [Report]
AMD.newb No.20005608 [Report] >>20005610 >>20005718 >>20005731 >>20005731
>>20002052
This one worked. Not perfect of course, but so far it's the best one I've found.
I had to update and then change a lot of the workflow but I got it done on a Radeon AMD.

What are good Loras for SEX positions. Penetration, anal sex, plowcam, full nelson, reverse cowgirl. The whole thing.

Anyone got a good Checkpoint -> Lora combo?

----
Expanding on this workflow or these ideas. How could I do two women? side by side, men behind them? How could I do the face thing?

It seems that there are a bazillion anime loras.
AMD.newb No.20005610 [Report] >>20005741
>>20005608
forgot to add results
Anonymous No.20005718 [Report] >>20005744 >>20005837 >>20006081
>>20005608

how did you get reactor to work? shit still won't install for some reason
Anonymous No.20005724 [Report]
I think I can make a webm installing reactor in under 1 to 2 minutes using reforge, deepfacelabs is always an alternative too
Anonymous No.20005731 [Report] >>20005741 >>20006081
>>20005608
Want to share your modified workflow? Curious to see what changes you felt were necessary and if there are potential improvements.

>>20005608
There is no definite lora list, go on civitai, filter by SDXL and LORA and start scrolling/searching.

Two women CAN be done; I usually just close-crop two faces as an input and cross my fingers.
You need to change the input and source face index on reactor from 0 to 0,1 though.

Using one of the ControlNet versions and an input image with two women would be more reliable, though.
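A rough sketch of how a comma-separated index field like that "0,1" can be read: source face N gets swapped onto input (target) face N. The parsing here is an assumption for illustration, not ReActor's actual code.

```python
# Sketch (assumption, not ReActor's code): interpret comma-separated
# face-index fields so that source face N maps onto input face N.

def parse_indices(field: str):
    """Turn a field like "0,1" into a list of integer face indices."""
    return [int(i) for i in field.split(",") if i.strip()]

def pair_faces(source_field: str, input_field: str):
    """Pair each source-face index with the matching input-face index."""
    return list(zip(parse_indices(source_field), parse_indices(input_field)))

print(pair_faces("0,1", "0,1"))  # two source faces onto two target faces
```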
Anonymous No.20005741 [Report] >>20006081
>>20005610
>>20005731
Anonymous No.20005744 [Report]
>>20005718
Did you try this
>>20002585
Anonymous No.20005837 [Report]
>>20005718
Also,

1) you might be getting errors earlier in the install, like Cython not being installed. Try googling "comfyui reactor No module named 'Cython'"

2) I just went to my second ComfyUI, and yeah, it didn't have Reactor working. Going through the steps at >>20002585 didn't help.

3) https://www.reddit.com/r/comfyui/comments/15mz2sv/reactor_node_cython_and_insightface_error/
Go down to Far-Ship-4187's post. I just copied and pasted my Lib and Include folders from my working install to the not-working one, and bam, it started working. BUT, you don't have a working install, and I am not troubleshooting to figure out which folders of the 6-10gb copy were the key ones.

4) This guy has a premade portable that should have Reactor already installed and good to go, just like how First Aid's is g2g for video.
https://github.com/YanWenKun/ComfyUI-Windows-Portable
Yeah, you might have to upgrade, uninstall certain unwanted nodes, and install others, but I downloaded it myself and Reactor works BUT DOES NOT have the NSFW patch installed, so you'll need to >>20002577

It also has all the nodes necessary for
>>20002052
>>20002137
>>20002157
Save for one, so bonus.
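The folder-copy fix from Far-Ship-4187's post can be scripted. A hedged sketch with placeholder paths (the actual key folders aren't confirmed; adjust to your own installs):

```python
# Sketch of the "copy Lib and Include from a working embedded-python
# install into the broken one" fix. Paths and folder names are
# placeholders, not a confirmed list of the key folders.
import shutil
from pathlib import Path

def copy_fix(working: Path, broken: Path, folders=("Lib", "Include")):
    for name in folders:
        src = working / name
        if src.is_dir():
            # merge into the broken install, overwriting clashes
            shutil.copytree(src, broken / name, dirs_exist_ok=True)

# e.g. copy_fix(Path(r"C:\good\python_embeded"), Path(r"C:\bad\python_embeded"))
```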
Anonymous No.20005976 [Report]
>>20001533
Newbie here with zluda and a Radeon 7900XTX; I'm 10 days into this.

with your latest workflow or the pisswizard one, I just disable sage attention and the first gen takes between 600 and 800 sec.
after that it takes between 250 and 500 seconds

I tried upgrading quality with q8 models, the WAN model in fp16, and the text encoder upgraded to xxl fp16...
It takes a bit longer to generate, between 900 and 1000 sec.
Anonymous No.20006022 [Report] >>20006026 >>20006193
I've toggled off the sage nodes as well now, but no luck
Anonymous No.20006026 [Report] >>20006249
>>20006022
Try Sage Attention 1?
https://github.com/thu-ml/SageAttention/issues/234
AMD.newb No.20006081 [Report]
>>20005718
I wish I could help, but frankly my Comfy installation has been a Frankenstein of packages and folders. I've no idea what I've done.

I did (I think) follow 1st.AId.bag's instructions earlier in this thread.

>>20005731
>>20005741
Getting one or two women into an orgy, and getting the orgy participants to not look like Eldritch abominations, is my next immediate goal. Getting the correct faces on the women as well, of course.

I'm thinking about maybe doing the orgy generation in anime, given that the loras are way more mature and robust and then using a guide to make them "realistic" and attach the faces then.

After I manage to get several good images, I'll graduate and start animating these pictures.

I'll share my workflow later. For sure.

About the Eldritch abominations: does anyone have a good theory or resource regarding prompts for lewd/porn stuff? Do the parameters change based on the Checkpoint or Loras?
1st.AId.bag !!GFB1W2jC9WS No.20006193 [Report]
>>20006022
do you have an AMD card?
Anonymous No.20006249 [Report] >>20006276
>>20006026
I installed Sage Attention 1 and got a new error this time

I have an i5-10600K CPU @ 4.10GHz, an NVIDIA GeForce RTX 2080 SUPER, total VRAM 8192 MB, total RAM 32670 MB
1st.AId.bag !!GFB1W2jC9WS No.20006276 [Report] >>20006437
>>20006249
did you get my package or are you using some old comfyui portable, just loading my workflows?
Anonymous No.20006301 [Report]
Anyone got a good face swapping image/video workflow that uses VACE? I got a 5090 so it should handle it fine.
Anonymous No.20006437 [Report] >>20006453
>>20006276
Downloaded everything off this thread. I know nothing about any of this, just followed the steps shown in the thread.
1st.AId.bag !!GFB1W2jC9WS No.20006453 [Report] >>20006871
>>20006437
have you tried running it via
run_nvidia_gpu.bat (and disable those sage nodes)

NOT run_nvidia_gpu_sageattention.bat
Anonymous No.20006871 [Report] >>20008608
>>20006453

Omg that fixed everything. Thanks for your help and patience!
Anonymous No.20007557 [Report] >>20007619
I'm using pisswizard's workflow and it works perfectly, but how do I load the other loras I've placed in the models/loras folder?

ComfyUI_windows_portable\ComfyUI\models\loras

Load LoRA node seems to only be able to choose between Wan2.2-Lightning_I2V-A14B-4steps-lora_LOW_fp16, and Wan2.2-Lightning_I2V-A14B-4steps-lora_HIGH_fp16.
Anonymous No.20007619 [Report] >>20007639
>>20007557
Did you refresh your page after placing the loras in the folder?
Anonymous No.20007639 [Report]
>>20007619
Yeah that did it... excuse my dumb ass.
noob wannabe No.20008115 [Report] >>20008245
Anybody got a workflow example to work from for inpainting with loras? Particularly lip/breast enhancement etc
Anonymous No.20008245 [Report]
>>20008115
https://www.mediafire.com/file/08az8rpqj2i4mv1/generic.json/file
1st.AId.bag !!GFB1W2jC9WS No.20008608 [Report]
>>20006871
I still think you should get sage working with your card.....but good that you got it running anyway
Anonymous No.20008986 [Report] >>20009046
What is a good workflow/lora to use in order to upscale old video? Sorry if I said any of that incorrectly.
1st.AId.bag !!GFB1W2jC9WS No.20009046 [Report]
>>20008986
simple way to do it >>19997903
Anonymous No.20009865 [Report] >>20009894
I've been trying to use the QwenEditPlus workflow, and I can kind of get it to work, but I don't understand the resolutions, like how the resolution of your uploaded images ties to the "dimensions" in the "Empty Latent Image Presets" node. It always ends up generating a much larger image than the base image with basically hallucinations outside of the original frame of the base image, or if I've really fucked it up, it generates something super zoomed in. Any tips you can provide for using that workflow cleanly? I've mostly been using it to swap clothing.
Anonymous No.20009894 [Report] >>20009912
>>20009865
>QwenEditPlus workflow
if it is this one:
https://civitai.com/models/2030628/qwen-edit-plus-2509-openpose-8-steps

I don't actually see which node is determining size. BUT, there are three image inputs.
I would put an image that is the same size in each slot, and see if the output is the same size.
If so, I would change the size of the image in each slot one by one, to see whether the final image takes its size from a single slot, or does math and combines the sizes.

I don't have the necessary qwen models to test this myself.
Anonymous No.20009912 [Report] >>20009938 >>20009954 >>20009961
>>20009894
Thanks. It's not that one, but the similar 1st.AId.bag workflow that's included in OP's pack. I've tried some testing like you're suggesting, and will keep doing so to see if I figure it out.

Attaching an image of the workflow with an example of what I'm talking about.
Anonymous No.20009938 [Report]
>>20009912
Yeah, I'm not that wise, you might have to wait for 1stAid.
I see the latent set to 768x512, and see it has roughly doubled and quadrupled in the final.

BUT

Above the final output, the latent size is set to 1, which is a combo of those pink and blue lines.

Trace those back and see where they lead.

And switching it to 2 should make it listen to your preset (768x512)
1st.AId.bag !!GFB1W2jC9WS No.20009954 [Report]
>>20009912
well, the latent size should be pretty close to your image 1 dimensions and width:height ratio.....I think that mc-shirt fucks it up, as it's much bigger and in 1:1 ratio

what might help is to resize/crop all the images in use to a uniform size and ratio.....for example, make an empty 1024x1024 image in Photoshop, GIMP, Paint, whatever, paste your image in and resize/crop it to fit that 1024x1024

also note that there is a dimension invert option to swap portrait to landscape and vice versa
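If you'd rather script that "paste into a 1024x1024 canvas" step than open Photoshop, Pillow's ImageOps.fit does the same resize-and-center-crop. A minimal sketch (requires Pillow):

```python
# Scriptable version of the uniform resize/crop step above:
# ImageOps.fit resizes and center-crops to an exact target size.
from PIL import Image, ImageOps

def uniform_crop(img: Image.Image, size=(1024, 1024)) -> Image.Image:
    """Resize/crop img to exactly `size`, keeping the center."""
    return ImageOps.fit(img, size)

# e.g. uniform_crop(Image.open("input.png")).save("input_1024.png")
```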
1st.AId.bag !!GFB1W2jC9WS No.20009961 [Report] >>20010033
>>20009912
....or just crop your output image ;) gen seems good though
Anonymous No.20010033 [Report] >>20010042 >>20010078
>>20009961
That's what I've been doing. Your gens just seem so much better and cleaner, though, so I've wondered what I'm missing.

Here I tried resizing/cropping everything to a uniform size/ratio and setting the dimensions in the node to the same thing (with invert). Still fucky. I'm definitely missing something, but I'll probably just have to accept that and keep cropping. Appreciate the help.
Anonymous No.20010042 [Report]
>>20010033
?
1st.AId.bag !!GFB1W2jC9WS No.20010078 [Report]
>>20010033
there might be a bug in the workflow...that was a kinda fast build when the model came out...It seems it doesn't take the dimensions from image 1 (as it should)...more like pulling random numbers out of nowhere....I'll try to fix it tomorrow

for now, try setting the latent option to "2" (latent from preset).....then pick an empty latent close to your input image 1
example: 832x1216 (=portrait) --> choose empty latent preset 1152x896 and click invert (=896x1152)
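That preset-plus-invert logic can be sketched in a few lines. The preset list here is the common SDXL-style set and is an assumption; your Empty Latent Image Presets node may list different sizes, so the exact preset chosen can differ.

```python
# Sketch of "pick the closest empty-latent preset, invert for portrait".
# PRESETS is an assumed SDXL-style landscape list, not the node's list.
PRESETS = [(1024, 1024), (1152, 896), (1216, 832), (1344, 768), (1536, 640)]

def closest_preset(width: int, height: int):
    """Pick the landscape preset closest in aspect ratio to the input,
    then 'click invert' (swap w/h) when the input is portrait."""
    portrait = height > width
    w, h = (height, width) if portrait else (width, height)
    target = w / h
    best = min(PRESETS, key=lambda p: abs(p[0] / p[1] - target))
    return (best[1], best[0]) if portrait else best

print(closest_preset(896, 1152))  # portrait input -> inverted preset
```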