
Thread 105600092

321 posts 170 images /g/
Anonymous No.105600092 [Report] >>105600113 >>105600437 >>105602803
/ldg/ - Local Diffusion General
Discussion of Free and Open Source Text-to-Image Models

Prev: >>105595839

https://rentry.org/ldg-lazy-getting-started-guide

>UI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassic
SD.Next: https://github.com/vladmandic/sdnext
ComfyUI: https://github.com/comfyanonymous/ComfyUI

>Models, LoRAs, & Upscalers
https://civitai.com
https://civitaiarchive.com
https://tensor.art
https://openmodeldb.info

>Cook
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/tdrussell/diffusion-pipe

>Chroma
Training: https://rentry.org/mvu52t46

>WanX (video)
https://rentry.org/wan21kjguide
https://github.com/Wan-Video/Wan2.1

>Misc
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Archive: https://rentry.org/sdg-link
Samplers: https://stable-diffusion-art.com/samplers/
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Bakery: https://rentry.org/ldgcollage | https://www.befunky.com/create/collage/
Local Model Meta: https://rentry.org/localmodelsmeta

>Neighbours
https://rentry.org/ldg-lazy-getting-started-guide#rentry-from-other-boards
>>>/aco/csdg
>>>/b/degen
>>>/b/celeb+ai
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg
>>>/vp/napt

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
Anonymous No.105600108 [Report] >>105603802
Blessed
Anonymous No.105600113 [Report] >>105600126
>>105600092 (OP)
Troll was faster, damn.
Anonymous No.105600126 [Report] >>105600160 >>105600186 >>105600197 >>105600218 >>105600232 >>105600565 >>105600848 >>105601944
>>105600113
I'm not this OP, but I was the previous OP. I added /napt/ because it didn't seem any worse or better than the other neighbor threads and I'd never seen anyone give a reason why it should be excluded.
Anonymous No.105600160 [Report] >>105600871
>>105600126
I don't care about that, but there's a troll making these threads before previous hits limit.
Anonymous No.105600186 [Report] >>105600234 >>105600785
>>105600126
i'll tell you why you shouldn't have
because it was requested by a repulsive avatartranny
at this point it might be best if we had another thread split
Anonymous No.105600197 [Report] >>105600515
>>105600126
Valid point but you fed the nagger. Let's see how he behaves. (I think he's just young)
>>105600150
hm, I don't even know what DrawThings is lol. But the values look ok I guess? I can't assign all of them tho (I work with comfy). configuration.steps = 30, is that the steps? 30 is a bit high for a resample. and yeah that is the right controlnet. lustifyV6 (with the exact same settings as the latest epicrealismXL) should produce at least an error-free image. doesn't help you much does it?
Anonymous No.105600217 [Report]
hopefully he shuts up at least and if I see one more rocketgirl she's gonna get BLACKED
Anonymous No.105600218 [Report] >>105600234
>>105600126
>I'd never seen anyone give a reason why it should be excluded.
You that new? How about the "R" avatarfaggot who keeps shitting up the threads trying to force it into the OP? That's reason enough not to add it. Won't be adding it on my bakes, that's for sure, fuck that guy
20Loras No.105600219 [Report] >>105600249
I tried to work this better, but inpainting at 10k resolution is horrible. Browser freezes every second click, recovers in 8 seconds or worse.
So just a test of upscale quality after postprocess edits.

It's not the vram that's filling up, what's causing it?
4090, 64gb ram.
Anonymous No.105600232 [Report]
>>105600126
By the same logic we should add /sdg/ to the neighbor list, right? If a borderline schizo avatar tard isn't reason enough to exclude neighbors, then there's no problem with /sdg/ being in there
Anonymous No.105600234 [Report]
>>105600186
>>105600218
I think useful information should be disseminated regardless of who originally shared it. That said, I understand your point of view, and I don't bake often, nor do I care if you go ahead and bake without the link.
Anonymous No.105600249 [Report] >>105600279
>>105600219
that's just how webui shit is with really large images. try setting up krita
20Loras No.105600279 [Report] >>105600306
>>105600249
I think I attempted krita before, but iirc you didn't have most of the tools available in forge, right?
I rely on perturbed attention guidance, no brown+AYS
Anonymous No.105600306 [Report]
>>105600279
there's a toggle for pag built in, i don't think you can modify its values though. as for samplers/schedulers you can use anything that's in comfy by adding it as a template in a config file. i like to use it with swarmui since they both use the same backend and i can switch between the two whenever i need
Anonymous No.105600386 [Report] >>105600402 >>105600406 >>105600445 >>105600470
I'm trying to figure out Kohya to train loras, leaving most things at the default config and it's just failing immediately without telling me what I'm doing wrong at this point. Did I fuck up the installation or something?
Anonymous No.105600391 [Report] >>105600407 >>105600425
There doesn't seem to be a baking template rentry, so I made one at:
>https://rentry.org/ldgtemplate
I don't bake often myself, but I figure it might help to avoid issues like this (unwanted links being added and then passed on to the next bake) if there's a set template to copy. Plus, it's good to have one for new bakers, so they don't have to copy paste the last thread or keep a template updated locally.
I maintain a few of the other rentries as it is (wan, collage, model meta), so I'll keep it updated if this one sticks and there's some kind of consensus that something should be added. "Consensus" meaning:
>something new is added to the OP by any baker
>over the next couple of threads, there's no arguments/valid arguments that said thing should be excluded
Same thing happened when AniStudio was added and it was generally accepted that it should be left off until it's more feature complete. And you don't have to use it, but it's there if you want it.
Anonymous No.105600402 [Report]
>>105600386
You failed to canonicalize script path
Anonymous No.105600406 [Report]
>>105600386
you should ask claude
Anonymous No.105600407 [Report]
>>105600391
Seems fair to me
Anonymous No.105600425 [Report]
>>105600391
No problem with that.
Anonymous No.105600436 [Report]
Anonymous No.105600437 [Report]
>>105600092 (OP)
NAG waiting room
Anonymous No.105600442 [Report]
Anonymous No.105600445 [Report]
>>105600386
For a brief moment I thought I figured out that I was simply retarded when I realized there was a tab up top to switch to lora mode from dreambooth
But the same problem persists so my foolishness there doesn't seem to be the cause here
Anonymous No.105600452 [Report] >>105600463 >>105600467 >>105600477 >>105600478 >>105600480 >>105600694 >>105602644
Hi,
I was told to ask here. I'm from the great /aicg/ thread.
I have a RTX 3060.
My humble wish is that:
I want to convert an anime image into a video and animate it.
I would like to know how the prompts logic works and which programs to use.

In the meantime I will read these 2 rentries:
>WanX (video)
https://rentry.org/wan21kjguide
https://github.com/Wan-Video/Wan2.1
Anonymous No.105600463 [Report] >>105600478 >>105602644
>>105600452
>3060
Anonymous No.105600467 [Report] >>105600514 >>105602644
>>105600452
>I have a RTX 3060.
sorry for your loss anon...
Anonymous No.105600470 [Report]
>>105600386
some path problem, you maybe moved the folder manually after installation, didnt activate the venv if it was required or something like that, install fresh on a system drive and follow any guide
Anonymous No.105600477 [Report] >>105600514
>>105600452
you'll be running the gen overnight but its possible, use the workflows in that rentry
Anonymous No.105600478 [Report] >>105600514
>>105600452
This rentry has everything you need, including the workflow. If you don't know comfyui then your best bet is to look up some guides, maybe videos on youtube for complete newbies.
>>105600463
No he's not, Q4 would work just fine, even if it gets offloaded into RAM slightly
Anonymous No.105600480 [Report] >>105600489 >>105600514
>>105600452
wan2.1-i2v-14b-480p-Q4_0.gguf, possibly wan2.1-i2v-14b-480p-Q5_1.gguf if you offload some.
I've been meaning to make a big graph that shows exactly which quantization you can run at X amount of VRAM, but I'm too lazy, cause it would mean having to test them all at various offloading amounts.
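In the meantime, a rough back-of-the-envelope sketch (Python, untested numbers) of how the weight footprint scales with the quant, assuming llama.cpp-style effective bits per weight; activations, the text encoder and the VAE come on top, so treat it as a lower bound rather than the actual graph:
[code]
def gguf_weights_gb(n_params_billion: float, bits_per_weight: float, overhead: float = 1.05) -> float:
    """Approximate size of the quantized weight tensors in GB."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9 * overhead

# effective bits are approximations for the usual gguf quants
for name, bpw in [("Q4_0", 4.5), ("Q5_1", 6.0), ("Q8_0", 8.5)]:
    print(f"wan 14B {name}: ~{gguf_weights_gb(14, bpw):.1f} GB of weights")
[/code]
That puts Q4_0 at roughly 8 GB and Q5_1 on the order of 11 GB of weights alone, which lines up with "possibly Q5_1 if you offload some" on a 12 GB card.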
Anonymous No.105600489 [Report]
>>105600480
This is the kinda visual I need as a retard.
Anonymous No.105600507 [Report] >>105600529 >>105600542 >>105600545
When using wan fusionx or causvid lora, I have issues in combination with teacache. The second generation will finish super quickly and be all noisy. I always have to restart comfyui.

On /b/ someone said that apparently everyone has this issue and no one knows why?

Are there people here who use causvid or fusionx + teacache and don't have this issue? Can you share your workflow? What OS/GPU combination do you have?

Would appreciate some help. thx

I did research yesterday and I can pretty much find nothing about this "second generation is broken" in combination as said above.
Anonymous No.105600514 [Report] >>105600524 >>105600545
>>105600467
>>105600477
>>105600478
>>105600480
Luckily time is not an issue. I can chat with my waifu or go to work while generating 5 seconds of video.
Thanks anons, I will read the info.
Anonymous No.105600515 [Report] >>105600593
>>105600054
>>105600104
>>105600150
>>105600197
Figured it out. I used the wrong filename, so it was referencing the raw file rather than the one converted for use with mac. Can confirm now that lustify works with tiled upscaling. However, it doesn't do much better than epiCRealism when upscaling with no LoRAs, and the detail tweaker LoRA works less effectively with it. In concrete terms, it e.g. fails to converge on a legible license plate when a vehicle is visible. Perhaps it's a skill issue, but disappointing nonetheless.
Anonymous No.105600521 [Report]
Anonymous No.105600524 [Report] >>105600532 >>105600545
>>105600514
Then use the 720p version of the model, preferably q8 if you really dont mind waiting, but i would recommend testing 480p first and maybe 720p q6 to see
Anonymous No.105600528 [Report] >>105601222
Anonymous No.105600529 [Report] >>105600547
>>105600507
You got causvid to work with teacache at all? How? It's all noisy mess for me regardless of what settings I use.
Anonymous No.105600532 [Report] >>105600545
>>105600524
Bro, he has 12GB of VRAM and you're recommending he use the 720p i2v model at Q8?!
Anonymous No.105600541 [Report] >>105600680
Anonymous No.105600542 [Report] >>105600557 >>105600561 >>105600578
>>105600507
There's no point in using teacache with a low step model, and you're just gimping the quality further on top of using causvid.
Anonymous No.105600545 [Report] >>105600559 >>105600570
>>105600507
i had the same problem, dont think there's a fix yet
>>105600514
>>105600524
just keep in mind that your vram should never be within 0.5gb of filled up, play with "virtual ram" in the ldg's node and make sure it's always slightly below that no matter the quant you're using
>>105600532
he said he doesnt mind waiting, if he wants to prioritize quality, waiting 2-4h isn't a problem for everyone
Anonymous No.105600547 [Report]
>>105600529
8 - 12 steps, two samplers doing each half of the steps. lora strength set to 0.3.

However, the first gen looks great and then it starts to become noisy mess.

You could try fusionx-vace, it does the speed up without the graphic glitches (which is a different topic). But yeah it does work, but only once. 0.1 start to 1 end in t settings works for me
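For anyone trying to reproduce the split, a minimal sketch of the arithmetic, assuming two KSampler (Advanced)-style samplers chained on the same latent (the field names mirror that node's inputs; in the UI the add_noise/return_with_leftover_noise toggles are enable/disable strings rather than booleans):
[code]
total_steps = 10          # the post above reports 8-12 steps total
split = total_steps // 2  # first sampler handles the first half

sampler_a = dict(steps=total_steps, start_at_step=0,     end_at_step=split,
                 add_noise=True,  return_with_leftover_noise=True)
sampler_b = dict(steps=total_steps, start_at_step=split, end_at_step=total_steps,
                 add_noise=False, return_with_leftover_noise=False)
causvid_lora_strength = 0.3  # lora strength from the post above
[/code]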
Anonymous No.105600557 [Report] >>105600578
>>105600542
this anon is correct
teacache basically skips some steps and if you already have a low step count, it's gonna fuck shit up
Anonymous No.105600559 [Report]
>>105600545
>make sure its always slightly below
*by looking at task manager or equivalent while the model is going through the inference steps
Anonymous No.105600561 [Report] >>105600578 >>105600585 >>105600616
>>105600542
But without teacache one can't use Skip Layer Guidance WanVideo, which boosts video quality incredibly. I really really want to use this, it's that good.
Anonymous No.105600565 [Report] >>105600848
>>105600126
i wish /v/ and /a/ had diffusion and api generals.
Anonymous No.105600567 [Report]
Anonymous No.105600570 [Report] >>105600578
>>105600545
thx, at least someone else is confirming. are you still using Skip Layer Guidance WanVideo, or did you disable both teacache and Skip Layer Guidance WanVideo? Skip Layer Guidance WanVideo is fantastic and I hate the need to disable it.
Anonymous No.105600578 [Report] >>105600607
>>105600557
no, the problem is there really is a bug that fucks up subsequent generations
>>105600542
>>105600561
there is nothing wrong with teacache, if anything causvid shits on the quality more, using lower teacache like 0.19-0.05 with SLG will make it higher quality, but you have to restart comfy each generation because of that bug
>>105600570
you need teacache for SLG, just use ldg's workflow in op https://rentry.org/wan21kjguide
Anonymous No.105600585 [Report] >>105600594 >>105600597 >>105600602
>>105600561
>Skip Layer Guidance WanVideo, which does incredibly boost video quality
Is this even true?
Anonymous No.105600593 [Report] >>105600811
>>105600515
I see. I threw one epicrealismXL gen into an upscaling workflow and did 2 takes, one with the epic model it was genned with and a 2nd run with lustify, same everything (except the baked in vae). no loras involved https://imgsli.com/Mzg5MDcz
I don't switch models when upscaling realism stuff, too many variables. busy enough dialing in the stuff
Anonymous No.105600594 [Report]
>>105600585
>The perfect woman doesn't exi-
Anonymous No.105600597 [Report]
>>105600585
whatever you generate with SLG will be higher quality than without, even if its still ultimately shit because of other parameters you're using
Anonymous No.105600602 [Report] >>105601042
>>105600585
If you're running it with the optimizations, yes. Try running a standard Wan gen with the optimizations on and SLG off and you'll see the difference right away. Hands/limbs artifact like crazy during motion and the visual quality is much worse in general.
Anonymous No.105600604 [Report]
Anonymous No.105600607 [Report] >>105600617 >>105600624
>>105600578
>you need teacache for SLG, just use ldg's worfklow in op https://rentry.org/wan21kjguide

Yeah I have this workflow. So restarting comfyui after each generation is the only way. lol
Anonymous No.105600616 [Report] >>105601042
>>105600561
>But without teacache one can't use Skip Layer Guidance WanVideo
you can, if you go for a start_percent at 1 (which means it won't use teacache)
Anonymous No.105600617 [Report] >>105600627
>>105600607
if you encounter this bug, yes
which only happened to me with self-forcing version of wan, not with that workflow, maybe it happens because of something with causvid too, idk, i only use regular wan for full quality generations which doesnt have any problems
Anonymous No.105600623 [Report]
has anyone tried training a lora in comfy yet? cant seem to get any good settings or work out how to set a prompt per image? the example demonstrating it in the PR also looks bad?
is it just a proof of concept for now, or is it supposed to be usable, or just trash?
Anonymous No.105600624 [Report] >>105600639
>>105600607
Unload All Models node attached at the end of the workflow. That means it'll have to reload the model every time you gen, but it's better than manually restarting each time
Anonymous No.105600627 [Report]
>>105600617
yeah this wf doesn't use fusionx nor the causvid lora. Those are the cause of the issue in combination with teacache. I wouldn't want to use regular wan without fusionx or causvid any more, it takes only 1/3 of the time.
Anonymous No.105600628 [Report]
Anonymous No.105600639 [Report] >>105600647
>>105600624
this one ? https://github.com/SeanScripts/ComfyUI-Unload-Model

looks like it bugs out and doesn't unload gguf. ... I use gguf. fml
Anonymous No.105600647 [Report] >>105600695
>>105600639
https://github.com/SeanScripts/ComfyUI-Unload-Model/issues/3#issuecomment-2818370070
It works on gguf, you just need to do that
Anonymous No.105600675 [Report]
Anonymous No.105600680 [Report]
>>105600541
comfy
Anonymous No.105600694 [Report]
>>105600452
check out framepack or wan2gp by deepbeepmeep.
Anonymous No.105600695 [Report] >>105600708
>>105600647
I put it on the last step, right before video combine. Still causes the noise issue on the second generation. Simply unloading the model doesn't seem to be the same as restarting comfyui.
Anonymous No.105600708 [Report] >>105600730
>>105600695
You using torch compile?
Anonymous No.105600725 [Report] >>105600862
Anonymous No.105600730 [Report] >>105600743 >>105600748
>>105600708
it happens with and without torch compile. It doesn't happen if I use regular wan or gguf wan. Only when using causvid or fusionx + teacache. Restart is the only way to solve this. I tried 3 different plugins to clear model, cache, vram. Always the same. Second gen will be noisy.
Anonymous No.105600743 [Report]
>>105600730
restart of comfyui, I mean.
Anonymous No.105600748 [Report] >>105600758
>>105600730
Try turning teacache coeffs off and divide the threshold by 10, ie 0.150 should be 0.015
Anonymous No.105600758 [Report] >>105600770
>>105600748
it doesn't happen with disabled teacache. I don't want to turn it off because of Skip Layer Guidance WanVideo. thanks for trying to help though.
Anonymous No.105600770 [Report] >>105600816 >>105601392
>>105600758
I didn't say turn it off. I said to turn coeffs off. TeaCache can run without coeffs just fine, and the coeffs were extracted from and are for normal Wan, not causvid or fusionx.
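If it helps anyone tuning this later, a tiny Python sketch of nothing more than the rule of thumb above:
[code]
def thresh_without_coeffs(thresh_with_coeffs: float) -> float:
    """rel_l1_thresh tuned for coefficient mode -> rough equivalent with coeffs off."""
    return thresh_with_coeffs / 10.0

print(thresh_without_coeffs(0.15))  # 0.015
print(thresh_without_coeffs(0.14))  # 0.014
[/code]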
Anonymous No.105600782 [Report] >>105600799
Anonymous No.105600785 [Report] >>105600800 >>105600936
>>105600186
>>i'll tell you why you shouldn't have
>because it was requested by a repulsive avatartranny
Anonymous No.105600799 [Report]
>>105600782
this is very nice
Anonymous No.105600800 [Report] >>105601111
>>105600785
Doesn't change the fact it's 100% true. Maybe save the attempted gotcha moments for twatter or something, doesn't really work well here
Anonymous No.105600811 [Report] >>105600837 >>105600917 >>105601071 >>105601132 >>105601225 >>105602276
NSFW upscale comparison https://files.catbox.moe/o6util.jpg. Left, original image made with Chroma v26. Top, epicRealismXL. Bottom, Lustify OLT. Columns: no LoRAs, Detail Tweaker @ 1.5, Detail Tweaker @ 1.5 & epiCPhoto @ 1.0. Of them all, it seems epiCRealism with only Detail Tweaker comes out the best and most lucid.

>>105600593
Interesting. I think epiCRealism is the better of the two, but they're definitely close.
Anonymous No.105600816 [Report]
>>105600770
Okay. I hope it's not too early to say that but it looks like disabling coefficients did the trick. I'll test some more with different rel_l1_thresh values, for now I set it to 0.014 (so 0.14/10), as you said.

I got to do some research on what the coefficients do and if skip layer guidance requires them or if it works just fine without and with low thresh.

If this is really the solution, you're my hero of the day.
Anonymous No.105600817 [Report] >>105602155
>>105598984
>there are no roads, when you connect the comfy nodes you're making that first footpath through the jungle
I enjoy this sort of romanticism but I always thought the actual researchers, paper publishers, were the ones making the first footpath while "we", connectors of nodes, were at least a step or two after. Perhaps those who create nodes based on those papers cut the brush away and we merely scrape away the last bits of grass.
Anonymous No.105600837 [Report]
>>105600811
That's a big woman
Anonymous No.105600848 [Report]
>>105600126
>it didn't seem any worse or better than the other neighbor threads
the intention i believe is to highlight threads that are more localgen centered, the poke thread doesnt seem to fit that bill is what i remember the discussion ending at
>>105600565
/a/ will never, they are probably more anti than /ic/
>api
nah
Anonymous No.105600851 [Report]
>city96/Cosmos-Predict2-14B-Text2Image-gguf
there's my boy
I bet it sucks anyway
Anonymous No.105600862 [Report]
>>105600725
beautiful grid
Anonymous No.105600871 [Report]
>>105600160
>before previous hits limit.
Post bump limit is 312 anonie
Anonymous No.105600893 [Report]
Anonymous No.105600917 [Report]
>>105600811
can you use some form of noise manipulation/noise injection with that UI? I'd prefer that over a "detail" lora. bit of a mystery meat, those things. I tried that epicphoto lora a few times, not a fan. shifts output towards the epic sameface. this one here can be nice https://civitai.com/models/1457891/epicrealness
every model will react differently to the tokens so it's ultimately down to your taste. should try cyberrealistic (v5.7 is the latest one), really nice quality.
Anonymous No.105600936 [Report] >>105601111
>>105600785
guy was begging every other thread. it's not exactly secret lore
Anonymous No.105600987 [Report] >>105601035
Made this for another thread that got archived early so I'm gonna post it here instead.
Anonymous No.105601032 [Report]
Anonymous No.105601035 [Report]
>>105600987
workflow? prompt/lora?
Anonymous No.105601042 [Report]
>>105600602
I only use teacache because I'm trying to avoid snake oil. I don't mind waiting as long as I can get a gen I don't immediately put in the recycling bin.
>>105600616
Interesting. Of course at some point SLG stopped working with comfy core unless you used teacache. Gotta try this out later.

God, I wish there were more comprehensive guides for all this shit including Enhance A Video, whatever the fuck that does, instead of just three cherry picked examples on a model card. What start and end percents for SLG do you guys use?
Anonymous No.105601045 [Report]
Anonymous No.105601071 [Report]
>>105600811
>we're gonna need a bigger car
Anonymous No.105601103 [Report] >>105601184
Anonymous No.105601111 [Report]
>>105600936
>>105600800
why should I care? if we linked to that and /sdg/ in the OP how could it possibly make the trolling and drama ITT any worse than it already is? if anything it might remind them to go back to their hugboxes
Anonymous No.105601129 [Report] >>105601143 >>105601177
So how do I slam the openpose/canny nodes into it?
Anonymous No.105601132 [Report] >>105601204
>>105600811
give the original image and prompt/workflow, I want to compare to my controlnet upscale approach. your approach is slopping the face too much
Anonymous No.105601143 [Report]
>>105601129
"Apply ControlNet" simply takes positive and negative conditioning and returns new positive and negative conditioning. You really don't need to do anything special.
Anonymous No.105601168 [Report]
Anonymous No.105601177 [Report]
>>105601129
Anonymous No.105601184 [Report] >>105601266
>>105601103
Can you do Hex Maniac?
Anonymous No.105601201 [Report]
Anonymous No.105601204 [Report] >>105601213 >>105601225
>>105601132
Agree on the slopface, but I think that's just the high i2i strength and vague prompt. The code is >>105600150, but to summarize:
Model: https://civitai.com/models/277058?modelVersionId=1156226
Prompt: "IMG_4972.JPG raw photo"
Negs: "photoshop, illustration, 3d, 3d render, 2d, painting, cartoon, sketch, child, preteen, underaged"
Steps: 30
Sampler: DPM++ 2M Trailing
CFG: 4
I2I Strength: 0.3
Controlnet: https://huggingface.co/xinsir/controlnet-tile-sdxl-1.0
Weight: 0.5
Lora: https://huggingface.co/AiWise/Detail-Tweaker-XL_v1
Weight: 1.5
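For anyone wanting to replicate this outside a UI, a hedged sketch of roughly the same settings in diffusers (not the poster's actual setup, which was a UI; the checkpoint/LoRA filenames are placeholders and the Lustify single-file load is swapped for the SDXL base repo for brevity):
[code]
import torch
from diffusers import (ControlNetModel, DPMSolverMultistepScheduler,
                       StableDiffusionXLControlNetImg2ImgPipeline)
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "xinsir/controlnet-tile-sdxl-1.0", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",   # swap for the civitai checkpoint you actually use
    controlnet=controlnet, torch_dtype=torch.float16, variant="fp16").to("cuda")

# DPM++ 2M with trailing timestep spacing, roughly the "DPM++ 2M Trailing" sampler
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="dpmsolver++", timestep_spacing="trailing")

pipe.load_lora_weights("detail-tweaker-xl.safetensors")  # placeholder filename
pipe.fuse_lora(lora_scale=1.5)

img = load_image("lowres_gen.png")                       # the image being resampled
out = pipe(prompt="IMG_4972.JPG raw photo",
           negative_prompt="photoshop, illustration, 3d, 3d render, 2d, painting, cartoon, sketch",
           image=img, control_image=img,                 # tile controlnet conditions on the input itself
           strength=0.3, guidance_scale=4.0, num_inference_steps=30,
           controlnet_conditioning_scale=0.5).images[0]
out.save("upscaled.png")
[/code]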
Anonymous No.105601213 [Report] >>105601275
>>105601204
huh? your prompt for the original is just a filename with no description?
Anonymous No.105601222 [Report] >>105602046
>>105600528
nice
Anonymous No.105601225 [Report] >>105601233 >>105601294
>>105600811
>>105601204
I'm out of touch. Is this because Chroma itself can't be used as an upscaler? I've tried i2i upscaling with Chroma and it always came out with weird artifacts, but I didn't know if it was just my skill issue or not.
Anonymous No.105601233 [Report] >>105601377 >>105601479
>>105601225
chroma can be used to upscale but it won't be very good until we get a tile controlnet for it.
Anonymous No.105601258 [Report]
Anonymous No.105601266 [Report] >>105601273
>>105601184
Anonymous No.105601273 [Report] >>105601310
>>105601266
Thanks!
Anonymous No.105601275 [Report]
AAAAAA Janny-sama please forgive me! It was an accident!

>>105601213
No, this is just for the upscaling. Original is same settings minus LoRAs and Controlnet.
Model: Chroma v26
Prompt: "2007 iPhone photograph, amateur photography, British suburb, terraced houses, a (((giant-sized))) coed is sitting on a car, posing for the camera, nude."
Negs: "photoshop, collage, fake"

Upscale was with UltraSharp (whatever that is). And yes, I know the parentheses don't properly work with Chroma, but they still have an effect.
Anonymous No.105601294 [Report] >>105601377
>>105601225
you can use chroma for upscaling. you may encounter nasty scanlines if you go too large but tiled resampling works just fine, slow af tho. quality was okay. >>105559027
Anonymous No.105601310 [Report]
>>105601273
you welcome
Anonymous No.105601340 [Report]
Anonymous No.105601377 [Report]
>>105601233
>>105601294
Sounds like a me problem then. I wasn't using tiles, just upscaling the latent by ~1.3x and running it through a second KSampler like I've seen people do with other models, but just getting garbage or scanlines.
Anonymous No.105601392 [Report] >>105601453
>>105600770
>turn coeffs off. TeaCache can run without coeffs just fine, and the coeffs were extracted from and are for normal Wan, not causvid or fusionx.
OHHH MY FUCKING GODDDDDDDDDD

RENTRY WAN OP ADD A SECTION FOR FUSIONX ABOUT THIS PLS
ALSO A SELF FORCING PLS
Anonymous No.105601453 [Report] >>105601525
>>105601392
yeah. I'm doing more and more testing right now and disabling the coeffs finally solved the issue I've had for weeks. It's so nice to no longer need to restart comfyui all the time. Especially since having many plugins installed makes restarting take longer and longer.
Anonymous No.105601479 [Report]
>>105601233
If you want picture-perfect upscale then yeah, it's probably gonna be a while, however if you don't mind the picture changing (which I favor since I use it to fix minor issues anyways), Flux upscaler controlnet seems to work with Chroma, however I've yet to find good settings.
https://files.catbox.moe/o6c0io.jpg before
https://files.catbox.moe/acznqa.jpg after
Anonymous No.105601487 [Report]
Anonymous No.105601493 [Report]
there's a fusionx lora too now https://civitai.com/models/1678575/wan21fusionx-the-lora
Anonymous No.105601525 [Report]
>>105601453
subsequent prompts are faster now too, from ~150 down to ~120 now

why were coefficients even a thing if teacache works fine without them and you just multiply by 10x?? is it slightly better with the coefficients too??
Anonymous No.105601563 [Report]
my time is up, gotta sleep :)
Anonymous No.105601568 [Report] >>105601686 >>105602327 >>105602361 >>105602397
Easiest way to tag images for lora training?
Anonymous No.105601600 [Report]
Anonymous No.105601642 [Report]
Anonymous No.105601651 [Report]
Can you use the same Lora and model for generations and inpainting? Or is it better to use an Inpainting model for small changes? Dunno if they can recreate artstyles
Anonymous No.105601661 [Report]
Anonymous No.105601686 [Report]
>>105601568
I'd also like to know.
Anonymous No.105601696 [Report]
Is Generating Consistency Character viable yet?
Anonymous No.105601702 [Report] >>105601881
>download workflow
>it's unusable abomination that you can't navigate
every time
Anonymous No.105601765 [Report] >>105601836 >>105601861
Does anyone know a good Inpainting model for cartoon?
Anonymous No.105601819 [Report] >>105601836 >>105601881 >>105602189
Am I the only anon in this fucking general?
Anonymous No.105601827 [Report]
too many question
Anonymous No.105601836 [Report]
>>105601765
Yes
>>105601819
No
Anonymous No.105601844 [Report]
Anonymous No.105601855 [Report]
Anonymous No.105601861 [Report]
gm ai sisters
>>105601765
https://civitai.com/models/1376234
Anonymous No.105601881 [Report]
>>105601702
love the spacing, what a mess lol. I bet you find various nodes hidden behind other ones too.
>>105601819
I am here, and I am not you.
Anonymous No.105601887 [Report]
> gm
you are in the wrong thread.
if napt doesn't get removed in the next bake it might finally be over for ldg.
Anonymous No.105601892 [Report]
Anonymous No.105601928 [Report] >>105602018
>finally install comfy ui
>...
>it's actually quite uncomfortable
Anonymous No.105601932 [Report] >>105602018
Anonymous No.105601944 [Report]
>>105600126
/n*pt/ is not a local thread, they use NAI for their gooning
Anonymous No.105602018 [Report] >>105602057
>>105601932
you are just shuffling the images, right?
>>105601928
ahahaha
Anonymous No.105602046 [Report]
>>105601222
thanks
Anonymous No.105602057 [Report] >>105602065
>>105602018
there shouldnt be any duplicates if thats what you mean, the script moves them from outputs after it makes the grid
Anonymous No.105602065 [Report] >>105602463
>>105602057
Your background scaling is a bit iffy, which sampler are you using?
Anonymous No.105602075 [Report] >>105602103 >>105602144 >>105605129
is there any way in comfy to set up batch image gen so i just loop through x amount of input images for controlnets, get a prompt from the image itself, remove certain phrases and add certain other phrases to the final prompt automatically, and then make y amount of gens for each image before going to the next one? right now i'm doing it manually but it takes a lot of time and it would be nice to just leave it on while i go to work
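Until someone points to a cleaner node pack, one option is driving comfy from outside through its HTTP API; a minimal sketch, assuming a workflow exported with "Save (API format)", placeholder node ids, and a captioner you plug in yourself:
[code]
import copy, json, requests

REMOVE = ["certain phrase"]                 # phrases to strip from the auto-caption
APPEND = ", extra tags you always want"     # phrases to add to every final prompt

def caption_image(path: str) -> str:
    raise NotImplementedError("plug in your interrogator/tagger of choice here")

base = json.load(open("workflow_api.json"))          # exported via Save (API format)

for image_path in ["001.png", "002.png", "003.png"]: # x controlnet input images
    text = caption_image(image_path)
    for phrase in REMOVE:
        text = text.replace(phrase, "")
    text += APPEND
    for seed in range(4):                            # y gens per image
        wf = copy.deepcopy(base)
        wf["10"]["inputs"]["image"] = image_path     # LoadImage node id (placeholder)
        wf["6"]["inputs"]["text"] = text             # CLIPTextEncode node id (placeholder)
        wf["3"]["inputs"]["seed"] = seed             # KSampler node id (placeholder)
        requests.post("http://127.0.0.1:8188/prompt", json={"prompt": wf})
[/code]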
Anonymous No.105602103 [Report]
>>105602075
25 different custom nodes but all of them are unfinished or annoying
Anonymous No.105602125 [Report] >>105602141
Anonymous No.105602141 [Report] >>105602154
>>105602125
>thinly veiled white supremacist dogwhistle imagery
Anonymous No.105602144 [Report]
>>105602075
My only advice for you and think about it before you reply.
Buy a hat for your hat :)
Anonymous No.105602154 [Report]
>>105602141
Funny, it's based on a Japanese image from "The Wind Rises"
Anonymous No.105602155 [Report] >>105602173
>>105600817
it's still messy garbage. node authors just drop seeds wherever they go so it turns back into an untreaded path
Anonymous No.105602173 [Report] >>105602213 >>105602268
>>105602155
comfyui is just a tutorial level for whatever replaces it. 1/10th of the available nodes are actually useful
Anonymous No.105602189 [Report]
>>105601819
you're not anon, I know who you are.
Anonymous No.105602210 [Report]
Anonymous No.105602213 [Report]
>>105602173
hard agree. I give comfy 1 or two years tops when everyone drops the frontend completely. if something dethrones torch it immediately becomes deprecated
Anonymous No.105602268 [Report] >>105602311
>>105602173
Which makes comfy rushing to add support for every retarded feature under the sun laughable when he can't even put in the time for a good UX.
Anonymous No.105602276 [Report]
>>105600811
You can tell they are slopped. You'd be better off going back to the seed, and either take more or fewer steps to get variations, or just add blur to the neg or HQ to the prompt itself (though it would change the image slightly).
Anonymous No.105602298 [Report] >>105602308
what was that website that lets you upload several images and have slider comparison? my google-fu is lacking
Anonymous No.105602308 [Report]
>>105602298
imgsli
Anonymous No.105602311 [Report] >>105602319 >>105602337
>>105602268
I doubt comfy cares about anything frontend and it shows. the Google grifters are the ones who "move fast and break things". this just isn't fucking doable with media creation software. Things have to fit naturally. webdev for ai was a mistake
Anonymous No.105602319 [Report]
>>105602311
They are but a stepping stone into the future promised
Anonymous No.105602327 [Report] >>105602367
>>105601568
I'm currently using this on ComfyUI:

https://github.com/miaoshouai/ComfyUI-Miaoshouai-Tagger/

With this workflow: https://pastebin.com/GbHMDYDa
Anonymous No.105602337 [Report]
>>105602311
>the Google grifters are the ones who "move fast and break things"
it happened immediately when the org started. the software was actually stable before all that bullshit
Anonymous No.105602344 [Report]
please... please upgrade from pony... please
Anonymous No.105602361 [Report]
>>105601568
I'm currently using this with pic related workflow on ComfyUI.

https://github.com/miaoshouai/ComfyUI-Miaoshouai-Tagger/
Anonymous No.105602367 [Report]
>>105602327
just give me a fucking actual interface not this node shit
Anonymous No.105602388 [Report]
Anonymous No.105602396 [Report] >>105603047
did quick test https://imgsli.com/Mzg5MTU3/0/1
>https://huggingface.co/silveroxides/Chroma-LoRA-Experiments/blob/main/chromatic_pixelwave_rank_32-bf16.safetensors
Anonymous No.105602397 [Report]
>>105601568
i use qapyq with eva large
Anonymous No.105602431 [Report] >>105602450 >>105602456
Does anyone know a good inpainting model? I don't know how things go with inpainting.
Anonymous No.105602450 [Report] >>105602857
>>105602431
flux has an inpainting model but just sticking to ill/sdxl inpainting is gud enough to get through it quick. it's too bad nag doesn't really do much for flux
Anonymous No.105602454 [Report]
Anonymous No.105602456 [Report] >>105602921
>>105602431
can use any model for inpainting. what UI are you using?
Anonymous No.105602463 [Report]
>>105602065
a combination of dpmpp2m, lms, and res_multistep for various bits, but it's probably one of the background loras being weird
Anonymous No.105602505 [Report] >>105602524 >>105602643 >>105603066
A severe lack of interesting videos ITT
Anonymous No.105602518 [Report] >>105602534 >>105602820 >>105603436
Anonymous No.105602524 [Report] >>105602613
>>105602505
a lot of tech stagnation in the space so people got bored
Anonymous No.105602534 [Report] >>105602560 >>105602585
>>105602518
https://www.youtube.com/watch?v=xo3fM6kW7GU
Anonymous No.105602560 [Report] >>105602820 >>105603436
>>105602534
Anonymous No.105602585 [Report]
>>105602534
THANK YOU, zomg
Anonymous No.105602595 [Report] >>105602820 >>105603436
Anonymous No.105602613 [Report] >>105602631
>>105602524
Just because there isn't much being released doesn't prevent one from generating something of interest
But self-forcing for wan released a mere four days ago
Anonymous No.105602626 [Report] >>105602820 >>105603436
Anonymous No.105602631 [Report]
>>105602613
the past four days was technical support because all these opts together fuck up the memory management for most people. making vids is too much of a chore because cumfart doesn't give a shit
Anonymous No.105602643 [Report] >>105602730
>>105602505
I've been genning tons of videos lately. Sadly most of them involve people I know irl so I can't post them
Anonymous No.105602644 [Report]
>>105600452
>>105600463
>>105600467
i can gen a 5 second 640x480 video in 5 minutes and 30 seconds with causvid/accvid and fp8 wan on a 3060
Anonymous No.105602682 [Report] >>105602707 >>105602709 >>105602721
what did ani merge last night anyways?
Anonymous No.105602707 [Report] >>105602799
>>105602682
Worthless shit that doesn't improve the actual UI. Stop advertising kid
Anonymous No.105602709 [Report] >>105602721 >>105602726 >>105602753
>>105602682
https://github.com/FizzleDorf/AniStudio/pull/81
if he is this autistic about text input, we'll have something actually worth sticking with in a month or two.
Anonymous No.105602721 [Report] >>105602799
>>105602709
>>105602682
We didn't ask for your advertisement schizo.
Anonymous No.105602726 [Report]
>>105602709
no python interop but it looks like he is moving on to that next
Anonymous No.105602730 [Report]
>>105602643
>most of them involve people I know irl
the unsung hero of local vidgen
Anonymous No.105602751 [Report]
>sneedance can't into kissing
SaaSfags BTFO
Anonymous No.105602753 [Report] >>105602836
>>105602709
actually a good idea. I am surprised no other UI has qol for the text editing
Anonymous No.105602780 [Report]
Anonymous No.105602799 [Report] >>105603260
>>105602707
>>105602721
why are you so mad at ani? where is your ui?
Anonymous No.105602803 [Report] >>105602807 >>105602816 >>105603801
>>105600092 (OP)
Can I run any of this shit on a 3060TI or is there a decent AMD card I can get to replace it?
Looking at AMD specifically, because while nvidia is good for blender I wanna move to Linux.
Anonymous No.105602807 [Report] >>105602896
>>105602803
You can use nvidia on linux now
Anonymous No.105602814 [Report] >>105602825
>105602505
I've been genning tons of videos lately as well, but they've generally been too young to share here
the guys who came from /b/ asking about the fusionx stuff were directed here by me

now im down to 120 seconds per 480p video. which is insane and I really hope the honeymoon phase ends soon so I can go back to being productive and not just goonrotting
Anonymous No.105602816 [Report] >>105603635
>>105602803
yeah it werks. stick to sdxl since flux based models will just take forever. same with vidgen
Anonymous No.105602820 [Report]
>>105602626
>>105602595
>>105602560
>>105602518
fatchy <3
Anonymous No.105602825 [Report] >>105602837
>>105602814
what gpu are you using to gen?
Anonymous No.105602836 [Report] >>105602867 >>105603061
>>105602753
Except gimmicky Vim editor shit isn't what anybody is asking for.
I can already see where this is heading though. People who jerk off over their riced out Linux desktop with tiling window manager are going to love this because they no longer have to use that pesky mouse for genning anymore. Everybody else will be left scratching their head wondering when somebody will finally think about UX.
Anonymous No.105602837 [Report] >>105602878
>>105602825
16gb blackwell good saar
Anonymous No.105602857 [Report]
>>105602450
Do you have a link for the illustrious/sdxl inpainting model people use? I can't find it
Anonymous No.105602867 [Report] >>105603061
>>105602836
by the looks of it it's completely optional to have the vim controls, line numbering, syntax highlighting. it looks like he also added mouse controls too. If it's accessible like notepad++ ootb but has options for sweaty vimmers, it's a good feature
Anonymous No.105602871 [Report]
OOOOOH MASSA!
Anonymous No.105602878 [Report]
>>105602837
so a 5060ti or 5070ti/5080
doubt anons itt would be sane enough to pay for a 5070ti or a 5080 (too little vram)
so a 5060ti
impressive, very nice
Anonymous No.105602885 [Report]
by the way, the chroma pipeline was merged into diffusers 2 days ago, so svdquant is likely soon
Anonymous No.105602896 [Report] >>105602911 >>105603125
>>105602807
Can use or is worth using?
Also some of the better distros like fedora dont even ship the spyware drivers.
Anonymous No.105602904 [Report]
gazoontite
Anonymous No.105602911 [Report]
>>105602896
don't worry, comfyui is jam packed full of backdoors and telemetry it may as well be the same thing.
Anonymous No.105602917 [Report] >>105602923 >>105602928 >>105602929 >>105602940 >>105602971
Can anyone recommend me an Inpainting model please? It seems you can't just use the same model used for generations, you need an Inpainting model.
Anonymous No.105602921 [Report]
>>105602456
I'm using forge. It seems you cannot use the same model for generation and for Inpainting, so I'm looking for Inpainting models. I was using an illustrious model and Lora for generations but it seems I can't make it work for Inpainting?
Anonymous No.105602923 [Report]
>>105602917
if you are inpainting in comfy vanilla, it's all fucked and you have to set it up properly. just use krita
Anonymous No.105602928 [Report] >>105602988
>>105602917
dude you do not need an inpainting model. do your research.
Anonymous No.105602929 [Report] >>105603000
>>105602917
>It seems you can't just use the same model used for generations, you need an Inpainting model.
It really depends. I inpaint using Illustrious models all the time.
Anonymous No.105602936 [Report] >>105602951
Buy an ad trani
No one cares about your super slow wrapper
Anonymous No.105602940 [Report] >>105602988
>>105602917
> Can anyone recommend me an Inpainting model please? It seems you can't just use the same model used for generations, you need an Inpainting model.
You can't? I almost never change model for inpainting unless there is some weird issue. Inpainting is more about how you set up the mask, denoising, the prompt, and the sizing of the slice of the image you pass in than the choice of model.
Anonymous No.105602951 [Report]
>>105602936
oh you are just a jealous little troll. poor you
Anonymous No.105602956 [Report] >>105602963 >>105602984
>day three (3)
>still no working flux NAG implementation
it is, and will remain, over
Anonymous No.105602959 [Report]
Anonymous No.105602960 [Report]
>julien
Anonymous No.105602963 [Report] >>105602970
>>105602956
can't you do it? it's just copying code into the comfy framework
Anonymous No.105602968 [Report]
which node to load Wan's lora? and where do you put it. I'm using Kijai's wrapper
Anonymous No.105602970 [Report]
>>105602963
isn't it a little sad comfyorg with full time paid employees didn't do it already?
Anonymous No.105602971 [Report] >>105602988
>>105602917
Krita. Inpaint in comfy is mighty uncomfortable.
Anonymous No.105602984 [Report]
>>105602956
its really easy to do with gemini since the code is already available, you just need to port it to comfy. I sent an email to Pam (the guy from the PAG, SEG, SWG and PLADIS node) about NAG, lets see if he can do it. I managed to get it working on forge for SDXL but its behaving a bit weirdly (still very good despite that)
Anonymous No.105602988 [Report] >>105602999 >>105603006 >>105603140 >>105603283
>>105602928
>dude you do not need an inpainting model. do your research.
I've been looking and everyone is telling me a different thing.

So what is it: do I use the same model and Lora I used for generation, or do I use an Inpainting-specific model for Inpainting? Can they even keep the artstyle of the image if I'm not Inpainting that much? I'm completely lost here
>>105602940
How do you do it? My Inpaintings look like crap, do you have an example?
>>105602971
I want to use forge, I'm ok with its Inpainting UI and don't want to install another workflow. I just want a model recommendation, or, if I can use the same model as the og generation, then I wanna know what I'm doing wrong.
Anonymous No.105602999 [Report] >>105603050
>>105602988
>I wanna know what I'm doing wrong.
post your metadata and we can help a lot better
Anonymous No.105603000 [Report] >>105603026
>>105602929
What model? There are many illustrious models. Do you also use the Lora you used for the OG generation if any?
Anonymous No.105603006 [Report] >>105603065
>>105602988
take a screenshot of your inpaint tab with all the settings visible and we go from there. I still have (re)forge installed, we can do this lol
Anonymous No.105603026 [Report]
>>105603000
>What model?
Typically wai or Miruku
>Do you also use the Lora you used for the OG generation if any?
Sometimes I even load a lora I didn't use for the base gen because I need it for a detail I am adding.
Anonymous No.105603047 [Report]
>>105602396
what does it do
Anonymous No.105603050 [Report] >>105603065
>>105602999
I'm basically using the default settings. Prompt is the same as the generation.
>Mask blur 4
Masked content:original
>Inpainted mask
>Inpaint area:whole picture
>Padding:32px
Anonymous No.105603061 [Report] >>105603117 >>105603208
>>105602867
>>105602836
this goes completely against basic software engineering and the unix philosophy. there's no need to rewrite Vim, if he wants a Vim-like editing experience he should just write a plugin for NVim and set up integrations to make it easy to write prompts in a separate NVim editor instance.

It's his project, but these are hobbyist software practices, not how you create a viable product. the sad reality is that it's already DOA from him openly associating pedo gens with his project. if he got funding, the funders would oust him the moment this became widespread knowledge.

the reason ComfyUI is winning is because it has a lower-level abstraction than a rigid Gradio UI design and has composable elements. the speed of development and adding support for new models comes from this design. A1111, Forge, and AniStudio quickly accumulate excessive tech debt by entangling their different components in a single official UI implementation. I wish we had a better alternative to Comfy, but this is the reality.
Anonymous No.105603065 [Report]
>>105603050
>>105603006
There aren't many options there. For example, I wanna change a bracelet but it doesn't change it, it generates the same and almost no change is made. If I change denoise or other parameters I get pure noise or nothing resembling the prompt.
Anonymous No.105603066 [Report] >>105603082 >>105603311 >>105603316
>>105602505
Is this interesting?
Anonymous No.105603082 [Report]
>>105603066
yes
Anonymous No.105603117 [Report] >>105603141 >>105603285
>>105603061
he literally just used an imgui extension and extended the controls nocoder. comfyui is also anti unix. what the fuck are you trying to prove here?
Anonymous No.105603125 [Report]
>>105602896
5000 series only works with the open drivers, the only issue I have on tumbleweed is sleep and hibernation locking the computer out forcing a hard reset
Anonymous No.105603140 [Report]
>>105602988
It's been a while since I used forge, but there's a way to make it crop the image outside of your mask + customizable padding pixels. Maybe the "crop and resize" option in your screenshot.

I think what gets a lot of people with inpainting is they are inpainting at the size of the full image, which may be upscaled. Most models other than flux-based struggle with images that go much higher than 1 megapixel total size and your image loses coherence. Or they always set denoising to 1.0 and run into issues where it draws the entire prompt in the mask leaving obvious seams.

If I want to, say, change a bracelet, I would do the following:
1. (optional) open the image in GIMP and roughly color it (assuming its color changes) the new color with opacity set low to keep the basic appearance of a bracelet.
2. Load the image, and apply a mask that covers the bracelet and extends to natural boundaries, basically places where colors clash. This helps prevent seams from appearing. But keep it small enough to not make your image large.
3. Arrange it so the generation covers only a subimage containing the mask + some padding, to give context to the model where necessary. This is where I don't remember how to do it in forge but it is definitely possible because I always did it that way and when I moved to Comfy I had to make some custom nodes to help me do the same there.
4. Adjust the prompt so that it only covers what will be in the subimage and not the whole thing.
5. Start with a lowish denoising and gradually raise it as needed if it is not working
6. (optional) take the resulting image into gimp and clean it up if necessary
Also, rather than inpainting, a lot of things can be fixed in the upscale.
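A rough sketch of the crop-to-mask-plus-padding arithmetic from step 3, in plain Pillow/numpy (forge's "only masked" mode and comfy crop nodes do this for you, this just shows what's happening):
[code]
import numpy as np
from PIL import Image

def crop_to_mask(image: Image.Image, mask: Image.Image, padding: int = 32):
    """Return the subimage covering the mask plus padding, and its box for pasting back."""
    m = np.array(mask.convert("L")) > 127            # assumes white = area to inpaint
    ys, xs = np.nonzero(m)
    box = (max(int(xs.min()) - padding, 0),
           max(int(ys.min()) - padding, 0),
           min(int(xs.max()) + padding, image.width),
           min(int(ys.max()) + padding, image.height))
    return image.crop(box), mask.crop(box), box

# inpaint the cropped region at a model-friendly size (~1MP), then paste it back:
# image.paste(result.resize((box[2] - box[0], box[3] - box[1])), box[:2])
[/code]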
Anonymous No.105603141 [Report] >>105603151
>>105603117
Stop talking in third person you boozed up fruit
Anonymous No.105603151 [Report]
>>105603141
attacks against a boogeyman doesn't accomplish anything
Anonymous No.105603208 [Report] >>105603229 >>105603285
>>105603061
>accumulate excessive tech debt by entangling their different components in a single official UI implementation.
he separated the gui entirely from the ecs. if anything it's the same as comfyui but less restrictive. I don't know what you are trying to say here
Anonymous No.105603229 [Report] >>105603241
>>105603208
>he separated the gui entirely from the ecs. if anything it's the same as comfyui but less restrictive. I don't know what you are trying to say here
You are and will always be a lolcow
There is not a single thing JulienStudio does well or even on the same level as proper UIs while being even more hardcoded than auto1111. Kek
Anonymous No.105603241 [Report] >>105603263
>>105603229
can you show me the areas where the gui is hard coded into the backend?
Anonymous No.105603255 [Report] >>105603266
Just fuck off Ani, We're tired of your bullshit.
Anonymous No.105603260 [Report]
>>105602799
think before you speak. why would he need a UI you dumb retard. god you suck so much. time will forget you fast.
Anonymous No.105603263 [Report] >>105603300
>>105603241
So how do i write a custom plugin or whatever you call it. Having written custom auto1111 extensions and custom comfy nodes i'm very curious. Show me a full example now (and i mean an actual one, not hello world tier)
Anonymous No.105603266 [Report] >>105603271 >>105603293
>>105603255
I'm curious. you bitched about all that and you have no proof. nta btw, I just have a fascination with your no code soapboxing
Anonymous No.105603271 [Report]
>>105603266
I'm not even the anon you're arguing with you schizo faggot
Anonymous No.105603283 [Report] >>105603510
>>105602988
ok here:
mask the image, prune the prompt and only keep things like tokens related to the style, loras etc and add what you want to inpaint. "hand" for example.
the rest you should be able to see from the attached image. important is that you unlock the seed, otherwise you just render the same thing over and over again. and the 'inpaint area' needs to be set to "only masked". I also used 'soft inpainting' here but disabled it for the screencap (with default settings). a solid combo for inpainting was always ddim/ddim uniform but you need to find something that works for you. when in doubt, use the sampler/scheduler combo you used for the image gen. https://imgsli.com/Mzg5MTc3
Anonymous No.105603285 [Report]
>>105603117
>>105603208
if he's just using a vi-like editor plugin without much additional work then I consider myself corrected.

I still stand by my statement that comfy is winning due to composability. Even if ani has a separated backend, this doesn't change the fact that the UI will be ossified into one workflow.
Anonymous No.105603293 [Report]
>>105603266
you have no self awareness
the reason why everyone can see you samefagging is because it is plainly obvious to anyone you have released nothing of value yet but you keep glazing yourself in third person. keep praising your projects that only run on your machine(tm). i'm fine with it. better the schizo everyone can see than the one you have to convince others of.
Anonymous No.105603300 [Report] >>105603329
>>105603263
https://github.com/FizzleDorf/AniStudio/tree/dev
he has some plugin stuff here and with the hot reloading he was talking about I think. it says he also has a gui editor packaged with the repo so technically it blows a1111 and comfy extension workflows out of the water
Anonymous No.105603311 [Report]
>>105603066
no because I'm not gay, therefore I feel no sexual arousal from looking at another man's semen
Anonymous No.105603316 [Report]
>>105603066
prompt and lora?
Anonymous No.105603317 [Report] >>105603326
>Debo and ani shilling
>Still no engagement for project
>So desperate need to beg in a thread they actually hate
Anonymous No.105603326 [Report] >>105603338
>>105603317
nah it's funny watching the schizo nocoder (You) make shitty accusations against the only C chad in the thread
Anonymous No.105603329 [Report] >>105603339 >>105603356 >>105603371
>>105603300
So you just linked to the repo and dodged the question.
No example, no extensions, only "i think he talked about".
So we are back with trani needing 2 years to learn imgui to wrap sd.cpp.
What a loser.
Guess i will not develop extensions for it then like i do for comfy and auto.
Anonymous No.105603335 [Report] >>105603360 >>105603441 >>105603557 >>105603595
Give it to me straight bros. Will my 5080, that should arrive this week, be able to generate (good) videos?
I have yet to look deep into the rentries, but looks like I can offload into RAM or use quantized versions (I think these remove the least important layers?).
I'm not asking to be spoonfed right now, just asking for about how much I can expect.
Anonymous No.105603338 [Report] >>105603359
>>105603326
C chad that can't into nix and releases code that doesn't compile on any machine except his own, rmao
Anonymous No.105603339 [Report] >>105603359
>>105603329
He has to spend his time begging in this thread because he burned every bridge in his sphere for being a griefing faggot so now he's struggling.
Anonymous No.105603356 [Report]
>>105603329
no, he did it with vulkan first apparently looking at the earlier commits. he probably got sick and tired writing that shit so much kek. opengl is much more accessible even if it's slower
Anonymous No.105603359 [Report]
>>105603338
>>105603339
>singular schizo cope broken for the 999th time
Go away trani!
Anonymous No.105603360 [Report] >>105603600
>>105603335
>Will my 5080, that should arrive this week, be able to generate (good) videos?
good depends more on the user's aesthetic sensibilities than the card
read the guide in OP tho
Anonymous No.105603371 [Report] >>105603389 >>105603410
>>105603329
there is a fucking example plugin in the fucking plugins folder. have you never used git before too?
Anonymous No.105603376 [Report]
We hate you ani!
Anonymous No.105603389 [Report] >>105603420
>>105603371
tran is a schizoid nocoder that needs handholding in everything she does. considering it's Father's Day and she's black, probably has nothing to do today since her dad is still out getting cigarettes from 15 years ago
Anonymous No.105603409 [Report]
i love ldg
Anonymous No.105603410 [Report] >>105603435 >>105603460
>>105603371
That doesnt even implement anything
It just shows architectural fails with all the "CRITICAL: You need to do this or nothing works!"
That is your example? Because it really shows bad architecture. So much code for not even a functional "plugin"
Anonymous No.105603420 [Report]
>>105603389
>considering it's Father's Day
o shit thanks for the heads up
Anonymous No.105603435 [Report] >>105603480
>>105603410
I can make comfy nodes without registering anything?
Anonymous No.105603436 [Report]
>>105602518
>>105602560
>>105602595
>>105602626
Plaptchouli
Anonymous No.105603441 [Report] >>105603600
>>105603335
i got the impression most "sample workflows" are made for 24gb cards.
So it's up to you to find the correct optimizations for 16gb and 5xxx series specifically.
Other than that, i dont see why it can't run properly. People still generate videos with 8gb cards
Anonymous No.105603460 [Report] >>105603470
>>105603410
Just took a look and you're right
Imagine being proud about that kek
Anonymous No.105603470 [Report]
>>105603460
how did you get custom nodes to work without registering? I'm quite curious
Anonymous No.105603473 [Report]
I had no idea Ran could become multiple organic anons all with different arguments with the only common thread is being that they believe you are a giant fucking schizo loser faggot.
Anonymous No.105603480 [Report] >>105603502
>>105603435
Dude thats hundreds of lines just to be registered at all
I'm talking about extension writing and a concrete example. How would i implement something from a research paper into that mess?
Anonymous No.105603481 [Report]
guys I'm REALLY mad
mad about image model inference UIs, specifically
Anonymous No.105603498 [Report] >>105603531
>he still unironically believes it's a singular schizo anon
You really need help julien
Anonymous No.105603502 [Report] >>105603526
>>105603480
copypasta then throw imgui widgets in the view. I don't think the components or systems are necessary unless you want to use his method of memory management. it's C for christ's sake. are you that much of a codelet?
Anonymous No.105603504 [Report]
>be me
>3060 vramlet
>share that i get fast results with workflow i quickly slapped together
>anons start screeching fp8.. le bad
>here we go again
>for the 10th time this year, i take the bait and slap together a gguf workflow equivalent to kijai's
>picrel
I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF I HATE GGUF
ps: 3060 power limited to 100w, both gens are the second gen after torch compile, 5s video
>kijai workflow: https://litter.catbox.moe/a2nmqnwxa9gsht44.json
>gguf workflow: https://litter.catbox.moe/kntkwsko388mk31d.json
Anonymous No.105603510 [Report] >>105603580
>>105603283
I'm using those settings and it's changing the whole image instead of only the masked area.

I opened the image in Photoshop and painted the part I wanted to change black, left the rest completely white, and uploaded it into forge as a mask.
Anonymous No.105603526 [Report] >>105603536
>>105603502
You know what
If you're that hostile i will not develop a single thing for your ui
No idea why you're such an asshole for me asking about how i would contribute
Well then the other UIs get my time
Fuck you
Anonymous No.105603531 [Report]
>>105603498
Singular schizo theory is often right in my experience, at least on other boards. It's one of the unfortunate weaknesses of the 4chan format that one unemployed and mentally ill person can completely ruin a general or even a whole board if they're relentless enough.
Anonymous No.105603534 [Report] >>105603562
Anonymous No.105603536 [Report] >>105603588
>>105603526
but you are a tranny nocoder and it's not even my project. I just wanted clear and concise proof of what you were saying is true or not but instead you shidded your diaper
Anonymous No.105603540 [Report]
Anonymous No.105603548 [Report] >>105603562 >>105603590 >>105603797
Anonymous No.105603557 [Report] >>105603578
>>105603335
>5080

>Actually buying the scam

Return and get a 4090
Anonymous No.105603562 [Report]
>>105603534
>>105603548
these are so shit lol. please post ones without temporal garbage
Anonymous No.105603578 [Report]
>>105603557
geg
Anonymous No.105603580 [Report] >>105603604
>>105603510
nonono you paint over the image opened in the inpaint tab, it creates a mask. no need to go external for that
Anonymous No.105603588 [Report] >>105603616
>>105603536
>not even my project
what is your project then mr cisgender yescoder? link to your github? oh wait, you can't answer that.
Anonymous No.105603590 [Report] >>105603602
>>105603548
About on par with some of the terrible AI animations you see in anime these days
Anonymous No.105603595 [Report]
>>105603335
no, return and get a 24gb card, if you want to stick with a 16gb card get an intel a770, should save you a minimum of 700$
Anonymous No.105603600 [Report]
>>105603360
>>105603441
Alright I will read the guides tomorrow at work. But sounds like it should be doable. Nice
Anonymous No.105603602 [Report]
>>105603590
I've seen better hyvid gens unfortunately
Anonymous No.105603604 [Report] >>105603629
>>105603580
I know but I wanted to mask with Photoshop better so I can mask exactly what I want. Let me reset and see if it works.
Anonymous No.105603613 [Report]
I tried meme self forcing and it is garbage at this point in time
Anonymous No.105603616 [Report] >>105603658
>>105603588
the fact you don't know what unix is, you think there isn't separation of concerns and you still think it can't be compiled is how I know you are the resident unemployed schizo nigger self replying to your low effort slop.
Anonymous No.105603629 [Report]
>>105603604
there is a keyboard shortcut to display the image in full screen mode in forge when its loaded in the inpaint tab, need to hover over the image when you press the key I think? look it up.
Anonymous No.105603635 [Report] >>105603684
>>105602816
SVDQuant on 3060 ti shouldn't be too bad. From what I recall it's 5k CUDA cores, that is faster than 3060 so if you can fit in 8GB you are good to go, and 4bit flux should fit in 8GB.
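Same back-of-the-envelope math as the gguf estimate earlier in the thread, just for 4-bit Flux (rough, weights only):
[code]
flux_params = 12e9                 # the Flux.1 transformer is ~12B parameters
bits = 4                           # SVDQuant stores the weights in 4 bits
print(flux_params * bits / 8 / 1e9, "GB of weights")   # ~6 GB, so it squeezes into 8 GB
# text encoders / VAE still need to be offloaded or loaded sequentially on an 8 GB card
[/code]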
Anonymous No.105603643 [Report]
Fresh

>>105603632
>>105603632
>>105603632

Fresh
Anonymous No.105603658 [Report]
>>105603616
Omg hi ani! You won't believe it, but I've not posted in any reply chain that mentions Unix, and I'm gainfully employed in software engineering unlike you, thank you very much.
Anonymous No.105603684 [Report]
>>105603635
Also with Flux you can even pair that with a turbo LoRA as well. Should be under 1 min wait time.
Anonymous No.105603797 [Report]
>>105603548
which horror anime is this?
Anonymous No.105603801 [Report]
>>105602803
If you don't already have AMD I don't recommend it. It's getting better, but latest optimizations like SVDQuant or CausVid etc... don't work on AMD. I recommend stretching your budget to get used 3090 for $700-$1k. No other card under $1k can replace that, trust me. I used to have a 3060 ti and that was my only move.
Anonymous No.105603802 [Report]
>>105600108
Thank you for including my gen in the collage
based anon of blessed friendship
Anonymous No.105605129 [Report]
>>105602075
>loop through x amount of input images for controlnets
Load Image Batch from WAS Suite