Discussion of Free and Open Source Text-to-Image Models
PoopDickSchizo can't keep getting away with these shitty bakes edition
Prev: >>105606862
https://rentry.org/ldg-lazy-getting-started-guide
>UI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
reForge/Classic: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassic
SD.Next: https://github.com/vladmandic/sdnext
ComfyUI: https://github.com/comfyanonymous/ComfyUI
>Models, LoRAs, & Upscalers
https://civitai.com
https://civitaiarchive.com
https://tensor.art
https://openmodeldb.info
>Cook
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/tdrussell/diffusion-pipe
>Chroma
Training: https://rentry.org/mvu52t46
>WanX (video)
https://rentry.org/wan21kjguide
https://github.com/Wan-Video/Wan2.1
>Misc
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Archive: https://rentry.org/sdg-link
Samplers: https://stable-diffusion-art.com/samplers/
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Bakery: https://rentry.org/ldgcollage | https://rentry.org/ldgtemplate
Local Model Meta: https://rentry.org/localmodelsmeta
>Neighbors
https://rentry.org/ldg-lazy-getting-started-guide#rentry-from-other-boards
>>>/aco/csdg
>>>/b/degen
>>>/b/celeb+ai
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg
>Local Text
>>>/g/lmg
>Maintain Thread Quality
https://rentry.org/debo
>>105611096 (OP)so what is the general consensus for wan genning? comfyui or wan2gp?
>>105611124
>wan2gp
this if you don't want to constantly break the ui
>>105611096 (OP)Self-forcing 14b T2V
https://huggingface.co/lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill
>>105611155fucking based. anyone get this working?
>>105611155so no NAG? I don't get it
unironically moving here to get away from the schizoposting
SOMEBODY STOP THIS FUCKING MADMAN PLEASE!!!
>>105610691THIS... AI software is insanely unoptimized. Software optimizations are free and will give you much better progress than waiting for GPU manufacturers to stop raping us. 7900 XTX owner btw
> Software optimizations are free
go ahead. make those optimizations. you dumb cunt. see how free they really are.
>>105611342as opposed to making the GPU hardware optimizations myself? lmao
once someone publishes a good optimization for our software, we can all get a copy of the optimization and apply it for free. this is clearly what I meant.
>>105611324it's all python niggers. it won't happen
>>105611124ldg wan workflow if you actually want to get any quality
Is there a benchmark for gpus that's purely for AI performance? I want to fiddle with clocks and voltages.
>>105611392We've had some nice speedups already. low step loras like dmd2, attention optimizations, svdquant, tea/fbcache... really there are tons of optimizations that people barely use because they're not widely known or built in to default workflows. improvements in torch and gpu drivers too.
yeah python devs and researchers are terrible when it comes to this, but eventually we will see more widespread improvements.
also, current model architectures are really inefficient. researchers are improving on this, though they focus too much on heavy/bloated MOAR PARAMETERS models.
the holy grail is if we make a breakthrough in local training performance that allows us to make our own models on a consumer GPU. if we get that, then local will enter a new golden age.
>>105611450kind of old but not much has changed. the tech really stagnated
>>105611463No I mean like 3d mark, but for AI shit.
>>105611459there has been 0 improvements where you don't have to rape the quality of the model by quants. there is no way to get creative with optimizations being stuck using fucking torch all the time. it puts the responsibility on a very few people that already have to maintain the repo and even then memory management is a complete mess. fuck python niggers
how do I train a chroma lora? Can I just use easy scripts flux settings but change base model?
>she raises her fiery sword and slashes from the top right to the bottom left. cloth and hair flow softly in the wind
>wan 14b 480p q8, torchcompile on, teacache thresh: 0.190 start: 0.10, SLG on, adaptiveguider thresh:0.9995
>gen time 20 mins, first attempt
turn on adaptive guider and gen time increases. huh
>>105611195nag is snake oil
Can you train a Chroma lora without using diffusion-pipe? I don't think it's supported in kohya_ss yet
>>105611520ai-toolkit has it I believe
>>105611096 (OP)blessed thread of true frenship
How is python even real...
cosmos predict2 gguf
https://huggingface.co/city96/Cosmos-Predict2-14B-Text2Image-gguf
>>105611758fucking Christ anon
>>105611758thank you niggeranov, those millions of $ in funding is going to good use (affirming transexual's existance)
>>105611780but this is much less space than the image anon posted. it's still slop but not as much
>>105611812
122MB every time you play with settings and the script OOMs
>she raises her fiery sword and slashes from the top right to the bottom left. cloth and hair flow softly in the wind
>wan 14b 480p q8, torchcompile on, teacache thresh: 0.250 start: 0.10, SLG on, adaptiveguider thresh:0.9995
>gen time 17.5 mins
was wondering why these were taking forever, forgot I had sage and fast off
I will never post in a Miku thread
>>105611865incredibly based
https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank32.safetensors
>>105611865>>105611870Reminder that mikutroons that are permanently spamming and making these generals are jannies
This thread is just /sdg/. You could've just gone to /sdg/.
>>105611903that is the other thread throwing shit around calling each other avatarfags. this one is chill
>>105611903Yeah we noticed what your goal is "anon"
>>105611935can we not do this here? the schizo thread is for that kind of language.
anyone have any luck training wan loras on a 3090?
>>105611873but wait! there's moar!
>https://huggingface.co/lym00/Wan2.1_T2V_14B_LightX2V_Step_Cfg_Distill_VACE-GGUF
>https://huggingface.co/strangerzonehf/Wan2.1-t2v-14B-Person-LoRA
>>105611903take your bullshit somewhere else
>>105611974what's the point of the person lora? just better human cohesion?
the vace thing looks interesting? haven't really played much with vace.
>>105611096 (OP)
>/sdg/ is now /b/ tier ped0 slop
>/ldg/ is now tantrum central
>/gif/ AI threads devolved into dick spam
When did it all get so cursed?
>>105612106
>>/gif/ AI threads devolved into dick spam
But that's based
>>105612106this is /ldg/ and I see no tantrums
NoobAI workflow with adetailer type of face detection and fix?
>>105612173it's honestly better to just make it yourself. what is the issue with that?
mag cache is pretty damn good
>>105611155Need a GGUF quant before I can test it. Only 5090kings can test this out for us right now
Excited to see how much better it will be than FusionX if at all. Now that the coefficients thing fixed FusionX I'm fine with self forcing 14B being a dud
>>105612106sane ppl can only take so much shit before they leave for good.
>>105612173https://www.youtube.com/@drltdata/videos
>>105611356
>as opposed to making the GPU hardware optimizations myself? lmao
geohot did it and embarrassed Lisa Su, nocoder "lmao"
in either case it was not free.
>>105611505What the fuck are you doing why are you using adaptive guidance
Why are you deliberately not reading the instructions for the values to set for teacache
Why are you using q8 instead of GGUF quants
I hate people with brainwaves like you where they feel they know better about everything and subconsciously live a shittier life as a result
>>105611974Seconding the purpose of the person Lora. Sounds ripe for merging into the sequel to fusionX but not worth loading on its own. I have no issues with persons, not really at least and I'd like more information on what the Lora actually helps with, which the description on HF does not provide
>>105612439nta but
>why are you using adaptive guidance
adaptive guidance increases speed at the expense of slight quality loss
>Why are you deliberately not reading the instructions for the values to set for teacache
rel_l1_thresh set to 0.19 is perfectly fine for medium quality wan2.1 i2v 480P. it's listed as a value in the info tooltip.
>Why are you using q8 instead of GGUF quants
..Q8 is a gguf quant. You know, wan2.1 i2v 480p Q8.gguf
maybe im confused but is this anon trolling or something?
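for anyone wondering what rel_l1_thresh actually does: teacache watches how much the model input changes between steps and, while the accumulated relative change stays under the threshold, it reuses the previous step's residual instead of running the transformer again. rough sketch of the idea only, names are made up and the real node also rescales the distance with fitted coefficients:

class TeaCacheSketch:
    def __init__(self, rel_l1_thresh=0.19):
        self.rel_l1_thresh = rel_l1_thresh
        self.prev_inp = None          # model input from the previous step (torch tensor)
        self.accum = 0.0              # accumulated relative L1 change since last real pass
        self.cached_residual = None   # last residual actually computed

    def step(self, inp, compute_residual):
        # relative L1 distance between this step's input and the previous one
        if self.prev_inp is not None:
            self.accum += ((inp - self.prev_inp).abs().mean()
                           / self.prev_inp.abs().mean()).item()
        self.prev_inp = inp

        if self.cached_residual is not None and self.accum < self.rel_l1_thresh:
            # change is small: reuse the cached residual, skip the heavy forward pass
            residual = self.cached_residual
        else:
            # change is large: actually run the transformer and reset the accumulator
            residual = compute_residual(inp)
            self.cached_residual = residual
            self.accum = 0.0
        return inp + residual

raising the threshold just means more steps get skipped before it decides to do a real pass, which is why higher values are faster but mushier.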
>>105612031
>just better human cohesion?
No idea but think so, I still have yet to test it
>>105612422Yeah these threads are only good for the latest news
>>105612465Kek, yeah the baker of the Fusion model mentioned its the MPS rewards lora that changes the faces. It also has moviigen baked in which also changes your input image into plastic square jaw flux face https://civitai.com/models/1678575
>>105612494
>trolling?
Big words from someone using all these quality destroying nodes with shitty values when he could just go down to Q6
i wish a better shake ass lora existed. the current one is so hit or miss its near useless. sometimes you get an amazing seed but 80% of the time she's doing crazy shit. have to spend nearly an hour just to get 1 decent result
>>105612601Train one then.
>>105612571Is this also for the t2v? Would explain why my girls have a lot more flux face with FusionX than with base wan
>>105612601its the best, super wobbly and sloppy. wan really knows fat asses and can part those cheeks like the red sea. but yes, half the time it will freak out and do something stupid.
>>105612716
>wan really knows fat asses
It really does. I was surprised to see one of my Brazilian girls on the beach have sand on her asscheeks without me prompting for it which was a pleasantly lewd surprise
mag cache and that self-force lora for 14b are pretty good. can gen a 1008x560 (don't @ me) in ~90 seconds.
I just like seeing funny AI pictures. Don't care about goonslop.
>>105611469Your voltage and frequency don't mean that much because you're stuck with X amount of cuda cores. The denser the model is, the slower it is on your X amount of cores.
It's as simple as that.
>>105612757but is the output hot garbo or not what you prompted
it's gotta be one of the two
>>105612781no it works well so far. and it actually follows the prompt and loras. unlike the 1.3b sel-forcing.
honestly with the speed these random performance enhancers are coming out i'd expect them to be old news within 2 weeks.
>>105612805most of them will just be snake oil anyways
>>105612818as is tradition.
hope we get a new video model soon or flux kontext so we can get an entire new wave of bullshit
>>105612842except they keep bloating the fucking models with multimodal shit so I expect 48gb min to even run the new ones in fp16
>>105612928
>May I see it?
no
>>105611764
>cosmos predict2 gguf
anyone tested this?
>>105613444it's shitty cosmos so no
>>105612805here.
i take it back. it produces on average a pixely mess.
is it just me or are realistic extreme body proportions impossible? the lighting and texture always turns into 2.5d clay sloppa if you try to do anything not realistic proportionally.
>>105613148Then your claims are worthless, anon. Why may I not see it?
whats the best model for NSFW image inpainting? im currently using juggernautXL, but I'm open to trying new models
>>105613480I stick to the model I genned the stuff with which is either lustify (v5, not 6) or cyberrealistic 5.7. pussys need a lora unless you're into roastbeef.
>>105613490
1 out of 3 is an actual female, not bad.
>https://github.com/Zehong-Ma/ComfyUI-MagCache
just got chroma support
>>105613471that wasn't me.
after playing a bit more with it i take back what i said. it's definitely more pixely than default. and it also causes that degeneration of quality over generations. 2 gens later and it produces only a fully noisy image. waste of time and snake oil confirmed.
>>105613643I hope not, they should make lora for the parent model
>>105613444was gonna ask if you get the same errors on the edge(s) of your chroma gens but looking at your gens, yep lol
>>105613648ok turns out i might be retarded. am using the wrong/not the recommended samplers etc.
someone else test this while i go eat asphalt.
>>105609317Cute. There better be an innocent follow-up.
>>105613686Yeah I get them. Sometimes they are really bad, like the whole rgb scale goes through one edge
>>105613792what's sigmoid offset, does it improve results?
>>105613819it's some form of scheduler from silveroxides specifically for chroma. you can select it in the basic scheduler node too once you install it but it comes with its own node with additional 'things'. can find it in the manager
Man I'm trying to do porn but it's giving me such cinema cityscape
>>105613885plz share a catbox of the uncensored
>>105613904It's diapers, so no.
>>105613911Please share workflow anon
>>105613911most stuff in a catbox link is not against the rules, especially diapers.
Why do my videos go fucking crazy when I use torch+sage+tea cache? With the regular out of the box WAN I get pretty decent videos, they just sorta look low framerate. When I add all this shit with the same LORAs and prompts the subjects go fucking crazy and fight each other
>>105613916>>105613929>>105613904https://files.catbox.moe/4irk6u.json
https://files.catbox.moe/ukmf35.png
I definitely got a vacation a few times even for catbox porn on blue boards.
>>105613958Damn anon I asked for it but this is way better than I expected. Extremely based, thanks
>>105613958I'm the actual anon that asked for the catbox. Thank you for sharing.
>>105613638So can someone confirm if this is snake oil or not? Any quality loss? Is it a must have?
>>105614010You're right, but I also asked for the workflow and anon delivered. So I'm also grateful bro
>>105614012what am i looking at
>>105613638
>>105614011
37.10s -> 19.63s
It does really crank up the speed, but the quality loss is huge. Until someone can take the time to dial in the perfect settings I think this one's a miss.
https://imgsli.com/Mzg5NjQ4
>>105614268Damn that's pretty rough quality hit
why does high denoise with noise mask give such bad image to image results? the model doesn't take into account other parts of the image when denoising?
>>105611096 (OP)>still no naptew
>>105611865honestly, same sis
>>105614298Although, since chroma quality can vary so much, it might be worth using magcache to hunt for good seeds?
>>105613686that happened on flux too
and ive also gotten it on sdxl but not as common
flaccid penis lora for wan just doesn't look right. it doesn't have good dick feel. its like it was trained on those silly strap on videos
what denoise sequence do you use when you do iterative image 2 image inpaint bwos
>>105614617TWO mikutroons? grim
Name 1 reason why anyone would ever need more than picrel, a bed, and autosucc?
Protip: You literally can't.
How do I prompt something that is off-screen without the AI trying to bring it on screen?
>>105614657i wanted to do this with a campfire to have only the lighting. couldn't do it
>apply 6 layers of snake oil
>somehow they all work together quite well and the quality's not taking that deep of a nosedive
Huh. Wan's the first model that can handle that much shit. 2 steps of MPS/HPS reward loras + Causvid/Accvid at CFG 5.5, then 6 steps of Causvid + Accvid + Self-Forcing + NAG + RescaleCFG at CFG1. And also some (((((quality)))))) loras because they actually somewhat help with contrast https://files.catbox.moe/1i3r5x.mp4
btw for anyone who's used FusionX in i2v and wondered why the faces keep changing significantly, MPS/HPS loras are the main culprits behind it. Sadly they do boost prompt adherence somewhat so removing them completely is not ideal. Just gotta find the right settings
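for anyone trying to reproduce that split: it's basically two sampler passes over the same latent, where the first one stops early at high CFG and hands the leftover noise to the second. rough pseudocode only, sampler_pass() is a stand-in for an advanced-ksampler-style call and not a real ComfyUI API:

def fusionx_style_split(sampler_pass, reward_model, distill_model, latent, total_steps=8):
    # pass 1: reward loras + causvid/accvid active, high CFG, first 2 steps only,
    # hands the still-noisy latent to the second pass
    latent = sampler_pass(reward_model, latent, steps=total_steps,
                          start_at_step=0, end_at_step=2, cfg=5.5,
                          add_noise=True, return_with_leftover_noise=True)
    # pass 2: distill/self-forcing stack + NAG + RescaleCFG at CFG 1 for the remaining 6 steps
    latent = sampler_pass(distill_model, latent, steps=total_steps,
                          start_at_step=2, end_at_step=total_steps, cfg=1.0,
                          add_noise=False, return_with_leftover_noise=False)
    return latent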
>>105614678
>the quality's not taking that deep of a nosedive
because it was never there to begin with, with that flux plastic grainy slop of an image, let alone the initial huge shift in the video and grainy shit motion and fps
how did he stand long enough to spray that? you can tell the man is all fat and absolutely no muscle
Hey guys I want to make videos like vid related, what ALL do I need?
is there a node that takes a mask and gives a mask of the mask boundary
>>105614653to feel her warmth against my skin, to hear her breath contently as our child grows in her.
>>105614657crop the image afterwards
>>105614740The question is will that warmth be worth the nagging until the inevitable divorce when she gets bored after 5 years and says one word to end your life before taking half the shit and the kids?
>>105614775then don't marry the first person you meet who will inevitably cheat on your lil dick with tyrone
>>105614804>he doesn't know
>>105614746I want the reflection of [described light] on the screen but it keeps giving me the light itself on the pic in random places.
>>105614617THESE posts are unpruned every thread
rgal is always nuked
the absolute state
>>105611133is this post facetious?
>>105612106
4chan was never good
but i must admit, i have never seen such autism regarding the copypasta for the baked threads
cringe.
>>105613243would
>>105614997hey
just to let you know.
the containment thread is the other one. please go and post there.
thank you.
>>105615048Don't go! You are very important person here and people are jealous.
>>105615070so long lonesome
>>105613638
>>105614268
>Prompt executed in 12.15 seconds
Works pretty good if you fix up the ratios for detailed using the output from the calibration node. https://imgsli.com/Mzg5NzEw
>>105615222 (You)
>"chroma": np.array([1.0]*2+[1.09766, 1.11621, 1.07324, 1.07227, 1.0459, 1.04297, 1.03418, 1.03516, 1.04004, 1.04102, 1.01465, 1.01562, 1.0293, 1.0293, 1.02344, 1.02441, 1.02148, 1.02148, 0.99609, 0.99609, 1.0166, 1.01758, 1.00586, 1.00586, 0.99561, 0.99561, 1.00488, 1.00488, 1.00098, 1.00098, 1.00781, 1.00781, 1.00293, 1.00293, 1.00684, 1.00684, 0.99072, 0.99219, 1.00488, 1.00488, 0.98877, 0.98877, 0.98242, 0.98193, 0.98584, 0.98633, 0.96924, 0.9668, 0.91553, 0.9165]),
picrel is magcache_k to 5 from default 2
>Prompt executed in 9.94 seconds
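for context, those ratios are what drives the skipping: magcache compares the magnitude of each step's residual to the previous step's (that's the calibrated list above) and skips the model call while the accumulated deviation from 1.0 stays small, with magcache_k (as far as I can tell) capping how many steps in a row it will skip. simplified sketch of the idea, names and thresholds made up:

def plan_skips(mag_ratios, error_thresh=0.12, max_consecutive_skips=2):
    skip = [False] * len(mag_ratios)
    accum_err, consecutive = 0.0, 0
    for i, r in enumerate(mag_ratios):
        if i < 2:
            continue  # always compute the first couple of steps
        # error from pretending this step's residual equals the cached one
        accum_err += abs(1.0 - r)
        if accum_err <= error_thresh and consecutive < max_consecutive_skips:
            skip[i] = True              # reuse the cached residual on this step
            consecutive += 1
        else:
            accum_err, consecutive = 0.0, 0   # actually run the model, reset
    return skip

so re-running the calibration node for your own model/lora combo matters, because the ratio list baked into the node was measured on a different setup.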
type sdg
this yucky ldg pops up
Has stable diffusion gotten any faster/better yet over the last year or so? I stopped keeping track with SD 1.4 and SD XL/turbo lora
>>105613638how do i use this shit i get "The inference steps of chroma must be 26." even though i have it set to more, i'm low iq
>>105615469Xl was already lightning fast, nobody in their right mind is gonna waste time speeding it up when more modern models take an age
>>105615481Set it to 26 steps dummy
>>105615493why does it have to be 26 steps THOUGH
>>105614916what kind of light and what model?
also, could you show me an example pic?
i assume you've already tried putting the light source object in the neg prompt, so it will probably have to be done through different phrasing in the positive. directly mentioning the object is going to put it in the image because clip is stupid like that
>>105615493lmao i was reading it as "at least 26" i'm retarded
speedrun this thread anons
>>105615541ok, posting the worst gens from my folder
>>105615549comfyui problems, amirite?
>>105615541>>105615549sounds like a good idea with failed gens
>>105615512I wanted the muzzle flash reflected in the lens, but it kept putting it in the pic.
>neg the light source
I haven't thought about that. I'm retarded
>>105615238 (You)
>>105615513
>i'm retarded
Same. Here's new ratios without a custom lora enabled
> "chroma": np.array([1.0]*2+[1.10059, 1.09473, 1.08691, 1.08594, 1.05176, 1.05273, 1.0332, 1.03516, 1.04199, 1.04199, 1.01562, 1.0166, 1.0293, 1.0293, 1.02441, 1.02539, 1.02148, 1.02148, 0.99512, 0.99561, 1.0166, 1.0166, 1.00586, 1.00586, 0.99414, 0.99414, 1.00488, 1.00391, 0.99951, 0.99951, 1.00684, 1.00781, 0.99951, 1.0, 1.00488, 1.00488, 0.98926, 0.98926, 1.00195, 1.00195, 0.98389, 0.9834, 0.97656, 0.97656, 0.98047, 0.97998, 0.9585, 0.95898, 0.90137, 0.90186]),
>>105615702Only white men find this mantis phenotype attractive.
>>105615705Of course, some sdg anons took it upon themselves to hijack ldg, make a shit bake, then double down with fake collage bake.
>>105615672>>105615702How are you using NAG with Chroma? Is it this? https://github.com/Clybius/ComfyUI-ClybsChromaNodes
chroma upscale x2, 4 tiles, 10 steps, 0.4 denoise, no detailer or inpaint. I like the upscales so far with v37. tried one of silveroxides experimental hyper loras but not seeing any improvement at 10 steps.
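rough pseudocode of that kind of tiled upscale, in case anyone wants to script it outside comfy. img2img() here is a stand-in for whatever 0.4-denoise chroma pass you use, and a real workflow would overlap and blend the tiles instead of hard seams:

from PIL import Image

def tiled_upscale(img: Image.Image, img2img, denoise=0.4, steps=10):
    # 2x lanczos upscale first, then refine each quadrant with img2img
    big = img.resize((img.width * 2, img.height * 2), Image.LANCZOS)
    w, h = big.width // 2, big.height // 2
    out = big.copy()
    # 2x2 grid of tiles
    for ty in range(2):
        for tx in range(2):
            box = (tx * w, ty * h, (tx + 1) * w, (ty + 1) * h)
            tile = big.crop(box)
            out.paste(img2img(tile, denoise=denoise, steps=steps), box)
    return out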
>>105615222very nice
>>105615705your post is bugging me
>>105615773yes
>>105615702she cute
t. White devil
>>105615561could also try something like "firing gun, (reflection:0.7), closeup" with barrel in negative
pol white power nixchecker spillover is so sad
>>105615513i found that magcache can be run at an arbitrary amount of steps with chroma when magcache calibration is connected
>>105615786Nag gives faster gens?
>>105615914does not speed up gens, but they are more coherent/detailed even without explicitly trying to remove things with the negative
with magcache it helps reduce it a little bit without affecting the quality, make sure the calibration node is set cause otherwise it removes detail
>>105615871
>the mere existance of a white man ruins his day
broootal
>>105615961She looks like she enjoys long philosophy talks with White human men.
https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank32.safetensors
this shit works so well, takes me 1.30 mn on my 3090, really impressive
This isn't about AI samplers...
magcache throws non-singleton dimension 1 errors when i'm not genning at 896x1024 precisely. oh well
>>105616023is NAG any useful on CFGs greater than 1?
>>105616164yes, nag_scale sort of acts as the cfg scale but it's independent of the sampler's cfg. it's not a cfg replacement. from the paper; "NAG is a general enhancement to standard guidance strategies, such as CFG, offering advancements in multi-step models."
>>105616164it's supposed to replace CFG, so you have to put CFG 1 and deactivate it, that's the point, it's supposed to get the negative prompt effect of CFG while getting the speed of cfg 1
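for the curious, roughly what NAG does under the hood (going off the paper, so this is simplified and the numbers are just illustrative defaults): instead of mixing cond/uncond predictions like CFG, it extrapolates away from the negative prompt inside the cross-attention output, clips the norm so the extrapolation can't blow up, then blends back toward the positive branch:

import torch

def nag_attention_output(z_pos, z_neg, nag_scale=5.0, tau=2.5, alpha=0.25):
    # extrapolate away from the negative-prompt attention output
    z_ext = z_pos + nag_scale * (z_pos - z_neg)
    # clip the L1 norm ratio at tau so features stay in a sane range
    ratio = z_ext.norm(p=1, dim=-1, keepdim=True) / z_pos.norm(p=1, dim=-1, keepdim=True)
    z_ext = z_ext * torch.clamp(ratio, max=tau) / ratio
    # blend back toward the positive branch
    return alpha * z_ext + (1.0 - alpha) * z_pos

that's why it still gives you a negative prompt at cfg 1: the guidance happens in attention space per layer, not as a second full model pass.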
>>105616191
>chroma_nag
how did you make it work on chroma?
Is the ballsack grip legit?
>>105616217https://github.com/Clybius/ComfyUI-ClybsChromaNodes/blob/main/chroma_NAG.py
>>105616208honestly i'm not sure if this implementation is made for cfg 1 on the sampler specifically, previous gens i posted have cfg 5 on the sampler and nag_scale at 5.
if i set cfg to 1 it makes everything schizo like usual.
>>105616238https://github.com/Clybius/ComfyUI-ClybsChromaNodes/blob/b7fc257cac0948f03bb180b5acbb500a342739d9/chroma_NAG.py#L230
>>105616238
>he commented the nodes
did you uncomment them to make it work?
>>105616254weird, they are commented but load properly.
>>105616238
>if i set cfg to 1 it makes everything schizo like usual.
that's weird, because NAG is supposed to make it work at cfg 1
SO MUCH FUCKING SNAKEOIL
JESUS CHRIST
>>105616238
>https://github.com/Clybius/ComfyUI-ClybsChromaNodes/blob/main/chroma_NAG.py
it's not working for me
>RuntimeError: mat1 and mat2 must have the same dtype, but got Half and BFloat16
>>105616348oiling my snake rn
>>105616365try removing rescalecfg. and set clip type to stable_diffusion, install the fluxmod nodes and put "padding removal" after the prompt conditioning. this is because comfy's implementation of chroma (with the min_padding 1 node) does not work properly and he still hasn't corrected it.
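that error just means two tensors reached a matmul in different dtypes (fp16 weights meeting bf16 activations somewhere in the chroma/NAG path). minimal repro of the message and the generic fix, casting to a common dtype; which dtype the nodes actually want is a separate question:

import torch

a = torch.randn(2, 4, dtype=torch.float16)
b = torch.randn(4, 3, dtype=torch.bfloat16)
try:
    a @ b
except RuntimeError as e:
    print(e)                           # complains about mismatched dtypes (Half vs BFloat16)
print(a.to(torch.bfloat16) @ b)        # works once both sides share a dtype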
>>105616348that one is not snake oil at all, it keeps wan's quality while only waiting for 1 mn
>>105616023
>>105615541failed gens? ok
This is probably my finest work yet. No, I am not going to share my workflow.
>>105616389still got that error
>>105616444>(((globe earth)))
>>105616429
>blank expression
>robotic movement
>'bro it keeps the quality'
*yawn*
>>105616446honestly fucked up hands never bothered me, it's almost nostalgic now
>>105616479entirely antisemitic
>>105616473What do you mean? It's not fucked. Retards like you should not be allowed to even post.
>>105616481Prompt and lora? I can't get something similar with any of the ones that I tried from civit
>>105616481i like how she already had some in her mouth
>>105616389>>105616450what torch version do you have? maybe that's the issue?
>>105616348>>105616472Literally everything after slg has been some bullshit cope that'll speed up wan/whatever, and the cost is either a huge visual quality loss, the loss of prompt adherence or weird, uncanny movements and dead looking faces
they always say "dude it's so fast". they never post a good video gen
>>105616502this. magcache is SHIT. It doesn't degrade the quality much but it absolutely fucks with the motion and makes it non-existent.
>>105616518
>fluxd
>retarded statement
that's debo right? another one to my filter kek
I want more quality not more speed
>>105616502
>bullshit cope
Yeah, because VRAMlets and retards with ancient GPU's keep bitching about how slow Wan is, so other retards try to clout farm them by offering cheap "fixes"
>>105616533
>I want more quality not more speed
yeah if veo3 gets a little cheaper with larger amount of credits monthly i'd jump on it for a few months
>>105616348slurp slurp sllllluuuuuuuuuurrrrrp
>>105616526enjoy your safespace buddy
>>105616227tried a girl soldier prompt
>most interesting videos to be posted ITT just happen to be posted right now and we will probably never see that anon again
hm...
>>105616492It's the big splash, titty jiggle and cumshot loras. I posted a catbox in the other thread.
>>105616518
>they always say "dude it's so fast". they never post a good video gen
how about that one? it's a 720p I2V render and that took me only 4 mn (4 steps)
>>105616481>>105616742Peter Parker strikes again
>>105616726>>105616481no need for drama man, one dude posted a vid showcasing distill lora which is great.
>>105616481and this dudes is nice
>>105613528this one was good too
>>105616742what other thread? its not in any recent ldg threads
>>105616695
>that rightmost girl manifesting the glass out of nowhere
kek
>>105616792Hmm couldn't find it, here's a link for one from this thread.
https://files.catbox.moe/klrfp9.webm
>>105616365Anyone made x/y plot of nag? Is it another snakeoil
>>105616961I'm not dealing with that spaghetti
>>105616961not a xy plot but you can see NAG VS no NAG here
>>105616023>>105616078>>105616171
>>105616979Any without some shitty distillation lora?
>>105616979interesting, a sidegrade for video gen? I wanna see how it performs with Chroma
>>105616990nope, be the change you want to see
>>105616990I'm not posting comparisons, but it turns a nearly 20 minute, 50 step 480p gen into 7 minutes on my 3090. Unfortunately, the quality is pretty bad compared to a normal gen. And SLG doesn't seem to work with it when using a non-distilled Wan, or needs different settings than the default.
So yeah, seems to be useful for causvid and whatever-the-fuck lora users but might otherwise be a wash.
>>105617020
>seems to be useful for causvid
what? it's supposed to replace causvid
>>105617020
>I'm not posting comparisons, but it turns a nearly 20 minute, 50 step 480p gen into 7 minutes on my 3090. Unfortunately, the quality is pretty bad compared to a normal gen.
but if you go for 720p you'll wait longer, but still less than the 20 mn a 480p gen used to take, and the quality will be higher?
>>105617026Eh, whatever, I don't use crapvid or any distillation because the gens are shit. It makes sense though, because with NAG, the gens are also shit.
>>105617034I dunno, try it yourself maybe. The quality was so bad compared to the normal gens at the same seeds that I gave up. I tried a few different settings too. If anyone else has better results/settings, post em I guess.
>>105617041
>If anyone else has better results/settings, post em I guess.
there's this
>>105616744
https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank32.safetensors
anyone test this
>>105617071"This works with I2V 14B. I'm using .7 strength on the forcing lightx2v LORA (not sure if that's right but just left the same as Causvid). CFG 1 Shift 8, Steps 4 Scheduler: LCM. I'm using .7-.8 strength on my other LORAs as well but I always do so probably no change there."
will try
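fwiw the strength value is just a scalar on the lora delta before it gets added to the base weights, so 0.7 vs 1.0 is literally 70% of the distill correction. simplified sketch, real loaders handle the alpha/rank scaling per module:

def apply_lora(W, lora_down, lora_up, strength=0.7, alpha=None):
    # lora_down: (rank, in), lora_up: (out, rank), W: (out, in)
    rank = lora_down.shape[0]
    scale = (alpha / rank) if alpha is not None else 1.0
    # W' = W + strength * scale * (up @ down)
    return W + strength * scale * (lora_up @ lora_down)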
first test, 75 seconds
using
>>105617071
4080, 75s, with the settings from
>>105617077
>>105617085try with strength 1, works fine to me that way
>>105617085oops, had it at unipc, not lcm, will try that next.
>>105617085now with lcm, miku drinking water. 73s on a 4080.
this is VERY impressive, before gens would take 300-400s with teacache enabled.
>>105617041
>because with NAG, the gens are also shit.
nah, NAG definitely improves the quality of the video at cfg 1
>>105617106also, the neat thing is this is a t2v lora. but it works absolutely fine with i2v.
>>105617091will try
>>105617126
>will try
add NAG as well, it's a pretty neat addition
it's amazing how good wan 2.1 is even before all these speed tweaks. teacache got me from 15 mins to like 5 mins. now I can gen with 4 steps in just over a minute.
>>105617139and it looks like garbage. fuck off
>>105617139also I should note a lot of that time is interpolating the clip, it's actually faster than that. my rentry workflow has regular + interpolated video.
>>105617139
>now I can gen with 4 steps in just over a minute.
yeah I'm really impressed by it, it's way better than causvid that's for sure, the only caveat so far is that it doesn't seem to be responding to SLG
put em up
strength set to 1.0
>>105617157I have teacache/slg bypassed I think they didnt play well with causvid either. Still, this is rapid i2v generation. Even online sites weren't this fast. Even Google isn't this fast yet.
Open source always wins.
>no one sperging out about miku in the thread baked to get away from miku
>nb4 "spergout"
this is amazing. 83 seconds on a 4080 and i'd say 10 seconds of that is interpolating.
delicious mikudonalds
>>105617178that one is clean but she's an amputee now :(
>>105617188
74 seconds
it just works
this lora + 14b wan is genuinely impressive, idk how it works but it does. 78 seconds (with interpolating)
this is the regular video.
1.0 strength, cfg 1, shift 8, steps 4, lcm scheduler
>>105617216
>hatsune... to mik-u
kek
>>105617201
>it just works
yeah, soon enough they'll find a way to keep the quality at 1 step, we'll be reaching the peak, we're so back, it even got the KJ Boss seal of quality
https://www.reddit.com/r/StableDiffusion/comments/1lcz7ij/comment/my4nuq2/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
>They have truly had big impact on the Wan scene with first properly working distillation, this one (imo) best so far.
>>105617254at this point, I wonder if BFL is delaying kontext dev because they want to try this distillation method as well and get a better dev quality out of it
>>105617254i2v is the most fun thing with AI and now we can make videos super fast. this in tandem with illustrious/noob or flux for base images, means we can do anything.
>glazing distillation that makes the model look a generation behind what it is
*yawn*
a doll of anime girl Miku Hatsune puts on a black top hat and bows.
plushie was generated, then i2v'd. kinda neat desu
>>105617276>i2v is the most fun thing with AI and now we can make videos super fast.that's true, I had a lot of fun with Wan I2V when it was released but at some point I gave it up because I had to wait 20+ minutes to get a decent video out of it, now that it's way faster I can play with this toy again, feelsgoodman
ai generating is using as much power as my 500tb beefy server, fuck. this is about $240/mo in electricity alone. generating makes the room hot so i need the a/c on.
>>105617294time was always the issue, now it is basically a minute for a gen, originally 15 min even on a 4090. open source is pretty neat.
also this is a test gen and not what I wanted but look at the reflections, pretty cool how an AI model can figure this stuff out with no actual physics or material simulation.
a plushie with pink hair does a backflip.
getting closer! in any case, gens are far faster now, i'd love to know *how* this lora accelerates the process, I know there are turbo SDXL loras but im not sure how they work exactly.
>>105617278It's the same anon that over hypes every quality destroying speedup. As the other anon said, I only care about quality. Patience is a virtue.
>>105617315>>105617319cute, thought she was going to brap in the first one
>>105617319
>I know there are turbo SDXL loras but im not sure how they work exactly.
it's a new distillation method called "self forcing", I was never a fan of distillation stuff because it always reduced the quality hard, but that's the first time I can confidently say the speed increase is worth it
>>105617329look how clean this jacket wearing is, it's a billion times better than causvid was.
Alright shillies, I'll try it.
You better not be lying.
we are entering a new age of rapid i2v genning anons.
>>105617345How well does it work with other loras? What about realism?
>>105617360>>105617254check the reddit link this anon posted, the guy said he is using other loras with it.
>>105617360not sure, just testing various i2v gens, only the speed lora at 1.0 strength.
Can I videogen with 12 gigs of vram?
so the only thing worth it is just the distilled Lora? nothing else?
yep, this is the real deal. fast and quality.
an asian girl puts on a white baseball cap that says "LDG" in black text.
>>105617378https://rentry.org/wan21kjguide
Get the Q4's.
>>105617254top kek, kino is back to the menu boys!
>>105617378you can use plenty of stuff with multigpu node. for wan I use q8 with a 4080 (16gb) with virtual vram set to 10.0.
>>105617378>>105617386you can get a bigger quant and offload a bit to the ram with that node, anyway you get all the details on the rentry guide
>eve, what's the best platform to have?
>but it has no quality
wrong.
an asian girl takes off her green jacket to reveal a black bra.
it's safe.
>>105617419
>but the color is wrong!
this one got it right. last eve then ill test new stuff.
>>105617381
>so the only thing worth it is just the distilled Lora? nothing else?
I think it's the combination of this lora + NAG that makes it so good, it's funny that the two of them appeared within a week of each other, almost as if they were destined to work together
>>105617430I havent even used NAG yet and my outputs are decent. What does it do? Works in tandem with the lora?
so magcache + nag + light2v for max wan?
>>105617441Gives you back functioning negative prompt when using CFG 1 with Wan 2.1. Normally when using CFG 1, negative prompt is skipped which is where half the time savings from using CausVid, AccVid, and now Self Forcing comes from.
>>105617441
>What does it do?
it replaces CFG, you get the negative prompt's effect while getting the speed (kinda) of cfg 1
https://chendaryen.github.io/NAG.github.io/
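the reason cfg 1 is roughly twice as fast: normal CFG runs the model a second time for the negative/unconditional prompt on every step, and at cfg 1 that term drops out so samplers just skip it. sketch of the standard formula, model() is a stand-in and not any specific API:

def cfg_step(model, x, t, cond, uncond, cfg):
    if cfg == 1.0:
        return model(x, t, cond)        # one forward pass, negative prompt ignored
    pos = model(x, t, cond)
    neg = model(x, t, uncond)           # the extra pass you pay for with cfg > 1
    return neg + cfg * (pos - neg)      # standard classifier-free guidance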
>>105617449
>so magcache + nag + light2v for max wan?
no, teacache and magcache are supposed to skip some "useless" steps, but when you're on 4 steps there's nothing to skip lol
>>105617471so if im using this i dont need to use teacache and slg at all then?
>>105617483yeah you don't need teacache anymore, and for slg, I tried to use it but it didn't give me something different
>>105611463
>2060 near bottom
That's my boy :)
is there a list for expressions and stuff? like i cant for the life of me find out how do people refer to that one :3> (:3 but open mouth) face
>>105618916https://danbooru.donmai.us/wiki_pages/tag_group%3Aface_tags