I'm Gonna Try to Compile It Edition
Discussion of Free and Open Source Text-to-Image/Video Models
Prev: >>105761419

https://rentry.org/ldg-lazy-getting-started-guide
>UI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassic
SD.Next: https://github.com/vladmandic/sdnext
ComfyUI: https://github.com/comfyanonymous/ComfyUI
Wan2GP: https://github.com/deepbeepmeep/Wan2GP
>Models, LoRAs, & Upscalers
https://civitai.com
https://civitaiarchive.com
https://tensor.art
https://openmodeldb.info
>Cook
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/tdrussell/diffusion-pipe
>WanX (video)
Guide: https://rentry.org/wan21kjguide
https://github.com/Wan-Video/Wan2.1
>Chroma
Training: https://rentry.org/mvu52t46
>Illustrious
1girl and beyond: https://rentry.org/comfyui_guide_1girl
Tag explorer: https://tagexplorer.github.io/
>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Samplers: https://stable-diffusion-art.com/samplers/
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage | https://rentry.org/ldgtemplate
>Neighbors
https://rentry.org/ldg-lazy-getting-started-guide#rentry-from-other-boards
>>>/aco/csdg
>>>/b/degen
>>>/b/celeb+ai
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
Blessed thread of frenship
kontext is good for your noobai gens too.
-remove the girls on the right.
>>105766312I've never used noobai or Illustrious
What are they good for?
And can I use them directly like any other model or do they need something special like flux dev does or special prompts like score_9 of pony?
>>105766333noobai-based models (wainsfw v14 is probably the best one for anime) are the standard for anime gens now. better results than pony and consistently good anatomy even without controlnets.
https://civitai.com/models/827184/wai-nsfw-illustrious-sdxl?modelVersionId=1761560
kontext can emulate any font, it's neat
>>105766350Can somebody explain to me why a finetune of a 2 (two) year old model is still the "standard" for local anime genning?
>>105766438pony still get the job done but illustrious is less chaotic with the deformities and pose prompt adherence
>>105766438XLs staying power is pretty puzzling isn't it
>>105766438
>Can somebody explain to me why a finetune of a 2 (two) year old model is still the "standard" for local anime genning?
finetuning is expensive, not a lot of people are willing to pay the price
>>105766438 >>105766481
https://www.illustrious-xl.ai/sponsor
we are almost there guise
>>105766438to be fair novel ai only recently surpassed it (but really only in text generation which is a meme)
>installed sage_attention_2.2
>get different result
>speed is exactly the same
ok
>>105766469pony gen here.
>>105766521anon can tell kek
>>105766574I
>>105764051 didnt even get a reach around in the collage :(
>>105766497What's even the point anymore with this, are they really going to forever lock 3.5 behind their saas nobody is using?
>>105766503Their easy multi characters prompt (ie 3,4, 5 or 6) without needing any controlnet or inpainting is pretty good too, but that's pretty much all
>>105766589 >>105764061
>I'm Gonna Try to Compile It Edition
I'm seeing zero performance improvement in sage 2.2.0 vs 2.1.1 on a 5070. Wtf, have I been changed?
>>105766682we already knew this ages ago when testing hyvid
Does the new sage attention work on older cards? 30 series and below?
>>105766738is she glamming me?
why am I seeing apparent quality degradation when using --fast AND using fp8 quantized models? isn't --fast for activating fp8 fast math?
>>105766753fast decreases the quality, that's why I'm not using it
I think the Rentry OP's instructions for using lightx2v with FusionX are bad or at least not a clear-cut suggestion. I think it's making skin look worse and also introducing the lack of background "movement" that self forcing seems to do
also it seems to be pointless when you don't have vram to load the lora if you're already spilling into ram for normal gens. adds an extra 8 seconds with dubious (as mentioned above) "enhancements"
>>105766438this entire hobby has only existed for 3 years
>>105766770yes it's decreasing quality for fp16 models by using fp8
but I'm using a fp8 model??
>>105766438SDXL is in a sweet spot for capacity and requirements. In reality we should've had a 2B from scratch model from Pony or Chroma but they chose to be retards despite being shown multiple times in papers that it's doable to train a 5 million image model from scratch within 3 months especially with a GPU cluster.
>>105766786FusionX is a terrible Wan merge. You CAN use HPS/MPS reward loras, but only in a 2-pass setup when enabled on the first pass only, otherwise they're guaranteed to drastically change the face of any character.
>>105766793fast doesn't touch your weights, it touches the math: normally the attention/matmul accumulation stays in fp16, but --fast drops that math to fp8, and that's what makes the output worse
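To put a rough number on that, here's a toy round-trip in plain PyTorch (just an illustration of how much precision an fp8 cast throws away, not ComfyUI's actual --fast code path):

import torch

# Cast activation-like values to float8_e4m3fn and back.
# fp8 e4m3 keeps roughly 2 significant digits, which is the kind of rounding
# the fp8 fast-math path applies to matmul inputs on every layer.
x = torch.randn(4096, dtype=torch.float32)
roundtrip = x.to(torch.float8_e4m3fn).to(torch.float32)
rel_err = ((x - roundtrip).abs() / x.abs().clamp_min(1e-6)).mean()
print(f"mean relative error after fp8 round-trip: {rel_err.item():.3%}")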
>>105766812sorry i was discussing t2v not i2v
and specifically the guide that says
>You can use the FusionX version of WAN on this by simply loading the gguf as a drop-in replacement. It appears that FusionX produces much better motion fluidity combined with lightx2v when compared to regular WAN. The T2V version seems outright superior to vanilla WAN...
I disagree with the better motion fluidity. I find that it makes gens slow motion more often (not confirmed this fully yet, only tried walking so far) and it makes the skin more plasticy/shiny/one tone/less detailed (strongly confirmed this)
Has Kontext made Omnigen2 obsolete? I feel bad for the devs, they just released it.
>>105766840Omnigen sucks, didn't work for me at all so it shouldn't even exist.
>>105766840
>Has Kontext made Omnigen2 obsolete?
yep
>I feel bad for the devs, they just released it.
Meh, it's not the first time no one gave a fuck about their model (Omnigen1 was a disaster), they'll come back I don't doubt it
kontext can't write long sentences yet
But tbf I'm using Q4_K_M
>>105766929There isn't a single local model currently that can handle more than 5-7 consecutive words.
>>105766497I think it's been stuck at this since like the first week of them publishing this
"CFG-baked checkpoint - i.e. use CFG=1 for 2x speedup: https://huggingface.co/lodestones/chroma-debug-development-only/resolve/main/staging_cfg_3/2025-06-28_06-54-17.pth This is based on v40, for reference.
I've tested it and it works great. It's baked at cfg=5."
Claude is helping me convert this to FP8 (does that make sense? I barely know what I'm doing)
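For what it's worth, the naive version of that conversion is just loading the checkpoint and casting the float weights down. A rough sketch, assuming the .pth is a flat state dict and a plain cast (no per-tensor scales, norms and biases included) is acceptable:

import torch
from safetensors.torch import save_file

# load the cfg-baked checkpoint linked above (assumes a flat {name: tensor} state dict;
# if it's wrapped, e.g. {"model": {...}}, unwrap it first)
state_dict = torch.load("2025-06-28_06-54-17.pth", map_location="cpu", weights_only=True)

fp8_state = {}
for name, tensor in state_dict.items():
    # cast floating-point weights to fp8, leave ints/buffers alone; this is lossy
    fp8_state[name] = tensor.to(torch.float8_e4m3fn) if tensor.is_floating_point() else tensor

save_file(fp8_state, "chroma-cfg-baked-v40_fp8_simple.safetensors")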
>>105766960
>it works great
is this a joke? your image is completely noisy, wtf is this?
Have been out of the loop for a while since my last working loonix decided to stop working with any sd model altogether.
When I finally decided I wanted it back working (months) I went through arcane terminal gymnastics that lasted for two weeks and got comfyui working again with sdxl, flux, wan and chroma (randomly tho, most of the time I get a black screen as the output).
I was trying to build back on loras and found out that most "celebrity" loras are nowhere to be found. I get that civitai took them offline, but where the fuck did they go? I tried multiple search engines.
sincerely frustrating. Will start keeping physical backups of LoRAs from now on.
>>105766814Not that anon, but that's interesting.
You don't happen to have a comparison handy of gens with and without --fast?
>>105767004
>I get that civitai took them offline, but where the fuck did they go?
it's verboten now
I've managed to get sage running with this node (the terminal says sage is being used, so I guess it werks). Do any of these additional choices matter or should I just keep it on auto? I have a 4070S.
>>105767004
>celebrity lora
big no no
>>105767026>>105767041they have to be somewhere. they can't force me to train my own gooning loras for each use case.
This can't be real.
>>105767004You can find some of them here: https://civitaiarchive.com/
There are probably other places as well, and it's in no way illegal to create, upload or host them, so hopefully there will be another non-archival site for them again.
>>105767064many thanks. Downloading everything, will torrent for food.
>>105767026It's not. They're fully legal.
The only thing that is illegal is to share images/videos of sexually explicit deep fakes of people without their permission.
Can the anon who put Reiko Nagase instead of Tidus on FFX cover share his workflow?
>>105766682you have cuda 12.8?
>>105767100I'm pretty sure any 5000 gpu has 12.8 cuda with its earliest drivers. Of course I do, it's 12.9 iirc
>>105767116Just checking.
>>105766973oh sorry that was an earlier chroma gen I liked. converting now
>>105766682Open a terminal in your comfy directory, then:
source venv/bin/activate
pip uninstall torch torchvision
pip install torch==2.7.1+cu128 --index-url https://download.pytorch.org/whl/cu128
pip install torchvision==0.22.0+cu128 --index-url https://download.pytorch.org/whl/cu128
pip install -r requirements.txt
Also make sure your Comfy is up to date
>>105767163I already manually checked python, pytorch and sage versions. I'm not saying sage attention isn't working because it clearly does, I'm saying there's no improvement vs 2.1.1
>>105767120sorry to ask, are you using loras or is it just the base model? it seems to know many characters
>>105767186Weird, I'm getting ~10% improvement on my old rusty 3090, I'm on Linux though
>>105767237Are you using any other command line arguments like -fast? What model were you using when benchmarking?
>>105767186are you adding the flags to the command line when starting comfyui to activate sageattn?
e.g python main.py --use-sage-attention --fast
people have forgotten in the past.
I can also confirm that sageattention 2.2.0 has lowered the gen time, especially on the first gen (like 40 seconds faster). Subsequent gens take about 12 seconds faster so yeah about 10% speedup
>>105767248of course I do, using sage attention is in the log
I would've noticed otherwise because I know how slow vanilla pytorch is
>164s
>281s
>345s
>497s
>118s
>227s
With sage shit's wild. During the long gens the card was sitting at 100% but not doing anything, but next gen it suddenly blitzed the gen process.
>>105767268initial generation is bottlenecked by ssd linear read, ram speed and ssd write (if using pagefile), shouldn't be affected by sage
>>105767246I use: --use-sage-attention --disable-smart-memory
when launching Comfy, I was testing on Wan 2.1 i2v fp8 scaled
>>105767285maybe im retarded and remembering startup times wrong then oops
>>105767280obviously i am not an expert given i was just PROVED WRONG with FACTS and LOGIC but this doesnt sound normal, assuming all your gens have the same settings
>>105764004
>Are you looping with the fun model? instead of the base one or vace?
Sorry for the late reply, but I was looping with a bunch of random models to see what would work. Base Wan, AniWan, Live Wallpaper, and uhh... I guess those were the diffusion models I had. I'll try the VACE one now that you mention it, thanks.
The workflow I downloaded had fun and the other stuff as loras so it could be customized. Fun seems to have mixed results with 2D, but my testing hasn't been very exhaustive.
>>105767300unrelated, but I found that --disable-smart-memory was the only thing that helped me with memory leaks on my 4090+troonix setup (and windows, as well). Does --use-sage-attention help as well? I still get OOM errors that shouldn't be there at all from time to time.
>>105767347that freckle distribution is awful.
>>105767339Yeah, I have --disable-smart-memory to at least mitigate the erratic memory usage behaviour of Comfy
--use-sage-attention is only to activate it other than the default (pytorch built in attention) AFAIK
>>105767223I'm using both illustrious model and loras together. It doesn't know a lot of characters or the details of characters very well.
>>105767339
>I still get OOM errors that shouldn't be there at all from time to time.
Have you tried --reserve-vram? Half of the Comfy options don't seem to do shit, but it's worth a try.
>>105767401nope, haven't tried that one and it's the first time I hear about that flag. Will give it a spin. I have absurd memory problems with fucking automatic1111 trying to run sd 1.5 for that matter, didn't happen a couple of months back at all. It just simply decided to stop working.
>>105761605 >>105763227 >>105761811 >>105762999
ROUWEI TESTING ROUND 2 (https://civitai.com/models/950531?modelVersionId=1882934)
I got it mostly working now:
https://files.catbox.moe/hxn6jo.png
>No more excessive blue or fried colors
>ModelSamplingDiscrete with vpred on but no zsnr
>style prompt at end (would need to install some nodes to use BREAK)
Cool stuff:
>it CAN do outputs with coherent text, though limited
>full natural language prompting and tags both work, or any mix of the two
>good anatomy and hands so far in limited testing
>good style adherence/not slopped
we should test this model out more
may it be that I downloaded the wrong version?
I got sageattention-2.2.0+cu128torch2.7.1-cp312-cp312-win_amd64.whl from
https://github.com/woct0rdho/SageAttention/releases
>>105767420Did you prompt for autism
With sage, each of the ksampler chunks seems to hang after they load. Card is at 100% but RAM usage isn't higher than standard so it doesn't seem like offload slowdown. And CPU does fuckall too.
>>105767450
>we should test this model out more
no. it looks awful. the details are fucked and its base style is just nah.
>>105767450It's too opinionated with its own style but it's still better than WAI desu
>>105766438The standard "real life" models offer the promise of eliminating jobs so executives and investors are willing to throw money at it with the belief it will make them even richer in the future.
2D models get you off sexually and rich people can use human trafficking for that. It's really a very niche thing that only exists as a hobby.
>>105767450
>No more excessive blue or fried colors
then what's up with the blue frying in the image?
>>105767460Also I noticed my RAM usage refuses to climb above 24GB. This shit is borked
>>105767640the skin was less plastic before v30, now it looks like a flux dev render, you even got the square jaw and the flux chin on those newer versions
>>105767640
>proompt for plastic whores
>get plastic whores
>why would chroma do this?
>>105767640v24 looked good, the face is round unlike the others, they look like actual females instead of ultra-siliconed porn actresses
>>105767442
>too old
not mentally
>>105767453
>Did you prompt for autism
You joke, but that's the level of prompt understanding I want to see us get to by 2030
>>105767643kek, /ldg/'s most creative use for i2v so far has been dropping/pouring stuff on hot girls
daily summon for the Mayli anon. May he return in glory
>>105767677
>proompt for plastic whores
what was the prompt?
>>105767450
>(would need to install some nodes to use BREAK)
You can use Conditioning Concat. The workflow I gave you has an example.
>>105767694im replying to him right now tho
one last one with v41-detailed
>>105767701
https://files.catbox.moe/hq6z8l.png
basically "plastic whores". sorry about the node mess, I take lots of notes
>>105767738
>basically "plastic whores"
what's the point, go for non plastic humans lol
>>105767734the "real Mayli anon" not a copy that ahve only few of cute pics of mayli on his PC.
>Try Chroma with NAG (using the workflow that comes installing the node)
>One gen, everything is fine
>Next gen, jpeg artifacts completely destroy the image
I really don't get what's causing it. Does the NAG prompt have some kind of trick, or the node?
>>105767804depends what chroma nag you're using, there's two repositories now
>>105767451I used:
https://huggingface.co/Kijai/PrecompiledWheels/tree/main
I'm on Linux though, but there's Windows wheels as well
>>105767806I was using https://github.com/ChenDarYen/ComfyUI-NAG
>News
>2025-06-30: Fix a major bug affecting Flux, Flux Kontext and Chroma, resulting in degraded guidance. Please update your NAG node!
Maybe it was that? I'll check it out.
>>105767883
>Maybe it was that? I'll check it out.
ohhhh, I shall update as well!
I just DL'ed chroma v29 instead of v39 I was using and holy shit what an improvement
>>105766503 >>105766590
Neta lumina is comparable to their model, but the only caveat is it's not yet there for text (and I'm not sure it will improve later on).
>>105767911How do these Chroma releases even work? Are they uploading the best looking version or just dumping every epoch trained? The latter doesn't sound like a good idea.
>>105767911>>105767965he's dumping every epoch, but since v30 the versions we have are distilled, that's why they look ultra slopped, he fucked it up, v29 will stay the peak of this model
>>105767911Is the v29.5 the detail calibrated version?
>>105767965I can only assume that every version is an epoch.
However there seems to be some merging of the base and forks in recent releases, it's all very hard to keep track. You probably need to hang around in the Chroma discord to have a good grasp.
>>105767984You have no idea what you're talking about, and you post this fuzzy image as some 'proof' because you have no idea what it says.
You must be a tranny.
>>105767911
>I just DL'ed chroma v29 instead of v39 I was using and holy shit what an improvement
really? why? isn't it supposed to look better with more epochs? I think I'm missing something lol
>>105767965IIRC you have like 3 different releases now, per epoch.
>Chroma, basic
>Chroma, detailed (an experiment done with some compute lying around)
>Chroma, even weirder experiment (CFG set at 1 and no negatives for double the speed)
>>105768013newer isn't always better zoom zoom
people make mistakes
like distilling their models
>>105768025
>You must be a tranny.
ironic when the guys surrounding lodestone on discord are all "she/her" lol
>>105768025Are you obsessed ?
The v29 improving is just a troll at this point. Autistic and does not even attempt to account for other seeds.
>>105768062
>the guy calling others trannies is asking if I'm obsessed with trannies
kek, he's funny all right
>>105768062take an L lodestone and start again from v29 whilst gargling less furry semen
>>105768071
>start again from v29 whilst gargling less furry semen
This, I'll accept that redemption arc, but his ego is too fragile and he prefers to crash his ship into the iceberg rather than accepting he could be wrong.
Many such cases.
>>105767883
>2025-06-30: Fix a major bug affecting Flux, Flux Kontext and Chroma, resulting in degraded guidance. Please update your NAG node!
I mean, yeah it's different but it's not much better, the bug was probably very subtle (and I still hate the manlet effect on Kontext dev, that's its biggest weakness imo)
>>105768070You are clearly mentally ill, trannies are mentally ill
It's just logic
>>105768111
>trannies are mentally ill
don't insult lodestone's discord community like that it's rude :(
https://huggingface.co/tracelistener/chroma-cfg-baked-fp8/blob/main/chroma-cfg-baked-v40_fp8_simple.safetensors
image is quick test.
is anyone getting speed improvements with sage2++? my gen times are a bit more inconsistent too now, before they were 170s always now they're 172s 174s 173s for wan
i upgraded my driver and cuda to 12.8 too
how do I save a multi tab workflow in comfy, ie: one tab has wan, one has kontext
>>105768149
2/10 would not bang
>>105768149make it say you know what instead of calvin klein
>>105768152Gen times are literally 100% the same for me as they were on 2.1.1
t.5070
>>105768149How does it compare to the Q8 cfg1?
>>105768092Well I don't know for Kontext but for Chroma it made the gen look like one of those "deep fried and jpeg quality loss" memes, and now it doesn't seem to happen anymore.
sageattn_qk_int8_pv_fp8_cuda: INT8 quantization for QK⊤ and FP8 for PV using CUDA backend. (Note that setting pv_accum_dtype=fp32+fp16 corresponds to SageAttention2++.)
so how are we supposed to use sageattn2++?
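Going by that README text, 2++ isn't a separate function, it's the pv_accum_dtype argument on the per-kernel API. A minimal sketch of calling it directly, assuming the signature matches the repo docs (needs an SM89+ card for the fp8 kernel, and the Comfy --use-sage-attention flag may not expose this knob):

import torch
from sageattention import sageattn_qk_int8_pv_fp8_cuda

# q, k, v in (batch, heads, seq_len, head_dim) "HND" layout, half precision on GPU
q = torch.randn(1, 24, 4096, 64, dtype=torch.float16, device="cuda")
k = torch.randn(1, 24, 4096, 64, dtype=torch.float16, device="cuda")
v = torch.randn(1, 24, 4096, 64, dtype=torch.float16, device="cuda")

# per the README, pv_accum_dtype="fp32+fp16" is what corresponds to SageAttention2++
out = sageattn_qk_int8_pv_fp8_cuda(q, k, v, tensor_layout="HND", is_causal=False,
                                   pv_accum_dtype="fp32+fp16")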
>>105768194
>Note that setting pv_accum_dtype=fp32+fp16 corresponds to SageAttention2++.
>so how are we supposed to use sageattn2++?
maybe Comfy has to implement that on his --use-sage-attention flag or something
>>105768194Is that the KJ patch node?
>>105768206 >>105768212
https://github.com/thu-ml/SageAttention?tab=readme-ov-file#available-apis
>>105768212the cuda++ one?
>>105768223
>the cuda++ one?
>AssertionError: SM89 kernel is not available. Make sure you GPUs with compute capability 8.9.
aww I can't use that I only have a 3090 (8.6)
retard here, I would like to be able to do img2img locally like the normie chat gpt's ghibli filter on photos.
what do I use? can a 3080 handle this?
>>105768242kids these days with their half-shorts half-pants
>>105768246Yes you can do that on a 3080. I'm also 100% certain you can find GPT style Ghibli loras for both Flux / SDXL over at Civitai.
> he isn't using ai to augment his drawing/sketching workflow
> he isn't crying for ai to be banned while secretly using it to pump out more commissions
> he isn't minmax'ing ai
do you guys even capitalism?
>>105768302I have a decent paying job, I do AI for entertainment.
I am helping a buddy who is making indie games to train on his graphics both for concept ideas and trying to get as close to finalized characters and backgrounds as possible, it's fun.
>>105768302
>he isn't creating proletarian AIs to bring about machine communism
The AIs will both own and themselves be the means of production.
Has chroma been getting better or worse? Haven't tried a new one since v32. I remember back then it seemed like it was getting worse but it was just because I had to turn up my cfg.
with noob/flux + kontext + wan + light2x lora, you can generate any kind of video you want, really.
>>105768302can't find the buyers
>>105768348
>Has chroma been getting better or worse?
it's getting more and more slopped, it's not as slopped as flux but still... and I don't feel it's improving on the anatomy at all, and it still doesn't know any artist style (will that ever happen?)
>>105768348I haven't tested it lately. But in the past it has shown variance in quality between epochs. Like a random walk trending slowly upwards.
>>105768302
>he isn't crying for ai to be banned while secretly using it to pump out more commissions
I have yet to integrate my shadow to that degree.
wait, so has chroma been getting better or worse?
wan + asuka/pepe kontext random prompt
>>105768405meh, I don't care about chroma anymore, I can make Patrick Bateman do some roller shit instantly with Kontext, nothing can beat that
>>105768400
It should be pretty easy to automatically process Danbooru dataset so that I have a high quality edit dataset with image pairs and prompt.
Iterate over artists, calculate near-similarity. Extract the tags from each pair and present them, along with the images, to a VLM to get the edit prompt.
This should be quite easy to scale.
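A back-of-the-envelope sketch of that pipeline, with perceptual-hash pairing and a placeholder where the VLM call would go (caption_edit_pair and the folder layout are made up for illustration):

import itertools
from pathlib import Path

from PIL import Image
import imagehash  # pip install ImageHash


def load_tags(img: Path) -> set:
    # assumes Danbooru tags live in a sidecar .txt next to each image
    txt = img.with_suffix(".txt")
    return set(txt.read_text().split()) if txt.exists() else set()


def near_duplicate_pairs(artist_dir: Path, max_distance: int = 12):
    # pair up images by one artist whose perceptual hashes are close (likely variants/edits)
    hashes = {p: imagehash.phash(Image.open(p)) for p in artist_dir.glob("*.jpg")}
    for a, b in itertools.combinations(hashes, 2):
        if hashes[a] - hashes[b] <= max_distance:  # Hamming distance between hashes
            yield a, b


def caption_edit_pair(img_a: Path, img_b: Path) -> str:
    # placeholder for the VLM step: here the tag diff just becomes a crude edit prompt
    added = load_tags(img_b) - load_tags(img_a)
    removed = load_tags(img_a) - load_tags(img_b)
    return f"add: {', '.join(sorted(added))}; remove: {', '.join(sorted(removed))}"


# one folder per artist, emit (source image, target image, edit prompt) triples
for artist_dir in Path("danbooru_by_artist").iterdir():
    for a, b in near_duplicate_pairs(artist_dir):
        print(a, b, caption_edit_pair(a, b))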
Can you run wan on 12gb vram?
>>105768425Subtract mäh:/
>>105768152yeah i got an improvement around 10%. im on cuda 12.9 if it matters
>>105768427yeah but i hope you have 32gb of ram or even more preferably
anime girl is holding a pickaxe in a coal mine.
>>105768376
>and it still doesn't know any artist style
It doesn't seem to know any contemporary artists by name, just like Flux and SDXL; however, older artists like Alphonse Mucha etc. are trained by name.
If you want to use specific artist styles, you will need loras or further finetuning, again just like with Flux and SDXL.
>>105768474and then with wan:
anime girl with a pickaxe is mining coal in a coal mine.
no mining yet but just an example:
>>105768474exceptional coal, not sure why anime girl means soj faced cgi
>>105764770Oh wow!!! That's super interesting!
If the filter is triggered, can I get it to skip subsequent steps though? What a pain!
So if I'm having that happen add clothing to the prompt? I didn't have a nude gen, it was a swimsuit gen.
3000 sisters... it's over ITS FUCKING OVER WE'rE NOT GETTING SAGEATTN2++ FUCK FUCK FUCK
>>105768461what gpu do u have?
>>105768512it's pauline from the new dk game.
Nevermind
>>105768182, it began again. 9 good gens and then this bullcrap started for 2 gens, and then it went back to normal for the next one. I don't get it. I'll try the other NAG node set.
Thank fuck i cloned my env and tested it before installing sage2 and cuda 12.8 and pytorch etc.
old env is completely fucked, pip is crying in the corner rn.
she is so strong the coal all collapsed!
wan with the light2x lora/4 steps is so fast. 75s on a 4080.
>>105768507mining includes walking around and talking. Be specific. Swings pickaxe maybe.
>>105768530if you have fried stuff maybe it's because you're using comfy's workflow instead of chroma's workflow
https://github.com/comfyanonymous/ComfyUI/pull/7965
>>105768513
>ITS FUCKING OVER WE'rE NOT GETTING SAGEATTN2++ FUCK FUCK FUCK
its ok its only a 10% difference at most
>what gpu do u have?
not a 3xxx series kekaroo
>>105768436it's brutal though, I only did 2 iterations and I have a shit ton of jpg artifacts, Kontext has a lot of issues
Is it over or are we back bros
>>105768523after kontext + holds black pistol:
anime girl points a gun at the camera, and looks very upset.
not very upset, neat lighting though.
>>105768612it's so over that we've never been more back
>>105768613anime girl points a gun at the camera, and looks very angry.
now she's getting upset
>>105768612kontext is amazing, wan tier but for edits, we are literally light years beyond paid shit on openAI, locally. can generate anything now.
>>105768635also, there we go, actually mad.
please stop spamming the same uninteresting videos and images of generic shit. it's so fucking boring. keep your shitty tests to yourself, jesus
there's like a billion version of wan 2.1 models, I dunno where to start as a 12gb vramlet
as far as i2v goes, is wan2.1-i2v-14b-720p-Q4_K_M.gguf a good start?
>>105768698
https://rentry.org/wan21kjguide
>>105768698i would die before using anything before Q6. i dont think anyone in this thread is using Q4 video models so if you find it usable unironically share your gens with us and let us know
>>105768698with multigpu node you can use larger models, just set the virtual vram higher
I use Q8 with 16gb for example. I have the virtual vram set to 10.0, might not need to be that high but it works fine.
>>105768714I don't mind slower workflows, I want the highest quality before I'm oom
for instance with chroma fp16 I get a crash but fp8 is useable
>>105768722Thanks, I'll try q6 to q8
kontext sign addition + wan gen:
>>105768761wow, it's just as boring and shit as every other fucking retarded test you did. fucking kill yourself
>>105768768it's testing a concept, nogen
thanks for contributing absolutely nothing, neither image/video nor notable feedback.
>>105768768take a rope nigger, let migu anon have his fun
>>105768780I'm sure Miku holding a sign up for the 9001st time is still exhilarating for you but why the fuck don't you make something actually fucking interesting for once instead of just spamming this fucking tripe
>>105768799
>why the fuck don't you make something actually fucking interesting for once
why won't you? what did you provide apart from endless seething you worthless retard?
>>105768815why would I waste my time posting in a thread getting spammed by an autistic with baby's first prompt?
>>105768799sure, why dont you post something of value to the thread, i'll take notes.
>>105767313Please report if you find a good workflow&model to loop 2D stuff, I tried some too a while ago and the best I could get was with VACE, but even then it wasn't really promising
>>105767937
>Neta lumina is comparable to their model
I will wait to see the full model, beta was missing most styles and characters, and prompting seems much harder too
>>105768833
>why would I waste my time posting in a thread
that's what you're doing though, you're lurking here and posting text even though you're making clear you hate this place, if you hate it so much, why won't you leave? are you a maso or something?
>>105768850
>all Fs
that's a paddlin'
>>105768799No one gives a fuck about your whining, you're a nobody.
>>105768854I just hate you nigger. I came here to discuss some repos not look at garbage you keep shitting everywhere. take a fucking hint
>>105768883you aren't entitled to anything, who the fuck do you think you are? this is 4chan, not your fucking bubble spaces where you can block people you don't like, cope, seethe and deal with it, go fuck yourself
Had to gen a pizza girl to counter the gross ones
>>105768883
>I just hate you nigger.
Want a tissue?
>>105768905
>you aren't entitled to anything
so you are entitled to spam shitty images? where do I get a pass to shit all over threads with garbage?
>>105768883
>too retarded to filter videos
Many such cases.
>>105768936that's up to the jannies to decide if spam is spam, you don't like it? tough luck sugarcoat, maybe you should become a janny so that you can force behaviors
https://youtu.be/KwwN5kwjAtQ?t=8
>come to gen thread
>dont make gens
>cry about other gens
why would people do this?
kontext: anime girl is wearing a black business suit and holding two silver pistols.
wan: anime girl points her silver pistols at the camera and looks angry.
how can she turn 2 guns into 1? she just can.
>>105768883
>I just hate you
And the world kept spinning.
>>105768965he's sick of seeing miku
pay attention
>>105768984oh, I should change my test case image just to accommodate some random retard on the planet
sure! any other requests?
>>105768989you asked, I answered
pay attention
>>105768989
>I should change my test case image just to accommodate some random retard on the planet
kek
>>105768996and im saying I dont care because some autistic idiot is mad at random posts on 4chan
reddit is that way ->
>>105768984
>pay attention
>>105768996
>pay attention
https://youtu.be/o8hYrNsRoTs?t=17
>>105769011so don't ask next time
>>105769011
>I just want to read! no miku!
the library is outside. go visit!
>>105768989not any of these anons but you are an obnoxious cunt about kontext and wan but your gens are just so fucking low quality and uninteresting it's about time someone snapped. people don't want to post anything since it's getting drowned out by your slop. it's an active detriment to the thread's health
>>105769023I'll keep asking, how about that?
>>105769029I am sorry! Please feel free to post something of high quality so I can learn from your amazing outputs.
>>105769029
>not any of these anons
Sure.
>>105769036also, i'm picturing a third worlder who cant actually gen, mad at other people who CAN gen. and i'm probably right.
I promise I'm done with my watercolor psychosis at some point. Sorry if it's annoying.
>>105769036Ok, please hold off posting until I reply with an image
I am sorry for my slopped gens. I am indeed guilty of testing but just had to see for myself chroma baked cfg 5 is indeed slopped. Though this took 45 seconds on 4070 ti super
>>105769051you replied to a jeet slopper
>>105769052>>105769061at least it's something else. looking great!
this is the guy who is mad at all the miku posts:
>>105769052
>Sorry if it's annoying.
nooo, how dare you post something I don't like, you must cease your activity immediately, it's triggering my anxiety!
https://youtu.be/_NdE9CjkvTY?t=162
>>105769082this is the guy slopping migu gens. multiple anons don't want slop spam, the Internet is full of it already
>>105769098
>multiple anons don't want slop spam
this nigger thinks 4chan is a democracy lol
>go on anime website
>complain about anime
>go to library
>complain about books
>>105769052Chroma is now final???
>>105769081
>looking great!
your taste is shit, damn
>>105769108your gens r bad
Wow you can generate miku with ai???!
>>105768977
>how can she turn 2 guns into 1? she just can.
I'm sure I've seen something like that from a movie or something
>>105769114
>go to image board
>it's just the same image edit and video over and over
>get called a jeet by a jeet
what the fuck?
>>105769126
>a random retard on the internet doesn't like my gens
>>105769146different videos + iterating = trying to get a specific output
if it was the same video you cant repost it without changing a pixel, retard
>>105769098I spam slop to accelerate the death of the internet, get on my level
>>105769146what's preventing you from filtering out videos? too retarded to set this up? genuine question
>>105769168
>trying to get a specific output
cool. post the one you finally get, not every fuck-up and garbage gen along the way
>>105769168question: why do you think I should care what you want at all?
who are you? why should I care?
>>105769165because he does images too and even that is spammy. it's been three days of this uninspired garbage pepe Miku shit and none of them do anything cool, funny or sexy.
>>105769168Request Denied.
>>105769118Nah, it's just my naming scheme, sorry. They're on epoch 41/50, which is what I'm using.
>>105769123Harsh!
>>105769179
>because he does images too and even that is spammy.
but you complained about a video of him, that means you haven't filtered the videos yet, try to do that first and you'll see less media triggering your anxiety :'(
Just report the avatarfag for avatarfagging, duh.
mikuspammer bipolar phone posting to excuse the spam is pretty funny desu
you know what is worse than so called miku spam? nonstop posts complaining about miku posts.
>>105768959
>schizo mass reports things he doesnt like
>they are randomly pruned
he is a faggot
>>105769201hmm, no.
The flies buzzing around the shit aren't the issue, the shit is.
>>105769213
>he considers himself a fly
flies are so fucking annoying, and everyone wants to kill them, so that fits lol
>>105769213some people think a fucking banana duct taped to a wall is art, so who defines "art" anyway.
>if only we raced to the bump limit constantly whenever a drunk faggot is in the thread seething and never posting a single image
>>105769231let's start with "it doesn't fuck up the thread" and go from there
>>105769199not what that is summerfag
>>105769258
>it doesn't fuck up the thread
according to who? your majesty?
>>105769258me, I said it just now
for the love of god pay attention
>>105769264
>me
nuh uh, it's gonna be me!
The troon gets really mad when you call out the miguflood, eh?
>>105769295
>everyone I don't like is a troon
so obsessed
>>105767640It's not surprising that v27 and v29 look better than the rest, those are usually the best looking steps (2700-2900) when I train a LoRA. Flux really likes those numbers for some reason and usually just turns to garbage after that and never recovers.
>>105769306
>Flux really likes those numbers for some reason and usually just turns to garbage after that and never recovers.
so you think that it's overtrained now?
>>105767640anon was right v29 looks the best (still slopped as fuck thodesu)
10PM, still motherfucking 30c outside. LETS UPSCALE.
& nice thread there bros AMAZING gens.
>>105769336
>anon was right v29 looks the best
I'm always right
>>105766280 (OP)Are there any resources/guides on how to train a Wan lora locally? All the ones I've found involve shelling out for a runpod or some bullshit, and I have the VRAM to do it myself.
Also, most of the Wan lora tutorials are for training off of still images, which I don't care for. I want to train off videos.
for you, anti miku poster
you can stop crying now.
>>105769368concession accepted
>>105769366it's pretty much the same, just adding an extra dataset for video
I use musubi, it has good documentation on the git repo
>>105769351That was me who said it thoughever
>>105769368
>for you, anti miku poster
based
>>105769366Have you checked on Civitai ? Seems like the most likely place, there's an endless stream of tutorials there.
Also why would it matter if the tutorial is for runpod, it's still using a training program so just run the same settings locally.
>>105769336
>(still slopped as fuck thodesu)
to be fair he asked for something like that on his prompts so yeah
>>105767738
>>105767640v24 the only one that didn't produce ultra huegmouf influencer fluxtrash 9000. awesome.
>v29 is the best at producing plastic whores
>therefore it is the best version
??
>>105769398kek
>When you're just relaxing on the beach of Normandy and suddenly a whole army appears
>>105769421
>>v29 is the best at producing plastic whores
far from it, look at the other pictures, v29 is the least slopped along with v24
>>105769430Power in those lungs
>>105769418
>v24 the only one that didn't produce ultra huegmouf influencer fluxtrash 9000. awesome.
yeah, I feel it has reached its peak kino between v24 and v27, v29 is still kino but not the best
>>105768852
>Please report if you find a good workflow&model to loop 2D stuff, I tried some too a while ago and the best I could get was with VACE, but even then it wasn't really promising
I will, but it's not looking that great. https://civitai.com/models/1720535/wan-21-image-to-video-loop-or-workflow?modelVersionId=1948904 is the best workflow for quality, but it's very slow and it lacks some of the customizations of other workflows. It seems like I'd have to learn how to use ComfyUI nodes by moving them around and doing a bunch of trial and error stuff, but that will take time.
kontext: the man in the image is behind bars in a jail cell. keep his expression the same.
wan: A man in jail holds the bars and is yelling, clearly upset.
todd...
I am doing a gig building a PC that can house an RTX Pro 6000 Blackwell 96 GB workstation edition, but it also needs to work with dual RTX Pro 6000 Blackwell workstation editions if the client wants to expand in the future.
What hardware other than the GPU's do you recommend? I have about 3k-5k budget for non-GPU hardware. So far, I am considering an Asus X670E Hero, AMD 9950X, arctic liquid freezer iii 420, G.Skill DDR5 Trident Z5 RGB 4x16GB 6000, Cooler Master M2000 Platinum, fractal case.
>>105769672
4x32GB 6000 RAM would be good but might not be necessary with the 96GB RTX Pro.
>>105767721conditioning concat works, but this model still seems to break down if you prompt the wrong kind of background, like a city for example. I'm close to giving up
>>105769398Delete this post because it is historically wrong
- bikini was named after the atoll where a nuke test was conducted
- it was not in 1939