
Thread 8613148

901 posts 264 images /h/
Anonymous No.8613148 [Report]
/hgg/ Hentai Generation General #007
Sovl edition

Previous Thread: >>8600493

>LOCAL UI
reForge: https://github.com/Panchovix/stable-diffusion-webui-reForge
Comfy: https://github.com/comfyanonymous/ComfyUI

>RESOURCES
Wiki: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki | https://comfyanonymous.github.io/ComfyUI_examples
Training: https://rentry.org/59xed3 | https://github.com/derrian-distro/LoRA_Easy_Training_Scripts | https://github.com/bmaltais/kohya_ss | https://github.com/Nerogar/OneTrainer
Tags: https://danbooru.donmai.us/wiki_pages/tag_groups | https://danbooru.donmai.us/related_tag
ControlNet: https://rentry.org/dummycontrolnet | https://civitai.com/models/136070
IOPaint (LamaCleaner): https://www.iopaint.com/install
Upscalers: https://openmodeldb.info
Booru: https://aibooru.online
4chanX Catbox/NAI prompt userscript: https://rentry.org/hdgcb
Illustrious-related: https://rentry.org/illustrious_loras_n_stuff
Useful Nodes/Extensions: https://rentry.org/8csaevw5

OP Template/Logo: https://rentry.org/hgg-op/edit | https://files.catbox.moe/om5a99.png
Anonymous No.8613153 [Report] >>8614649
>there's actually a pretty significant bake difference between adamw and adamw8bit
Anonymous No.8613160 [Report] >>8613163
>there's [headcanon]
Anonymous No.8613163 [Report]
>>8613160
lil bwo has one joke between two generals >>8613158
Anonymous No.8613176 [Report] >>8613179
>>8613090
What was your end control step for the tile upscale? I've noticed that CN only works well on artists the model knows, but on my loras it hallucinates below 0.95 end step. Idk what's happening.
Anonymous No.8613177 [Report]
>lil [headcanon]
Anonymous No.8613179 [Report]
>>8613176
0.8 denoise first pass, 0.7 denoise second pass
both passes at 0.35 strength and 0.8 guidance end
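For illustration only, the two-pass schedule above written down as data; the structure and helper here are hypothetical (not reForge's or Comfy's actual API), and the 30-step count is an assumption:

```python
# Hypothetical sketch: names and structure are illustrative, not any UI's real API.

def controlnet_active_steps(guidance_end: float, total_steps: int) -> int:
    """Number of sampling steps the tile ControlNet stays active for."""
    return int(round(guidance_end * total_steps))

PASSES = [
    {"denoise": 0.8, "cn_strength": 0.35, "cn_guidance_end": 0.8},  # first hires pass
    {"denoise": 0.7, "cn_strength": 0.35, "cn_guidance_end": 0.8},  # second hires pass
]

STEPS = 30  # assumed sampler step count
for i, p in enumerate(PASSES, 1):
    active = controlnet_active_steps(p["cn_guidance_end"], STEPS)
    print(f"pass {i}: denoise {p['denoise']}, CN active for {active}/{STEPS} steps")
```

At 0.8 guidance end the CN releases the last 20% of steps, which is where the hallucination-vs-rigidity tradeoff mentioned above plays out.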
Anonymous No.8613183 [Report]
>>8613090
This is stellar, if it's not the same style would you mind posting a box for this one as well?
Anonymous No.8613190 [Report]
>>8613090
thanks, final result looks great
Anonymous No.8613224 [Report] >>8613820
>used my image for bake
now if only I could figure out how to bake that lora again but less melty
Anonymous No.8613342 [Report]
https://files.catbox.moe/9o94qg.png
>>8603720
thought that style looked familiar as hell and it turns out i was right
https://www.youtube.com/watch?v=NPU0O5mUJbs
>>8613090
i need some new upscaling snake oil i should try that out
Anonymous No.8613358 [Report] >>8613376 >>8613392
Anonymous No.8613360 [Report] >>8613392
Anonymous No.8613362 [Report] >>8613392
Anonymous No.8613363 [Report] >>8613392
Anonymous No.8613364 [Report] >>8613376 >>8613392
Anonymous No.8613366 [Report] >>8613392
Anonymous No.8613367 [Report] >>8613371
based wildcard enjoyer?
Anonymous No.8613370 [Report]
Anonymous No.8613371 [Report] >>8613376 >>8613376
>>8613367
just started with this shit
how did i do
Anonymous No.8613372 [Report]
Anonymous No.8613374 [Report] >>8613376
Anonymous No.8613375 [Report] >>8613376 >>8613554
i have more but i'll wait for some feedback
don't even know if this is the correct board to post this
Anonymous No.8613376 [Report] >>8613379
>>8613371
>>8613375
Well, if you want me to be honest
>>8613358
>>8613364
>>8613371
>>8613374
Those are """decent""" but way too similar to each other, the style is more or less good
This is the right board but avoid spamming your images this much, especially when they have the same setting
Anonymous No.8613379 [Report]
>>8613376
ok, will do anon
Anonymous No.8613392 [Report]
>>8613366
>>8613364
>>8613363
>>8613362
>>8613360
>>8613358
that's what cats get for always sticking their stupid buttholes in people's faces
Anonymous No.8613416 [Report]
>wonder what a big tit artist style would look like applied to a loli
>it looks hideous because the artist normally draws wide shoulders
Anonymous No.8613554 [Report]
>>8613375
They are pretty good, but posting a lot of variations of the same pic is considered spamming.
Anonymous No.8613596 [Report] >>8613800
>full_body in the prompt
>gen is a cowboy shot
Anonymous No.8613738 [Report]
came doesn't want to work for me hm
Anonymous No.8613768 [Report]
>>8607584

close-up box please?
Anonymous No.8613800 [Report] >>8613868
>>8613596
Based resolution ignorer.
Anonymous No.8613819 [Report]
Any Intel Arc B580 owners? Have you tried training?
Anonymous No.8613820 [Report]
>>8613224
Mission accomplished, I think? It's not quite as sharp as I'd want but whatever
Anonymous No.8613848 [Report] >>8613852
>outlives /hdg/
We fucking did it, sisters.
gooner No.8613849 [Report]
guys where are the good gens?
Anonymous No.8613850 [Report] >>8614416
Optimizers done, now only schedulers and loss (and huber schedule) and I should be done.
Anonymous No.8613852 [Report] >>8613858 >>8613859
>>8613848
oh wtf lmao what happened
did the janny just archive it because of the /aco/spam?
Anonymous No.8613858 [Report]
>>8613852
i feel like it could be because op pic sorta looks like futa autofellatio kek
Anonymous No.8613859 [Report]
>>8613852
Seemed cursed from the get-go with the pruned related generals, highlightfag not highlightfagging, and I think some anon reported it early on as NAI shilling on irc.
Anonymous No.8613863 [Report]
/hgg/ also got archived because of the spam and reports one time, now that i think about it
Anonymous No.8613868 [Report]
>>8613800
Umm, sweaty? You can have wide shots in landscape resolution?
Anonymous No.8614013 [Report] >>8614033
Anonymous No.8614033 [Report] >>8614035
>>8614013
you still live?
Anonymous No.8614035 [Report] >>8614055
>>8614033
Never left, I decided to not post at all on the last thread
Anonymous No.8614055 [Report] >>8614067
>>8614035
the thread was that slow?
Anonymous No.8614067 [Report]
>>8614055
It has nothing to do with that, I was schizo testing, uhm, things
Anonymous No.8614416 [Report] >>8614588
>>8613850
>he tests one variable in equilibrium and thinks he's done
Anonymous No.8614573 [Report]
I like these panels but I need to get better at prooompting
Anonymous No.8614588 [Report] >>8614595 >>8614602 >>8614655 >>8614692
>>8614416
dunnyo what the hell you're saying cuh but i shall continue testing
Anonymous No.8614595 [Report]
>>8614588
They're waiting for you Freeman, in the test chamber.
Anonymous No.8614602 [Report] >>8614614
>>8614588
can you check if compass supports degenerated_to_sgd = true as an optional optimizer argument and test it?
Anonymous No.8614614 [Report]
>>8614602
Compass didn't even want to work on my end so no lol
Anonymous No.8614649 [Report]
>>8613153
>got another round of xyzs
>adamw8 looks like dogshit stylistically in comparison to adamw
lol
all these years of "it's exactly the same just faster"
Anonymous No.8614655 [Report] >>8614657
>>8614588
what are you doing anon?
Anonymous No.8614657 [Report] >>8614671
>>8614655
I just 1:1d every easyscripts setting.
Anonymous No.8614671 [Report] >>8614675
>>8614657
I mean, what is the final goal? are you training / fine-tuning a model? loras? just learning ai concepts?
Anonymous No.8614675 [Report] >>8614685 >>8614692
>>8614671
A good lora config more or less
Anonymous No.8614685 [Report]
>>8614675
oh I see, good luck!
Anonymous No.8614692 [Report]
>>8614675
>A good lora config more or less
>>8614588
>dunnyo what the hell you're saying cuh
you're about to
CammyAnon No.8614727 [Report] >>8614749
Anonymous No.8614747 [Report]
it's nai
Anonymous No.8614749 [Report] >>8614756
>>8614727
very nice angle/pose
CammyAnon No.8614756 [Report] >>8614769 >>8614778
>>8614749
What prompt can help me get that angle? I can't force it via img2img on CivitAI. Any specific prompt that could help?
Anonymous No.8614769 [Report] >>8614775
>>8614756
That's not your picture?
CammyAnon No.8614775 [Report] >>8614784
>>8614769
I did gen them + using bed invite lora. But i can't get angles right. It is random
Anonymous No.8614778 [Report]
>>8614756
you mean the pov?
I thought you'd know yourself lol, it's quite nice
Anonymous No.8614781 [Report]
wait what, this IS /hgg/ what the fuck are those posts
Anonymous No.8614784 [Report] >>8614786
>>8614775
Catbox??
Should just be lying on side, pov, under covers. Might need close-up too at 1.2 or so.
https://files.catbox.moe/54bsvc.png
CammyAnon No.8614786 [Report] >>8614791
>>8614784
https://files.catbox.moe/m2orli.jpeg does this help?
Anonymous No.8614791 [Report] >>8614793 >>8614802
>>8614786
Woah this was made on civit itself?
Anonymous No.8614792 [Report] >>8614799
https://files.catbox.moe/5myigu.png
CammyAnon No.8614793 [Report] >>8615045
>>8614791
indeed
Anonymous No.8614799 [Report] >>8614889
>>8614792
nice. does the accidental thighjob gen often with this prompt?
Anonymous No.8614802 [Report] >>8614808 >>8614814
>>8614791
So? Prompting and loras work the same way, it's the rest of the stuff that's lacking. Adetailer feels really weird, can't use controlnet or regional prompter, fewer upscaling options, etc.
Anonymous No.8614808 [Report]
>>8614802
I didn't say whatever you're insinuating, was just surprised by the metadata is all.
Anonymous No.8614814 [Report] >>8614819
>>8614802
People still think you need their 10 layers of snake oil to get a gen
Anonymous No.8614819 [Report]
>>8614814
it's more important to apply the snakeoil during lora training
Anonymous No.8614889 [Report]
>>8614799
I'm pretty sure I got lucky, but I don't see it being that hard to nudge if you added thighjob to the tags.
https://files.catbox.moe/8ad0w4.png
Anonymous No.8615045 [Report]
>>8614793
>>>/y/
Anonymous No.8615052 [Report]
Anonymous No.8615110 [Report] >>8615666
>starting to notice the weird unsymmetricality of higher batches
it's over
Anonymous No.8615264 [Report]
Anonymous No.8615304 [Report]
>>8615288
disgusting
Anonymous No.8615325 [Report]
>>8615288
kino
>series
what about idolmaster or persona?
Anonymous No.8615344 [Report] >>8615345
>>8615288
>Just did this
It's clearly ai generated anon.
Anonymous No.8615345 [Report]
>>8615344
Proofs?
Anonymous No.8615378 [Report]
>>8615288
wrong thread
Anonymous No.8615426 [Report]
ermmm....
Anonymous No.8615463 [Report] >>8615470 >>8615594
Anonymous No.8615470 [Report]
>>8615463
tomimi's
BIG
FAT
tail
Anonymous No.8615475 [Report]
>>8615422
>anal DP
B A S E D
Anonymous No.8615522 [Report]
>>8615422
More DAP
Anonymous No.8615527 [Report] >>8615550 >>8615596
I was the anon saying v4.5 was shit. I figured it out, not so shitty anymore but I do prefer 4.0 aesthetics
Anonymous No.8615550 [Report] >>8615556 >>8615557
>>8615527
>but I do prefer 4.0 aesthetics
you mean v3, right?
Anonymous No.8615556 [Report]
>>8615550
not when it hits right
v3 is just all over the place

composition wise; style wise it depends on many things
Anonymous No.8615557 [Report] >>8615567
>>8615550
No, anon just loves artifacts and blur.
Anonymous No.8615567 [Report]
>>8615557
Me too but only if it's called sovl because it's on 2005 gay cgs
Anonymous No.8615569 [Report] >>8615572
I'm confused, are you supposed to always use the adafactor scheduler with the adafactor optimizer, or is it just for some specific one? I know it doesn't work with other optimizers.
Anonymous No.8615572 [Report]
>>8615569
Oh wait it automatically does use it, never mind.
Anonymous No.8615594 [Report] >>8615607
>>8615463
Repost so early?
Anonymous No.8615596 [Report] >>8615600
>>8615527
Post a picture, sweaty?
Anonymous No.8615600 [Report] >>8615602
>>8615596
no
Anonymous No.8615602 [Report]
>>8615600
Mmmm nyes?
Anonymous No.8615603 [Report]
>came suddenly started working
ogey
Anonymous No.8615606 [Report] >>8620609
Anonymous No.8615607 [Report]
>>8615594
ah, my bad
I just dump them in one folder, lose track sometimes of which ones I already posted
Anonymous No.8615666 [Report]
>>8615110
>went back to batch 1
>suddenly optimizers are completely different
predictable but i'm glad i went back to them
Anonymous No.8615674 [Report]
>hdg got archived again
oh rmao
Anonymous No.8615676 [Report]
hdgsissies just can't catch a break. kek
Anonymous No.8615677 [Report]
this is what you get for removing NovelAI from the OP~
Anonymous No.8615678 [Report] >>8615679 >>8615691
can jannies not do that
i don't want a rapefugee flood here
Anonymous No.8615679 [Report]
>>8615678
They just tabbed over as we all do, anonie.
Anonymous No.8615681 [Report] >>8615682 >>8615684
the fuck is going on with hdg, and what the fuck is the difference with this thread?
Anonymous No.8615682 [Report]
>>8615681
We exhibit self control here and laugh at the hdg purges.
Anonymous No.8615683 [Report]
you guys were posting so much trash hdg killed itself again...
Anonymous No.8615684 [Report]
>>8615681
i assume
>janny archives previous thread for acospam and assumedly reports
>janny sees scat spam and acospam take up half the new thread
>there now exists a calm alternative so he doesn't care if it gets archived again
Anonymous No.8615685 [Report] >>8615717
If you don't have a post archived in /hgg/ #001, we don't want you. Shoo, refugees. Shoo.
Anonymous No.8615687 [Report]
whatever, I'll just post some hentai
Anonymous No.8615688 [Report]
Uh oh. That one got her boilin' n bubblin'.
Anonymous No.8615689 [Report]
Anonymous No.8615690 [Report]
janny should have taken this thread down, this one is the even lower quality hdg
Anonymous No.8615691 [Report]
>>8615678
Sorry...
Anonymous No.8615693 [Report]
Anonymous No.8615694 [Report]
adafactor's kyinda good
Anonymous No.8615695 [Report]
This thread is superior because it wasn't baked by highlightanon
Anonymous No.8615697 [Report] >>8615700
As /hgg/'s first pillar, I should inform you all of the ground rules.
1. No shitposting
2. miquellas are encouraged
3. No catfag
Keep those in mind and you'll get extended residency until your jeet general returns.
Anonymous No.8615698 [Report]
Anonymous No.8615699 [Report]
>>8615692
Box?
Anonymous No.8615700 [Report] >>8615709
>>8615697
>1. No shitposting

NYOOoOOOOooooOOOooOOO!!
Anonymous No.8615709 [Report] >>8615712 >>8615713
>>8615700
I know it's tough, anonie, but we've found here that a great way to fight the urge is autistically discussing lora baking for days on end. I believe in you.
Anonymous No.8615712 [Report]
>>8615709
Bakeranon, you should write a rentry.
Anonymous No.8615713 [Report]
>>8615709
isn't this general actually dead
Anonymous No.8615717 [Report]
>>8615685
This but unironically. I was in the screencap. In fact if you didn't post a pic with the original metadata then you need to go.
Anonymous No.8615720 [Report] >>8615721 >>8615725 >>8616175 >>8616207
what are you n-words doing this time
Anonymous No.8615721 [Report]
>>8615720
we dindu nuffin
Anonymous No.8615725 [Report]
>>8615720
Looks like mass reporting is the new black.
Anonymous No.8615738 [Report] >>8615739 >>8616921
Anonymous No.8615739 [Report]
>>8615738
I should really learn to inpaint...
Anonymous No.8615741 [Report] >>8615746
anyone tried wavelet loss yet for training?
Anonymous No.8615746 [Report] >>8615747
>>8615741
yes
Anonymous No.8615747 [Report]
>>8615746
nice
Anonymous No.8615774 [Report] >>8615809
Anonymous No.8615807 [Report] >>8615834
my old config was hilariously fucked up
>three times the lr
>takes twice as long to converge
glad i went and tested this shit
Anonymous No.8615809 [Report]
>>8615774
love sakura
Anonymous No.8615827 [Report] >>8615833 >>8615836 >>8616210
What's the best way to translate a specific style from one model to another? My first approach would be to gen a lot of images on the first model with the style I want and then train a lora with that for the second model, but I'm wondering if anyone has a better idea. And I know that training AI on AI is bad.
Anonymous No.8615833 [Report] >>8616210
>>8615827
>I know that training AI on AI is bad.
It's not if the images are properly selected
Anonymous No.8615834 [Report] >>8615851
>>8615807
gonna share your findings?
Anonymous No.8615836 [Report]
>>8615827
There are plenty of loras trained on AI to replicate specific style mixes of NAI grifters on local, and they don't have any specific issues; problems happen when you train whole models mainly on AIslop like LLMs do.
Anonymous No.8615851 [Report] >>8615862
>>8615834
i am still doing it but i can write something up
Anonymous No.8615862 [Report]
>>8615851
oh, nah, if you're still testing then please continue
Anonymous No.8615894 [Report] >>8615901
back from vacation for posting march 7th with small tits
Not sure if this function already exists somewhere, but I made a couple custom nodes for comfy since I was annoyed tard wrangling mixes with washed out colors. If anyone is interested, just drop the files into the custom_nodes folder to try them
The main useful one applies a luminosity s-curve to images, meant to be run right after vae decode (and then sent to hires pass nodes). Should be fine on defaults, just don't raise y3 unless you want to go blind (or are doing some spot color/monochrome style). Defaults may not have a good effect on images with backlighting.
luminosity s-curve file: https://files.catbox.moe/50x9cm.py
Other one was meant to do monkey color grading by a warm/cool axis but it also applies a multiplier on chroma which is some more snake oil to have fun with, use this node with low strength
color warmth grading file: https://files.catbox.moe/73o3uz.py
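For the curious, a luminosity s-curve node boils down to something like this rough numpy sketch. This is my own guess at the idea, not the code in the catbox files; the smoothstep blend and the Rec.709 luma weights are assumptions:

```python
import numpy as np

def s_curve(y: np.ndarray, strength: float = 0.15) -> np.ndarray:
    """Smoothstep-style contrast curve on [0, 1], blended with identity by `strength`."""
    smooth = y * y * (3.0 - 2.0 * y)  # classic smoothstep: darkens shadows, lifts highlights
    return (1.0 - strength) * y + strength * smooth

def apply_luminosity_curve(img: np.ndarray, strength: float = 0.15) -> np.ndarray:
    """img: float HxWx3 in [0,1]. Scales RGB by the curved/original luma ratio,
    so hue and saturation are roughly preserved while contrast changes."""
    luma = img @ np.array([0.2126, 0.7152, 0.0722])  # assumed Rec.709 weights
    safe = np.clip(luma, 1e-6, 1.0)
    ratio = s_curve(safe, strength) / safe
    return np.clip(img * ratio[..., None], 0.0, 1.0)
```

Endpoints stay fixed (0 maps to 0, 1 to 1), which is why the node is safe on defaults but can misbehave on backlit images whose subject sits in the darkened shadow band.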
Anonymous No.8615901 [Report] >>8615902
>>8615894
3 months ban?
Anonymous No.8615902 [Report]
>>8615901
kek'd
Anonymous No.8615906 [Report] >>8616019 >>8616144
>three days of testing just to bake a single lora
worth
Anonymous No.8616019 [Report] >>8616163
>>8615906
post lora
Anonymous No.8616144 [Report]
>>8615906
>three days of testing just to bake a single lora
My loras end in _XX, imperial. Can you say the same?
Anonymous No.8616163 [Report]
>>8616019
I gotta do it on the whole dataset first, but I'm starting to write up that "guide".
Anonymous No.8616175 [Report] >>8616181
>>8615720
box please
Anonymous No.8616181 [Report]
>>8616175
it's just ixy m8
Anonymous No.8616207 [Report]
>>8615720
coomin my brains out because of the marvel announcement. we're so back.
Anonymous No.8616210 [Report]
>>8615827
>>8615833
>training AI on AI is bad
it's actually always bad if you're using plain eps gens to train a ztsnr model because you're overwriting weak low snr knowledge with the specific pattern that occurs in your dataset
training on nai or noob outputs is ""okay""
training on pony is bad
Anonymous No.8616235 [Report] >>8616290 >>8616298 >>8616958 >>8622760
https://rentry.org/hgg-lora
Anonymous No.8616290 [Report] >>8616294 >>8616295 >>8616299
>>8616235
>The base model you are training on, it should be either Illustrious0.1 or NoobAI-Vpred 1.0 for most users.
Woah woah woah. What's wrong with baking on epred?
Anonymous No.8616294 [Report] >>8616308
>>8616290
illustrious is eps
no real reason to bake on eps noob
Anonymous No.8616295 [Report] >>8616308
>>8616290
its epred
Anonymous No.8616298 [Report] >>8616300 >>8616317 >>8616328 >>8616331 >>8616819 >>8622767
>>8616235
nice but at the end of the day it's still some list of incomprehensible magic spells, there's too little evidence and even some incorrect stuff, still way better than whatever is in the op
>Scale V pred loss: Scales the loss to be in line with EDM
this is wrong, it just multiplies the loss by the snr curve, nothing super fancy, but it is mutually exclusive with min snr and should not be used under any circumstances except if you're training the model using (newest) v-pred debiased estimation (which is also mutually exclusive with min snr and is basically the same thing as min snr with gamma=1 but smooth)
>Flip Augment: Flips the latents of the image while training. Causes quality degradation. Keep on.
kek
>Max Grad Norm
it prevents too large gradients (basically gradients are the proposed changes to the network weights) from throwing off the training
>Noise offset
any kind, must be disabled for vpred/ztsnr
>Multiple batches allow for quicker training but also exhibit problems with symmetricality in the end bake. Keep at 1.
what? sounds like an issue with bucketing
>Keep Tokens Separator: Unsure of a practical use for it.
it's for nai-style training when you want to separate some meta tags (and keep it in place) from other tags, for example "1girl, artist, character ||| whatever", basically keep tokens but more flexible
>Cache Latents
this also increases vram usage unless you cache it to disk
>Dropout: Used to drop out parts of the model. Causes degradation of quality. Keep off.
most likely you've used way too large numbers here, it should be below 0.1
>Prior Loss Weight
>Regularization Images: For specifying regularization images, a method of supposedly reducing overfitting. Uncommon use.
these options are specifically for dreambooth-style datasets, prior loss weight is a multiplier of the regularization image loss
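Since the weighting functions keep getting conflated: a rough numpy sketch of min-SNR-gamma as the paper usually states it. This is a generic reading, not kohya's or easyscripts' actual code, and the divide-by-(SNR+1) v-pred variant is an assumption about the common convention:

```python
import numpy as np

def snr(alphas_cumprod: np.ndarray) -> np.ndarray:
    """Per-timestep signal-to-noise ratio of a DDPM-style noise schedule."""
    return alphas_cumprod / (1.0 - alphas_cumprod)

def min_snr_weight(snr_t: np.ndarray, gamma: float = 5.0,
                   v_prediction: bool = False) -> np.ndarray:
    """min(SNR, gamma)/SNR for eps-pred; the usual v-pred variant divides by SNR+1.
    High-SNR (low-noise) timesteps get down-weighted; low-SNR ones keep weight ~1."""
    clipped = np.minimum(snr_t, gamma)
    return clipped / (snr_t + 1.0) if v_prediction else clipped / snr_t
```

With gamma=1 this collapses toward the smooth debiased-estimation-style curve the anon above describes, which is why stacking it with a separate SNR-scaled loss double-applies the same correction.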
Anonymous No.8616299 [Report] >>8616305 >>8616308
>>8616290
noobpoint5 bros?
oekakianon cultural erasure
Anonymous No.8616300 [Report] >>8616303 >>8616304 >>8616312 >>8616322
>>8616298
this is why i didn't even want to make this rmao
Anonymous No.8616303 [Report]
>>8616300
are you scared of the truth?
Anonymous No.8616304 [Report] >>8616311
>>8616300
Not him but do you see no benefit in discussion? Debate isn't supposed to be about making the other person look stupid or feeling superior but about putting our heads together to get closer to the truth. Some things you have there are accurate while others aren't. Why not update it to reflect his corrections so the whole rentry is correct?
Anonymous No.8616305 [Report] >>8616307
>>8616299
I refuse to believe that anyone who isn't him is still using that version
Anonymous No.8616307 [Report]
>>8616305
pretty sure cathag uses it because all his images are washed out
Anonymous No.8616308 [Report] >>8616359
>>8616294
>>8616295
>>8616299
Ok hold on bros, I'm freaking out now and I have to remake all my loras. What would be the difference between baking on base illu and vpred, besides the settings? Shit, my current bake just finished and btfo'd me too, I'm so sick of this shit.
Anonymous No.8616311 [Report] >>8616321
>>8616304
well more like everyone's an expert until it's time to sit down and write lol
i don't even have the code for it anymore, it's more of an AAR of my 1:1s than a proper guide, imho the most interesting parts of it are the fp8 stuff since you can say "well actually x works better for me than y" just like i did about actual parameters
Anonymous No.8616312 [Report]
>>8616300
What you can do is paste everything there somewhere others can review and edit, do some overall revisions, and end up with a final "mostly correct / generally agreed" way to train a lora
Good initial effort tho
Anonymous No.8616317 [Report] >>8616328 >>8616342
>>8616298
Nice,
I'm also surprised dropout causes quality degradation but maybe it's not as compatible with lora due to the low fidelity nature of how the lora values propagate into the resulting model on application
Another thing I'm surprised about it scale weight norms, I would've expected that to help counteract frying but I guess not
At the end of the day, I imagine the most impactful settings will be the dataset quality itself and sampler/optimizer settings anyways. What's the verdict on that and did your opinion on it change from whatever you had beforehand?
16/32 Locon AdamW cosine 1e-4?
How many steps?
Anonymous No.8616321 [Report] >>8616328
>>8616311
>well more like everyone's an expert until it's time to sit down and write
Yeah you're not wrong either. I've had my fair share of this, specifically his brand of thing where everyone will say nothing for months only to show up and tell me I'm wrong and that "this has been known since 2008" when I finally do a write-up and post results.
>well actually x works better for me than y
Yep a lot of it comes down to just pure empiric testing which is why I'm saying your contribution is good. The more info we all have, the better and the less time we all collectively have to spend testing. Ideally if we knew *why* x worked better than y then every lora would improve, but just having some solid evidence is a good start.
Anonymous No.8616322 [Report]
>>8616300
If it's any consolation I appreciate you putting something together and having to collate info from the barrage of people claiming their way is correct way. I owe you a gobby.
Anonymous No.8616328 [Report] >>8616338 >>8616342 >>8616444
>>8616298
>this is wrong, it just multiplies the loss with the snr curve
TBF I was going with the ezscripts definition.
Maybe it really should be put onto some doc for fixing up and reuploading.
>>8616321
I think the saddest part of this is that this field just doesn't do "why". Shit's a black box.
>>8616317
If you meant to @ me then yeah definitely, Locon was the straightest no-subtlety improvement here, at least for characters.
AdamW, like I said, instead of 8bit, which logically (it is 8bit!) does make it slightly worse. I'm more surprised that nobody mentioned it before, but I bet a lot of it was just "waow less vram more good" during the early days when it became the "SOTA".
Cosine did surprise me because people recommend with restarts, but yeah, the restarts version just kept more of a consistent "fry" in the images, especially relating to clothes.
Anonymous No.8616331 [Report] >>8616337
>>8616298
>most likely you've used way too large numbers here, it should be below 0.1
Dropout is a method from ancient times. Only ML practitioners working on toy models or learning from old material actually use it these days.
Anonymous No.8616337 [Report]
>>8616331
or people experimenting
I find it makes the lora work better on further merges and finetunes
Also haven't noticed any quality degradation, and it wouldn't make sense for there to be any. Just slows down the training a bit.
Anonymous No.8616338 [Report] >>8616339
>>8616328
oh yeah I replied to the wrong post lol
I guess it makes sense @ restarts: they crank the learning rate back up, but the chance you're at an actual minimum during gradient descent in high-dimensional space is incredibly minuscule, meaning you're not even getting the advantage restarts are meant to provide in the first place.
How many steps do you usually shoot for?
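For anyone following along, the schedule shape being argued about is a few lines of math; this is a generic cosine-with-restarts sketch, not necessarily easyscripts' exact implementation (warmup and per-cycle decay vary between trainers):

```python
import math

def cosine_with_restarts(step: int, total_steps: int, num_cycles: int = 1,
                         lr_max: float = 1e-4, lr_min: float = 0.0) -> float:
    """Cosine LR decay that jumps back to lr_max at the start of each cycle."""
    cycle_len = total_steps / num_cycles
    progress = (step % cycle_len) / cycle_len  # position within the current cycle, [0, 1)
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * progress))
```

With num_cycles=1 it's plain cosine; with more cycles, each restart re-heats the weights mid-bake, which is one plausible mechanism for the persistent "fry" described above.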
Anonymous No.8616339 [Report]
>>8616338
I think I'm still team epoch but I have some diverse full bakes to do now to be sure.
Anonymous No.8616342 [Report] >>8616364
>>8616317
>I'm also surprised dropout causes quality degradation
to be completely honest this is also my experience with loras
>scale weight norms
iirc it rolls back the weights that become larger than a specified threshold, this is basically only useful for stable training using high lr and small alpha, and even then you'd need to find an equilibrium
>I imagine the most impactful settings will be the dataset quality itself
as long as you're not training the model in the wrong way (by enabling noise offset for vpred/ztsnr model for example), this will always be the case, other things will only affect how quickly you train the model and possibly how much it will forget
>>8616328
>I think the saddest part of this is that this field just doesn't do "why". Shit's a black box.
you're projecting, it's just that educated people rarely visit 4chan's /h/.
Anonymous No.8616349 [Report] >>8616352 >>8617286 >>8617370
>want a specific body type that a certain artist draws but want another artist's style
>use [artist1:artist2:0.6]
>it just werks
Man, I've been using these kinds of techniques for ages but never thought about applying it to this use case until now somehow.
Anonymous No.8616352 [Report] >>8616355 >>8617370
>>8616349
sounds like me when I finally started blending hair colors. [red hair:dark brown hair:0.3] makes a nice auburn.
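[from:to:when] just swaps prompt text partway through sampling. A tiny sketch of the switch-point arithmetic as A1111-style UIs document it; treat the exact rounding as an assumption:

```python
def prompt_edit_switch_step(when: float, total_steps: int) -> int:
    """Step at which [from:to:when] swaps 'from' for 'to'.
    A1111-style convention (assumed): when < 1 is a fraction of total steps,
    when >= 1 is an absolute step number."""
    return int(round(when * total_steps)) if when < 1.0 else int(when)
```

So [artist1:artist2:0.6] at 30 steps lets artist1 fix the composition/body for the first 18 steps, then artist2 restyles the remaining 12.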
Anonymous No.8616355 [Report]
>>8616352
Oh yeah, I've been using it for color blending for quite a while. Also using negative sometimes to try and make it more consistent.
Anonymous No.8616359 [Report] >>8616362
>>8616308
Are you baking for vpred though? That's the question.
Anonymous No.8616362 [Report] >>8616367
>>8616359
So if I bake on illu and run on vpred will it really be that much worse? I'm running on 102d and all the epred bakes I did seem okay but after my latest bake I'm not so sure anymore.
Anonymous No.8616364 [Report]
>>8616342
I read up about it a little just now, and dropout making things worse makes sense. Because multiple passes of gaussian noise are added in steps, diffusion models expect/require a lot more consistency in network output than dropout was ever designed to preserve, so it's counterproductive here.
I'm inclined to agree with not using dropout now and just trusting that the layer/group norms in SDXL are doing their job (not that I know for sure they're used in ezscripts lora training, if anyone wants to confirm)
Anonymous No.8616367 [Report]
>>8616362
It's most visible in stuff like inbuilt styles on non-vpred character loras. You just get way less accuracy, and a lot of greyness because of the incompatibility. It's not bake ruining but it's worth comparing.
Anonymous No.8616369 [Report] >>8616377 >>8616378 >>8616385 >>8616391
Actually, "OP" here, I was thinking, since I use full shuffling now, would adding an artist tag for a style bake still make sense?
Anonymous No.8616377 [Report]
>>8616369
Eh fuck it I'll bake twice and be empirical about it kek
Anonymous No.8616378 [Report]
>>8616369
No there's no point.
Anonymous No.8616385 [Report] >>8616438
>>8616369
I'd imagine it wouldn't hurt, especially if there's some vestigial knowledge of the artist from danbooru so as to nudge that knowledge to the surface
If it wasn't in danbooru/after cutoff (or if the model native artist tag just fucking sucks), I'd say may as well leave it out since style loras seem to work fine without tags, why add another piece of overhead during runtime
Anonymous No.8616391 [Report] >>8616397 >>8616438
>>8616369
I've done it with the "3D" tag once by accident and it still worked like a trigger despite shuffling. Tag order doesn't seem to matter as much on illu/noob anyway.
Anonymous No.8616397 [Report]
>>8616391
3d tag is a super special case imo.
Anonymous No.8616403 [Report]
Anonymous No.8616407 [Report]
>new config also converges styles way faster than old one
That old one needed like 10 epochs more than for characters, I guess it was all the dropouts.
It's kinda funny how much it can change depending on how you test it, but I think the bulk of the old setup I grandfathered from Pony.
I'll test it a bit more and probably post it tomorrow.
Anonymous No.8616438 [Report]
>>8616385
>>8616391
Hm yeah it's still needed.
Anonymous No.8616444 [Report]
>>8616328
>Cosine did surprise me because people recommend with restarts but yeah, the restarts version just kept more of a consistent "fry" in the images, especially relating to clothes.
ime restarts do help but it depends on dataset size and lr and probably a lot of other stuff. i did some tests a while ago and settled on one restart for every 2400 examples of a concept in the dataset at batch size 1.
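That cadence is just arithmetic; a trivial sketch where everything except the 2400 figure (this anon's per-concept heuristic at batch size 1) is assumed:

```python
def restarts_for_dataset(images: int, repeats: int, epochs: int,
                         examples_per_restart: int = 2400) -> int:
    """Restart count implied by 'one restart per ~2400 seen examples of a concept'.
    `images` here means images of that concept; repeats/epochs multiply how often
    each one is seen over the run."""
    seen = images * repeats * epochs
    return max(1, round(seen / examples_per_restart))
```

For example, 200 images at 4 repeats over 6 epochs is 4800 seen examples, so two restarts under this heuristic.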
Anonymous No.8616451 [Report] >>8616457 >>8617188
Coolio, I'm gonna go on a bake binge soon.
Anonymous No.8616454 [Report] >>8616455 >>8616459 >>8616461 >>8616465 >>8616467
any videos i can use to learn?
Anonymous No.8616455 [Report]
>>8616454
This is a reading hobby, sweaty?
Unironically people here would send you to the research papers back in the day.
Anonymous No.8616457 [Report] >>8616468
>>8616451
エロ (lewd)
Anonymous No.8616459 [Report]
>>8616454
nyot really
the people making vid tutorials are making them for the common jeet denominator
Anonymous No.8616461 [Report]
>>8616454
depends if you have a programming/maths baseline
if you don't, lmao, go follow karpathy course and good luck
if you do, having chatgpt break down the components of sdxl for you until you get to the level of technical detail you're satisfied with isn't a terrible idea.
Anonymous No.8616465 [Report]
>>8616454
learn what
Anonymous No.8616467 [Report]
>>8616454
If you're completely new, then go ahead and search for any video tutorial on YouTube. I started learning by watching those and then reading some guides from the OP a year and a half ago.
Anonymous No.8616468 [Report] >>8617188
>>8616457
More like
Anonymous No.8616482 [Report]
where's the nai gens?
im tired of localslop
Anonymous No.8616483 [Report]
wrong thread, chud
Anonymous No.8616488 [Report]
where's the ironic posts?
im tired of genuineslop
Anonymous No.8616530 [Report] >>8616531 >>8616532 >>8616937
Best way to improve face quality?
Anonymous No.8616531 [Report]
>>8616530
make it >30% of the image
Anonymous No.8616532 [Report] >>8616936
>>8616530
>inpaint
Forget that. Use adetailer on a hires pass and type in the face details at 32 padding.
Anonymous No.8616715 [Report] >>8616721 >>8616912 >>8616918 >>8616921 >>8617226 >>8617820
Weird, a couple months ago gens would make me diamonds. Now they barely do anything. Are the models getting worse, have all the good genners left or has my taste just drifted?
Anonymous No.8616721 [Report]
>>8616715
Unironically take a break, gooner.
Anonymous No.8616817 [Report] >>8616830 >>8616910 >>8616912
Any other shitmix testing purveyor want to run this model through their saved prompts? I've been awfully impressed with some of the outputs, though it's not without issues. Curious if it's just good gacha on my part or what.
https://civitai.com/models/832573?modelVersionId=1677841
Anonymous No.8616819 [Report]
>>8616298
>Multiple batches allow for quicker training but also exhibit problems with symmetricality in the end bake. Keep at 1.
>what? sounds like an issue with bucketing
How
Anonymous No.8616830 [Report]
>>8616817
I refuse to use shitmixes, if I want to fry my model with shitty loras I can do it myself without loading a whole new checkpoint.
Anonymous No.8616836 [Report]
k
Anonymous No.8616910 [Report] >>8616923
>>8616817
What was it that impressed you? I threw a bunch of prompts at it and didn't see much of a difference. Didn't even include 1+29 or 102d custom because they're even more similar to base v-pred.

I think bottom line every merge is just some mix ratio of illu+noob with diluted knowledge and a different base style. As soon as you prompt an artist above 300 pics or add a lora, all the checkpoints sort of drift together.
Anonymous No.8616912 [Report] >>8616915 >>8616923
>>8616715
>he was jerking to other people's gens
ishiggydiggy
>>8616817
I wouldn't say it's better than 102dcustom but it's better than most shitmixes I've tested, primarily regarding inbuilt artist replication.
Anonymous No.8616915 [Report]
>>8616912
kek
Anonymous No.8616918 [Report]
>>8616715
whose gens did you like most bro
Anonymous No.8616921 [Report]
>>8616715
You kidding me? Look at this shit >>8615738 it's getting even more amazing, and apparently now you can animate them too? Things are getting better and better.
Anonymous No.8616923 [Report]
>>8616910
My go-to's are 291h and custom. Flip between the two as I find 291h to be soft (good or bad thing depending on the artist mix) but have more artist fidelity whereas custom has more color depth but can overpower the style of certain artists.
This shitmix I stumbled on is giving me a nice in-between of both those models which is ideal. But it wouldn't be the first time I find a model I think has potential and then dump a week later.
>>8616912
>primarily regarding inbuilt artist replication
Yeah that's what I liked about it. Gives me the fidelity of 291h but with more color depth but not diving into fried or blown contrast territory.
Anonymous No.8616936 [Report] >>8616939 >>8616940 >>8616941
>>8616532
>and type in the face details at 32 padding
Sorry I'm retarded. Can you explain what this part means?
Anonymous No.8616937 [Report]
>>8616530
higher base res and inpaint
Anonymous No.8616939 [Report] >>8616940
>>8616936
nta and he might not be too smart either since Adetailer is the same as inpainting.
But anyway when inpainting change your prompt to describe only the face (and style) not the whole scene. Adetailer gives you a separate prompt window to simplify this. Mask padding is in the settings.
Anonymous No.8616940 [Report] >>8616947
>>8616936
In the adetailer prompt type the info for her face plus your quality tags, artists tags, lora etc. I organize my prompts so it's all easy to copy/paste. For your settings put the resolution to 1024x1024 and 32 mask padding at 0.4 denoise and faces should come out much better.
>>8616939
You're so retarded since I'm telling him to automate his inpainting rather than doing it manually. You've already conceded the fact that they're the same so what's the issue?
Anonymous No.8616941 [Report]
>>8616936
booru tags that describe said character's face.
>yellow eyes, short eyebrows, scar across eye, etc
32 padding is the area the model will derive info from to change your masked zone. 32 or 64 are usually safe bets along with soft inpainting to eliminate any inconsistencies in the masked area.
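The settings from the posts above (face-only prompt, 1024x1024 inpaint resolution, 32 mask padding, 0.4 denoise) can also be sent through the webui API via the ADetailer extension. The key names below (`ad_prompt`, `ad_inpaint_only_masked_padding`, etc.) and the `args` list shape are assumptions based on the extension and may differ between versions:

```python
# Hedged sketch: the adetailer settings suggested above as an API payload
# fragment for A1111/Forge txt2img. Key names are assumptions and may
# vary by adetailer version.
adetailer_args = {
    "ad_model": "face_yolov8n.pt",         # face detector model
    "ad_prompt": "yellow eyes, short eyebrows, scar across eye, artist tags",
    "ad_denoising_strength": 0.4,          # the 0.4 denoise suggested above
    "ad_inpaint_only_masked_padding": 32,  # "32 mask padding"
    "ad_use_inpaint_width_height": True,
    "ad_inpaint_width": 1024,              # redo the face at 1024x1024
    "ad_inpaint_height": 1024,
}

payload = {
    "prompt": "your full scene prompt",
    "alwayson_scripts": {"ADetailer": {"args": [True, adetailer_args]}},
}
```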
Anonymous No.8616947 [Report] >>8616948 >>8616949
>>8616940
The post made it sound like you thought adetailer was somehow superior to inpaint beyond just finding the face automatically. A misunderstanding then, though at least we clarified things for the newfag.

I would apologize for calling you "not too smart", but now you insulted me even worse so get fucked.
Anonymous No.8616948 [Report]
>>8616947
Based grudge holding anon.
Anonymous No.8616949 [Report] >>8616951
>>8616947
Basic 4chan discourse I'm afraid, but I will apologize anyway since I see my insult was unnecessary.
Anonymous No.8616951 [Report]
>>8616949
Based bigger man anon.
Anonymous No.8616958 [Report] >>8616973
>>8616235
>https://rentry.org/hgg-lora
were you the one who was empirically testing various settings and posting those grids?
if you are, could you share the collection of grids as well?
would like to see if i would draw the same conclusions
Anonymous No.8616973 [Report]
>>8616958
mmm nyo (i deleted them)
Anonymous No.8616977 [Report]
shitty nyogen impersonator...
Anonymous No.8617086 [Report] >>8617089 >>8617093 >>8617094 >>8617098 >>8617100 >>8617118
How are you guys using BREAK?
I feel like the more I use it, the less I know. Are you supposed to organize your tokens by importance so that important things come early in each prompt segment (i.e. masterpiece, best quality are not in the same segment, but split into different segments as the starting tokens)? Are you supposed to group by concept, so you put all the clothing tags together, all the background tags together, etc.? Or do you group tokens by visual proximity, so tags close to one subject get a group, tags close to each subject's face get their own group, etc.?
Anonymous No.8617089 [Report]
>>8617086
i'm not
Anonymous No.8617093 [Report]
>>8617086
I don't. Some have said that it resets unet or something so that the very next token after BREAK gets full importance which can be helpful but I don't really use it. Organization is done with pressing enter rather than BREAK.
Anonymous No.8617094 [Report]
>>8617086
Specific use snakeoil which doesn't work the way you are intending to use it.
Anonymous No.8617098 [Report] >>8617103 >>8617113
>>8617086
>How are you guys using BREAK?
The only legitimate use case for it is to prevent getting tags split between blocks. For example, if you are at 74/75 tokens and have "foreshortening" as your next tag, it would be good to use BREAK, so the "foreshorten" and "ing" tokens don't end up in separate token blocks.
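The chunk-boundary problem described above can be illustrated with a toy chunker. CLIP prompts are encoded in 75-token blocks; a multi-token tag that straddles the boundary gets split, while BREAK flushes the current block so the tag starts a fresh one. The "tokenizer" here is a stand-in dict, not real CLIP BPE:

```python
# Toy illustration of BREAK vs a tag split across 75-token CLIP chunks.
def chunk_tokens(tags, subtokens, limit=75):
    chunks, current = [], []
    for tag in tags:
        if tag == "BREAK":          # flush: start a fresh chunk
            chunks.append(current)
            current = []
            continue
        for tok in subtokens.get(tag, [tag]):
            if len(current) == limit:
                chunks.append(current)
                current = []
            current.append(tok)
    chunks.append(current)
    return chunks

subtokens = {"foreshortening": ["foreshorten", "ing"]}
filler = ["tag"] * 74  # pretend the prompt is already at 74/75 tokens

# without BREAK the word straddles two chunks:
split = chunk_tokens(filler + ["foreshortening"], subtokens)
# with BREAK both halves land together in the next chunk:
kept = chunk_tokens(filler + ["BREAK", "foreshortening"], subtokens)
```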
Anonymous No.8617100 [Report]
>>8617086
I'm not
Anonymous No.8617103 [Report] >>8617108
>>8617098
this
everything else like "separating characters" is pure placebo
Anonymous No.8617108 [Report] >>8617110 >>8617111
>>8617103
Isn't that also how you separate adetailer face prompts, or regional prompter areas? I don't use webui/forge, just thought I read that somewhere. Might be where their confusion comes from.
Anonymous No.8617110 [Report]
>>8617108
I mean technically yeah, but you're changing its function when you use it with extensions like regional prompter. By default it simply separates clip chunks, which is only useful for preventing a tag from being split across separate token chunks
Anonymous No.8617111 [Report] >>8617114 >>8617116
>>8617108
Woah woah woah. You can separate adetailer face prompts? With regional prompter?
Anonymous No.8617113 [Report]
>>8617098
This isn't even happening in modern UIs though? If you don't put BREAKs, they only split by commas, never in the middle of a word. Well I only tested forge/reforge as they have convenient token counter, maybe comfy does it like you say.
Anonymous No.8617114 [Report] >>8617116
>>8617111
>You can separate adetailer face prompts?
no, adetailer has its own syntax for prompt splitting and uses [SEP]
Anonymous No.8617116 [Report]
>>8617114
Right, sorry I mixed those up.
>>8617111
If you have multiple faces in the pic you can give them separate adetailer prompts and it goes through them all, left to right I think. It doesn't use regional prompter, just inpaints them one by one.
Anonymous No.8617118 [Report]
>>8617086
I haver never used BREAK under any circumstances
Anonymous No.8617150 [Report] >>8617154
kusujinn https://files.catbox.moe/46yjxk.safetensors
Anonymous No.8617154 [Report] >>8617188
>>8617150
Thanks? Can't imagine how it'll look, he's gone through like five different styles.
Anonymous No.8617157 [Report] >>8617188 >>8617195
How many images do I need for a character lora?
Anonymous No.8617188 [Report] >>8617245
>>8617154
>>8616468
>>8616451

>>8617157
depends but it can work with as little as like 10
Anonymous No.8617195 [Report]
>>8617157
Fewer the better to get a strong style, it just won't generalize well without a varied dataset.
Anonymous No.8617197 [Report]
Anonymous No.8617226 [Report]
>>8616715
Had the same thing. In my case I was trying to prompt increasingly complex stuff and never noticed the subtle decrease in quality and frying step-by-step. Going back to simpler less convoluted prompts fixed it for me.
https://files.catbox.moe/qqxvkl.png
Anonymous No.8617245 [Report] >>8617248 >>8617251 >>8617254
>>8617188
Did you train this on a merge? It looks completely different on noob. Not in a bad way.
Anonymous No.8617248 [Report] >>8617251
>>8617245
Also I can't tell if "kusujinn" is supposed to be a trigger prompt or it's just picking up the model's existing knowledge from 150 booru pics.
Anonymous No.8617251 [Report] >>8617255 >>8617266
>>8617245
Nyo u can see it's baked on 1.0
>>8617248
It is
Anonymous No.8617254 [Report] >>8617266
>>8617245
Is that a character? If not what is the prompt for that hairstyle? Parted hair, wavy hair?
Anonymous No.8617255 [Report] >>8617268 >>8617298
>>8617251
>It is
working with noob's existing artist tags is inadvisable in my opinion. They are often misaligned and overtrained. It's possible to somewhat fix them with TE training, but you're usually better off training a new tag from scratch or training the style into uncond.
Anonymous No.8617266 [Report] >>8617271
>>8617251
Okay, that really brings out the /aco/face, thanks

>>8617254
Okumura Haru
I thought she was about as mainstream as it gets
Anonymous No.8617268 [Report]
>>8617255
Unless they're not misaligned. Then it's a huge benefit and you only need a little bit of training to drive it home.
Anonymous No.8617271 [Report] >>8617530
>>8617266
I haven't played persona 5 nor any persona game because I was never interested. Worse, the mainstream nature of the fifth game made me lean out not in. Maybe I'd like it if I tried it.
Anonymous No.8617279 [Report] >>8617302 >>8617651
Seems like the anon who was finetuning a vae for noob back then is now responsible for neta's lumina 2 bake? https://huggingface.co/heziiiii
Anonymous No.8617286 [Report]
>>8616349
Thanks for the tip, I should experiment more with prompt editing. You can probably replicate a lot of controlnet stuff with it.
https://files.catbox.moe/kkzluz.png
Anonymous No.8617298 [Report]
>>8617255
I mean I tested it 1:1 and it was better with, meh.
Anonymous No.8617302 [Report] >>8617325
>>8617279
Is this good or bad news?
Anonymous No.8617325 [Report]
>>8617302
I dunno, it's interesting, maybe he could give us some cool insider info
Anonymous No.8617335 [Report] >>8617349
>https://files.catbox.moe/deadsd.toml
Anonymous No.8617349 [Report]
>>8617335
>deadsd
Anonymous No.8617356 [Report] >>8617439 >>8617543
oh yeah i was gonna post that toml
it converges roughly around the 11-13 epoch but it's safer to keep 15
https://files.catbox.moe/a22hr0.toml
Anonymous No.8617370 [Report] >>8617378
>>8616349
>>8616352
What does [a:b:numerical value] mean?
Anonymous No.8617378 [Report] >>8617397
>>8617370
>What does [a:b:numerical value] mean?
tag "a" is in the prompt for "numerical value" percentage of the steps, after which, tag "b" replaces "a".

i.e. [cat:dog:0.5] - 20 steps
for the first 10 steps, cat exists in the prompt, at step 11, cat is replaced with dog
hope this helps bwo
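The schedule described above can be sketched as a small function. This matches the cat/dog example (20 steps, 0.5: "cat" for steps 1-10, "dog" from step 11); A1111 treats a value below 1 as a fraction of total steps and a value of 1 or more as an absolute step:

```python
# Sketch of A1111's [from:to:when] prompt-editing schedule.
def active_tag(step, total_steps, from_tag, to_tag, when):
    # when < 1 is a fraction of total steps; when >= 1 is an absolute step
    switch_after = int(when * total_steps) if when < 1 else int(when)
    return from_tag if step <= switch_after else to_tag

assert active_tag(10, 20, "cat", "dog", 0.5) == "cat"  # steps 1-10: cat
assert active_tag(11, 20, "cat", "dog", 0.5) == "dog"  # step 11 on: dog
```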
Anonymous No.8617397 [Report] >>8617455
>>8617378
Oh, perfect, thanks anon!
Anonymous No.8617439 [Report] >>8617464
>>8617356
is it for style or character?
Anonymous No.8617455 [Report] >>8617477 >>8617503 >>8617700
>>8617397
welcome bwo
Anonymous No.8617464 [Report] >>8617534
>>8617439
both
Anonymous No.8617477 [Report] >>8617481 >>8617491
>>8617455
Hot. Catbox?
Anonymous No.8617481 [Report]
>>8617477
Don't worry I know.
Anonymous No.8617491 [Report]
>>8617477
stealth meta bwo
still testing the lora tho
Anonymous No.8617503 [Report]
>>8617455
i absolutely hate how this only looks crisp if you don't open it in full resolution
Anonymous No.8617510 [Report] >>8617512 >>8617519
If that's fuzzy then my pics are hot garbage holy shit.
Anonymous No.8617512 [Report] >>8617515
>>8617510
more like it's a vae problem
Anonymous No.8617515 [Report] >>8617524
>>8617512
Which vae are you using?
Anonymous No.8617519 [Report]
>>8617510
some prefer their gens mushy and blurry while others want digital art level of sharpness, at this point it's just a matter of taste, (You)r taste
Anonymous No.8617524 [Report] >>8617527
>>8617515
same blurry shit as you most likely
Anonymous No.8617527 [Report] >>8617529
>>8617524
fixFP16ErrorsSDXLLowerMemoryUse_v10?
Anonymous No.8617529 [Report] >>8617544
>>8617527
>fixFP16ErrorsSDXLLowerMemoryUse_v10
isn't that just the fp16 vae
Anonymous No.8617530 [Report]
>>8617271
you're not missing out on anything good, it's shit megami tensei for the bottom of the barrel r*dditors
Anonymous No.8617534 [Report] >>8617535
>>8617464
I thought they were supposed to be different
Anonymous No.8617535 [Report] >>8617631
>>8617534
that can happen but nah it works, i baked a style and a chara and it just werks
about to rebake derpixon and skuddbutt out of curiosity
Anonymous No.8617543 [Report] >>8617546
>>8617356
oops i forgot to add a subset with the shuffle captions
Anonymous No.8617544 [Report] >>8617548
>>8617529
vae_trainer_step_90000_1008?
Anonymous No.8617546 [Report] >>8617547 >>8617798
>>8617543
oh ffs and i baked two loras including the kusujinn without it because i forgot
lmaoo
i need to rebake them
Anonymous No.8617547 [Report] >>8618450 >>8618562 >>8618634
>>8617546
here's the fixed one lel https://files.catbox.moe/uvldis.toml
Anonymous No.8617548 [Report] >>8617550
>>8617544
this one?
https://huggingface.co/heziiiii/noob_vae_test/tree/main
Anonymous No.8617550 [Report]
>>8617548
Yeah those are the only two weird VAEs I've seen people use. Otherwise I just use SDXL.
Anonymous No.8617631 [Report] >>8617738
>>8617535
Sweet, i'll test then I need to rebake a character
Anonymous No.8617647 [Report] >>8617710 >>8617736
Okay bakers how much dim should I use? I heard someone say something about overfitting on style with high dim or something. I don't remember this being a thing.
Anonymous No.8617651 [Report] >>8617657 >>8617664
>>8617279
For fucks sake, can someone please tell him to finetune flux's VAE? I don't want lumina 2 to come out with the artifacted, washed out, biggest shit that flux's VAE has, I really want the next gen model to be as less fucked as possible
Please tell him
Anonymous No.8617657 [Report] >>8617661 >>8617670
>>8617651
Post a VAE comparison. However bad Flux's might be, SDXL's is 100x worse.
Anonymous No.8617661 [Report] >>8617672
>>8617657
You can't use SDXL's VAE on lumina, how would that comparison even work
Anonymous No.8617664 [Report]
>>8617651
This might be his civit account https://civitai.com/user/li_li/images all I did was search the noob discord for hezi and this name popped up
Anonymous No.8617670 [Report]
>>8617657
nta but
>However bad Flux's might be, SDXL's is 100x worse.
not an excuse to not try to mess with it just to get better details, and then a better model. in fact, this is the best moment to do it since lumina 2 isn't finished so there is still time to improve the vae
Anonymous No.8617672 [Report] >>8617679
>>8617661
https://huggingface.co/spaces/rizavelioglu/vae-comparison
Anonymous No.8617679 [Report]
>>8617672
Oh that's pretty cool, thanks for showing it to me.
Anonymous No.8617700 [Report] >>8618297
>>8617455
interesting how this one came out with less paper/canvas texture on the outlines, I kinda liked that effect
Anonymous No.8617710 [Report]
>>8617647
I use 16
and I used 16 on my last config too
Anonymous No.8617736 [Report] >>8617740
>>8617647
don't remember this being a thing with what? SD1.5?
Dims are much different if so, because dims don't exist in a vacuum; they're proportionate to the actual model size. A 16 dim lora on SDXL is much more "powerful" than a 16 dim lora on SD1.5, hence more sensitive to frying as larger dim values are used.
You probably don't need such a big lora (could just also train/resize 32 dim down to 16 and see what method you like) unless you're training many concepts, at which point just finetune and extract.
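The "same dim is bigger on SDXL" point above comes down to parameter count: a LoRA pair for one linear layer adds dim*(d_in + d_out) weights, and SDXL's attention blocks are wider. The layer widths below are representative examples, not an enumeration of either UNet:

```python
# Rough sketch of LoRA size vs base model width at a fixed dim.
def lora_params(d_in: int, d_out: int, dim: int) -> int:
    # down projection: d_in x dim, up projection: dim x d_out
    return dim * (d_in + d_out)

sd15_proj = lora_params(320, 320, 16)    # a narrow SD1.5-style block
sdxl_proj = lora_params(1280, 1280, 16)  # a wide SDXL-style block
print(sd15_proj, sdxl_proj)  # the wider layer gets 4x the LoRA weights
```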
Anonymous No.8617738 [Report] >>8618040
>>8617631
just remember to shuffle the captions unlike me
Anonymous No.8617740 [Report]
>>8617736
>at which point just finetune and extract
Any rentrys for this?
Anonymous No.8617797 [Report] >>8617810
Anonymous No.8617798 [Report]
>>8617546
>retrained two of them without it again
am i fucking retarded or something
Anonymous No.8617810 [Report] >>8617814
>>8617797
that's literally me
Anonymous No.8617814 [Report]
>>8617810
I like your white hair.
Anonymous No.8617820 [Report] >>8617848 >>8617928
>>8616715
I've been working on getting a mix for a simpler style but something about the faces/bodies just lack the "intensity" to get me going.
I may have conditioned myself to needing prominent toned belly/ribcage/hipbones or else I won't see it as erotic
Anonymous No.8617848 [Report] >>8617850
>>8617820
>but something about the faces/bodies just lack the "intensity" to get me going.
it's called "context"
Anonymous No.8617850 [Report]
>>8617848
you're right chief, I should go back to just making unquestionably rape/ryona pics
Anonymous No.8617909 [Report] >>8617922
/hgg/, slopposting yea or nay?
https://files.catbox.moe/s38rup.png
Anonymous No.8617922 [Report]
>>8617909
as long as you don't spam it should be fine, post whatever you want just don't over do it
Anonymous No.8617928 [Report]
>>8617820
Idk just seeing my cock slide into her wet pussy in pov always gets me. Bonus points if it's a waifu from one of my character cards.
Anonymous No.8617944 [Report] >>8617947 >>8618531 >>8618531 >>8618531
Can someone help me bake this lora? I'm not sure what I'm doing wrong.
https://litter.catbox.moe/o0vbte3jla83xfx1.rar
Anonymous No.8617947 [Report] >>8617949
>>8617944
>these two feet shots
https://files.catbox.moe/flhv1q.gif
but if you mean for a style lora, give it some tag, and prune all those style descriptors like "anime coloring" because it just dilutes the output
Anonymous No.8617949 [Report] >>8617953
>>8617947
No I don't even care about feet but that anime has really good nails for both hands and feet which I wanted for the lora. I had added those tags because I thought it helped but I guess not? I'll try training it for longer.
Anonymous No.8617953 [Report] >>8617963
>>8617949
i think styles generally do need "trigger" tags for "concentrating" them, never had much luck post 1.5 without them
and yeah those tags would just dilute that baked in tag and pull on the pretty strong inbuilt knowledge
i can bake it tomorrow out of curiosity, just say what trigger tag you want
Anonymous No.8617963 [Report] >>8618450
>>8617953
>i think styles generally do need "trigger" tags for "concentrating" them
Oh yeah? What I noticed back in the pony days was that some datasets needed them but others didn't. However I haven't had much issue with styles at all until I attempted this and I've been having trouble with it for weeks.
>just say what trigger tag you want
It doesn't really matter, I just need a proof of concept and metadata. Some faggot (who's probably here now) was showing off his lora but when he posted the lora he cut out the metadata so I can't even see what he did.
https://files.catbox.moe/z4p83a.png
Anonymous No.8618040 [Report] >>8618343
>>8617738
oh yeah good call
no noise offset tho?
Anonymous No.8618044 [Report] >>8618045 >>8618048
hello, I've been using NoobAI-XL-Vpred-v1.0+v29b-v2-perpendicular-cyberfixv2 as my daily driver, snake oil be damned, is there anything "better" that has been released since or maybe a paradigm shift that popped up overnight that I missed? Thank you.
Anonymous No.8618045 [Report] >>8618048 >>8618052 >>8618270
>>8618044
Is that the one with the shit backgrounds? We've all moved on to 102d custom bro.
https://civitai.com/models/1201815?modelVersionId=1491533
>he recently release 2.5d boost
Interesting.
Anonymous No.8618048 [Report] >>8618052 >>8618215 >>8619688
>>8618044
>NoobAI-XL-Vpred-v1.0+v29b-v2-perpendicular-cyberfixv2 as my daily driver
it's not that bad, you don't need any snake oil with that one, it does suffer of very noticeable artifacting with some styles tho
As >>8618045 posted, we use 102d custom nowadays for a better and easier time
Anonymous No.8618052 [Report] >>8618067
>>8618045
>>8618048
Thank you, I'll try this model out.
Anonymous No.8618067 [Report]
>>8618052
the euler cfg++ kl optimal 28-32 step @ 1.5cfg settings they suggest give pretty neat outputs for hires pass, just don't use rescale cfg with that if you try it
Anonymous No.8618117 [Report] >>8618157
Artist mix is settling out nicely. Though it has a tendency to make girls cute and funny, I guess I did sign up for that with the artists I chose and it's not like I have reason to post that often
Anonymous No.8618134 [Report]
Anonymous No.8618138 [Report] >>8618164
Anonymous No.8618157 [Report]
>>8618117
Nai is looking good
Anonymous No.8618161 [Report] >>8618193
it's local
Anonymous No.8618164 [Report]
>>8618138
luv me anime pubes
Anonymous No.8618165 [Report] >>8618166
i hate pubes
Anonymous No.8618166 [Report]
>>8618165
fluffy pubes get a pass
Anonymous No.8618170 [Report]
tufts and fuzz are acceptable, it's those overexaggerated pussies with flaps and individual strands going everywhere, trying so hard to be "realistic" that they shoot right past it, that's where it gets appalling
Anonymous No.8618193 [Report] >>8618208
>>8618161
yeah nah
Anonymous No.8618208 [Report] >>8618330 >>8618516
>>8618193
I don't provide metadata so you just gotta believe me bwo (I guess I base gen at 896x1152 too if that even matters)
Anonymous No.8618215 [Report]
>>8618048
Cute Vibes. Cute Lize.
Anonymous No.8618254 [Report]
Anonymous No.8618270 [Report] >>8618280
>>8618045
>Interesting.
>This 2.5d boost model provides a model that deviates from flat 2D to a slightly 2.5D orientation.
Why do they do this? Reminds me of chromayume, for whatever it's worth, started off as flat and then ventured into 2.5 slop.
Anonymous No.8618280 [Report]
>>8618270
hey look, two cakes
it's not much effort to just merge in more custom udon for another model. variety is the spice of life
Anonymous No.8618297 [Report]
>>8617700
>paper/canvas texture on the outlines
that effect was due to mistagging (or rather not tagging) motion line tags on some images in the set with finer, less noticeable motion lines drawn near the outlines.
missed out on tagging a couple again which were in another folder, so back to the kitchen with this
Anonymous No.8618317 [Report] >>8618320 >>8623477
Anonymous No.8618320 [Report] >>8618363
>>8618317
i like that the penises are small
Anonymous No.8618330 [Report] >>8618340
>>8618208
Feels like wagashi but more generic.
Anonymous No.8618340 [Report]
>>8618330
yeah, it indeed started out with trying to find settings to make wagashi + a wagashi lora play nice with noob. For some reason, my setup would always fry body parts with it
Anonymous No.8618343 [Report] >>8618579
>>8618040
hm? no
Anonymous No.8618363 [Report] >>8618376
>>8618320
b-bro that's average...
Anonymous No.8618376 [Report]
>>8618363
oh nyo nyo nyo nyoooooooooooooooooooooo
Anonymous No.8618396 [Report] >>8618398 >>8618399 >>8618403 >>8618430 >>8618431
what's the difference between this and hdg? I haven't been here in months, where do I post sex gens?
Anonymous No.8618398 [Report]
>>8618396
if you have to ask, you're meant for /hdg/
Anonymous No.8618399 [Report]
>>8618396
This one's constantly near-death, so shitposters see it as too much effort for no payoff. Not worth it imo, now I'm stuck having to refresh five threads instead of four.
Anonymous No.8618403 [Report] >>8618412 >>8618412
>>8618396
less shitposting, less gens and more technical talk
Anonymous No.8618411 [Report]
Anonymous No.8618412 [Report] >>8618417
>>8618403
>>8618403
I just wanna use hentai gens to make e-girls perform things they can't or won't in real life.
like nigri, lyumos, katz and other cosplayers getting fucked out by tentacles.
it's challenging indeed for brainlet coomers like me.
Anonymous No.8618417 [Report]
>>8618412
>3dpd
may your journey to other boards be swift and final
Anonymous No.8618430 [Report]
>>8618396
Until a janny decides to start enforcing the shitposting rules on /hdg/ it's unusable to me. Maybe 1 out of 10-15 posts is genuine these days, if you exclude the spam. This thread might be slower, but it has 100 times less of the cancer that's now in /hdg/.
Anonymous No.8618431 [Report]
>>8618396
>this bait again
On the 1% chance that you're serious, just look at the previous /hdg/ thread
Anonymous No.8618434 [Report] >>8618494 >>8622170
This worked surprisingly well. Although it does have a bit of a darker bias because of the 1st ep.
skuddbutt https://files.catbox.moe/nivhgd.safetensors
Anonymous No.8618438 [Report] >>8618441 >>8618463
Ever since anons suggested to run without negatives my gens have noticeably improved on Noob models, so I'm pretty convinced on that front. But what about quality tags? I see some anons using [<quality tags>:x] or even [<quality tags>::y]. Anyone experimented with what works best?
Anonymous No.8618441 [Report] >>8618447
"curvy" tag my beloved
>>8618438
I found them to have little to no actual effect. Maybe it has some meaning if you run no artist base model but like... why?
Anonymous No.8618447 [Report] >>8618456
>>8618441
No effect? They have a crazy strong effect even when using artist tags for me on chromayumeNoobaiXLNAI_v40.
Anonymous No.8618450 [Report] >>8618531
>>8617963
>when he posted the lora he cut out the metadata
I guess you're talking about my basedbinkie lora? I can give you the toml if you want but anon's config here: >>8617547 is probably better, I was gonna use it myself for my next lora
Anonymous No.8618451 [Report] >>8618455
Reminder to never leave your prompts with a hanging tag. Always have a comma at the end.
Anonymous No.8618455 [Report]
>>8618451
wait why? is not having a comma at the end that impactful?
Anonymous No.8618456 [Report]
>>8618447
Chromayume is pretty close to base model.
Well, I guess I meant positive effect. Even in your pics they kinda screw up the look and anatomy imho. When I was testing it myself it was pretty minimal.
Anonymous No.8618460 [Report] >>8618462 >>8618494
Rebaked Kusujinn
https://files.catbox.moe/ridycu.safetensors
Anonymous No.8618462 [Report] >>8618740
>>8618460
as for the difference between shuffling captions or not, the rightmost one was not shuffled
it's pretty minimal but maybe the flatness of the shuffled ones corresponds to the style a bit more? who knows at this point
https://files.catbox.moe/cyhpct.png
Anonymous No.8618463 [Report] >>8618466
>>8618438
It's not that negs are bad, although maybe they are. It's that there's a technical quirk with noobAI, where just having anything at all in negs causes quality degradation versus leaving them completely empty. Even if it's just a single letter, or an underscore. The effect does partially go away with merges but it's easily visible on base noob.
Anonymous No.8618466 [Report] >>8618474
>>8618463
It's not NoobAI as much as reForge. Pony can get a similar effect. I think people just didn't see because everyone needed negs back then. Or maybe the dude changed the backend between Pony and Noob because it is caused by how it processes the uncond.
Anonymous No.8618474 [Report] >>8618493
>>8618466
Made me load up Pony again and check. It also goes away with merges, and everyone used source_pony in negs so I guess we never noticed.

Thing is A11/Forge/reForge pass the uncond as empty, instead of encoding an empty string and passing the output of that.
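The distinction above can be shown with a toy encoder. A real text encoder still emits BOS/EOS (and padding) embeddings for an empty string, so encoding "" is not the same signal as passing literal zeros as the uncond at CFG time. The encoder here is a stand-in with made-up numbers, purely to illustrate the two conventions:

```python
# Toy contrast of "encode an empty string" vs "pass the uncond as empty".
def encode(text: str, seq_len: int = 4) -> list:
    # fake "encoder": BOS=1.0, EOS=2.0, pad=0.5, words hashed to floats
    toks = text.split()
    emb = [1.0] + [float(hash(t) % 7) for t in toks] + [2.0]
    emb += [0.5] * (seq_len - len(emb))
    return emb[:seq_len]

empty_string_uncond = encode("")   # nonzero BOS/EOS/pad pattern
zeros_uncond = [0.0] * 4           # literal empty uncond

# classifier-free guidance sees different unconds, so outputs differ:
assert empty_string_uncond != zeros_uncond
```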
Anonymous No.8618493 [Report] >>8618496 >>8618505
>>8618474
>Thing is A11/Forge/reForge pass the uncond as empty, instead of encoding an empty string and passing the output of that.
Then, why does an empty uncond look better on SDXL models?
Anonymous No.8618494 [Report]
>>8618460
>>8618434
thanks anon(s)
Anonymous No.8618496 [Report]
>>8618493
we just don't know.gif
Anonymous No.8618505 [Report]
>>8618493
It would be kinda funny if there was some other weird quirk that was handicapping outputs. The reForge/Comfy/Classic output differences show that it's not always the exact same thing depending on how you run the model.
Anonymous No.8618516 [Report] >>8618524 >>8618537 >>8618771
>>8618208
>I don't provide metadata
Why are you proud of this?
Anonymous No.8618523 [Report] >>8619004
Anonymous No.8618524 [Report] >>8618536
>>8618516
seethe prompt thief
Anonymous No.8618531 [Report] >>8618535 >>8618543 >>8618598
>>8618450
Did you make the lora for the pic I posted? It's not really about the config, although that matters, it's about the number of pics used, the tags used, etc. Having all the information is best but since dataset is 60% of what makes a good lora, I'm still pretty much in the dark without any idea of how he did it. Also:
>>8617944
>>8617944
>>8617944
Any takers? It's a style lora that's already tagged. I thought this place had at least 10 bakers lurking around.
Anonymous No.8618535 [Report]
derpixon https://files.catbox.moe/9ws6sb.safetensors
i tried tagging the characters and herzha forms but it doesn't really want to work
>>8618531
i have it baked i need to test it
Anonymous No.8618536 [Report]
>>8618524
I'm seething that /hgg/ has started tolerating retarded attentionwhores not that I care about the metadata for artists I'm already using.
Anonymous No.8618537 [Report] >>8618539
>>8618516
Consider post natal self abortion. Nobody owes you shit, mouthbreather
Anonymous No.8618539 [Report] >>8618542 >>8618544
>>8618537
Go back faggot. I don't want you shitting up /hgg/ too. We're not your xitter fanclub.
Anonymous No.8618542 [Report]
>>8618539
just quickly posts some tired bait in the other thread and he'll be occupied for a while
Anonymous No.8618543 [Report]
>>8618531
>Did you make the lora for the pic I posted?
Nope, in that case I have no idea what you're talking about
Anonymous No.8618544 [Report]
>>8618539
Fuck yourself you worthless cunt. If you don't like people posting gens that's your problem. Again, consider suicide you fucking dipshit.
Anonymous No.8618547 [Report]
na hours
Anonymous No.8618548 [Report]
>avatarfagging doesn't exist
>desperately trying to look cool in front of strangers isn't a thing
>don't mind me I'm just posting gens
Anonymous No.8618550 [Report] >>8618560 >>8618580 >>8619579
i like asking for boxes bwos
sometimes i find some nice artists to train a lora for
Anonymous No.8618552 [Report]
>[headcanon]
Anonymous No.8618560 [Report] >>8618573
>>8618550
it's one of few things this general is good for still
Anonymous No.8618562 [Report]
>>8617547
>RuntimeError: quantile() input tensor must be either float or double dtype
just errors out for me on the fork lol
Anonymous No.8618573 [Report] >>8618577
>>8618560
eh i kinda like the technical discussions
but yeah sharing boxes is nice and comfy, who cares about the grifters
Anonymous No.8618577 [Report] >>8618580
>>8618573
If you don't want to share it's fine, but then why post here? Just keep it to yourself and enjoy your super secret recipe. In reality every other intelligent person from the SDXL creators to the forge/comfy coders could have said "I don't owe you anything" and used the tech privately but they didn't, yet your insignificant contribution is the thing that belongs to you? Retarded ladder pushers should in fact be shamed.
Anonymous No.8618579 [Report]
>>8618343
Did some tests and all the outputs were fried, so idk what was wrong with mine
Anonymous No.8618580 [Report] >>8618584
>>8618577
wrong anon bwo, i do share... all my gens have stealth like >>8618550
Anonymous No.8618582 [Report] >>8618584
>YOU MUST SHARE OR DONT POST AT ALL
what the hell? lmao
Anonymous No.8618583 [Report] >>8618584
Yes, still not sharing metadata of my insignificant gens. If they're so insignificant why do you care
Anonymous No.8618584 [Report]
>>8618580
I was speaking generally not to you specifically.
>>8618582
>>8618583
Yes faggot. Why do you have no argument? Because your position is completely indefensible.
Anonymous No.8618586 [Report]
bros why are the hdg rapefugees still here the thread got rebaked
and i know it's you because you posted the same style in hdg too
Anonymous No.8618587 [Report]
>begger thinks he's contributing by whining and shaming image posters
it's time to go back
Anonymous No.8618590 [Report]
>attentionwhore thinks he's contributing by posting pictures where he's not wanted
Go back. /hdg/ is the perfect place for you. You don't need to be here.
Anonymous No.8618593 [Report]
ywnbaj, this isn't your thread chud. you can't police what people post
Anonymous No.8618594 [Report]
>47:27
>49:04
>50:43
jej
Anonymous No.8618598 [Report] >>8618601 >>8618634
>>8618531
retagged and used kotonoha for a trigger https://files.catbox.moe/n125j8.safetensors
i don't know if this is what you want but it is a style lol
i think ai always smooths out the bloomy atmosphere if that makes sense
Anonymous No.8618601 [Report] >>8618609
>>8618598
Can you post a catbox so I can post a comparison pic? Thanks for baking it btw.
Anonymous No.8618609 [Report] >>8618634
>>8618601
https://files.catbox.moe/sprs7g.png
Anonymous No.8618612 [Report]
>if you post a picture without metadata youre attenionwhoring
rofl
Anonymous No.8618623 [Report] >>8618629 >>8618637 >>8618692 >>8618841
Any other anons obsessed with horror?
https://files.catbox.moe/tnmkhn.png
Anonymous No.8618629 [Report] >>8618646
>>8618623
Not obsessed nor do I really like horror but I do like to gen unsettling gens from time to time
Anonymous No.8618634 [Report] >>8618640 >>8618643
>>8618598
>>8618609
I had to change some stuff because the lighting wasn't coming out good for some reason. Either way I think these pics showcase the differences.

This one is the original lora. You can see how the overall style is very close to the screencaps but also how detailed the desks/walls/window are. The buildings too. However his lora always has that backlighting/side lighting effect on all pics which makes me think he just used like 20 pics he found on a wallpaper website and called it a day.
https://files.catbox.moe/l1ljla.png

This one is yours which is close but doesn't quite have the detail of his, especially with the skin and the way the desk/window/buildings look.
https://files.catbox.moe/blrns0.png

This one is mine. Idk why it's doing this lighting thing. The desk/window is a bit closer than yours to the screencaps but mine looks overbaked. I don't understand. My config works for 90% of datasets but now I'm repeatedly having issues.
https://files.catbox.moe/oiq1v5.png

Is this your config? >>8617547
Anonymous No.8618637 [Report] >>8618646
>>8618623
Oh thank you for reminding me. I enjoy horror gens with nakamura regura but yeah I should make some more of them.
Anonymous No.8618640 [Report]
>>8618634
>Is this your config
ye
Anonymous No.8618643 [Report] >>8618652
>>8618634
I wouldn't be surprised if he overbaked on some shitmerge and that somehow made it better
I know the old N64 lora that was bretty kino was baked on AOM3a or something
Anonymous No.8618646 [Report]
https://files.catbox.moe/c4v4oe.png
>>8618629
I think stumbling upon a The Ring porn parody when I was young was what did me in.

>>8618637
Regura is nice, should make another mix with them included. Karasu raven also makes some real nice monster girls, but they desperately need a lora for noob.
Anonymous No.8618652 [Report] >>8618969
>>8618643
Hmm I guess this is worth a try. He was using this model for his gens.
https://civitai.com/models/1442151?modelVersionId=1732221
Anonymous No.8618660 [Report]
the urge to bake a lora on those weird ass old mugen hentai animations
Anonymous No.8618663 [Report] >>8618679 >>8618827
oh yeah i also gotta retry that cursed game cg lora on this config
Anonymous No.8618673 [Report] >>8618679
Whats the difference between Hentai Generation General and Hentai Diffusion General
Anonymous No.8618679 [Report]
>>8618663
waow, the 2006 was awesome!!! https://files.catbox.moe/cadhtw.jpg
>>8618673
the middle name
Anonymous No.8618692 [Report] >>8618730 >>8618806
>>8618623
>be looking at scraped cgs
>stumble upon this https://files.catbox.moe/sfqyrf.jpg
uh i think you'd like this
Anonymous No.8618730 [Report] >>8618747
>>8618692
So uhh... I guess that's semen on the walls then?
Anonymous No.8618740 [Report] >>8618747
>>8618462
catbox for some of the images pls? i can't get the exact same style as yours with the kusujinn lora
Anonymous No.8618747 [Report] >>8618973 >>8619052
Fugtrup https://files.catbox.moe/an57lq.safetensors
It can kinda work natively but it's better with
>>8618730
Semen On The Walls is my new band name
>>8618740
Don't have those but I prompted it like this https://files.catbox.moe/0bpfaq.png
Anonymous No.8618771 [Report] >>8618784 >>8618835 >>8618896
>>8618516
not necessarily proud of it and was just preemptively addressing it if it was thinly veiled metadata bait. I'm not sure where you'd get the idea in the first place besides projecting/boogeyman but I can make up many reasons to not provide metadata
1) Spent hours figuring out what slight modifications to add to mix to get it to play nice and I'd prefer to not see my efforts end up being used in questionable subject matter by someone with what I would consider abhorrent tastes.
2) Workflow is incredibly schizo and identifiably mine. Ironically enough this reason is because I specifically want less attention because it's low hanging fruit to pick at that could easily follow me across styles if I always posted metadata.
3a) To spite you specifically
3b) It's closing in on a plausibly deniable style concerning which artists it's copying from, so it's a candidate for maybe reviving my xitter/pixiv accounts :^)
4) Mercury is almost in retrograde
Anonymous No.8618773 [Report]
gay
Anonymous No.8618780 [Report]
based
Anonymous No.8618781 [Report]
kek so it was to shill xis twitter
Anonymous No.8618782 [Report] >>8618801 >>8618809
don't worry skinny-tits-from-above-kun, your gens are already extremely identifiable even if you don't post any workflows :)
Anonymous No.8618784 [Report] >>8618787 >>8618796
>>8618771
My point is that a general should be a collaborative environment and there is no collaboration without sharing info whether that's pic metadata, lora metadata, configs, controlnet settings, etc. Outside of the social aspect, I don't see any purpose in having a general. We don't have themes, contests, challenges, requests, or anything else so that just leaves the typical attention whoring you'd see on /trash/ doubly so if you're posting porn of all things. Regardless I don't want to shit up the thread more than I already have but at least you're reasonable.
Anonymous No.8618787 [Report]
>>8618784
Uh no, seethe more style thief
Anonymous No.8618796 [Report]
>>8618784
Yeah that's a fair place to be coming from. I figure if there was a sanitized /b/ thread without all the wack shit/toddlercon, I'd be inclined to post there.
Mostly I've just been throwing in 2cents @ the lora training stuff here since I have a basic understanding of the underlying math and may have something to add beyond "empirically, this is observed" like the discussion over dropout. I figured I'd post some images to add activity to the thread since the most common comment concerning the thread is that it's "too slow"
Anonymous No.8618801 [Report] >>8618809
>>8618782
>skinny-tits-from-above-kun
underboobless-kun ;)
Anonymous No.8618805 [Report]
Anonymous No.8618806 [Report]
https://files.catbox.moe/ccpe58.png
>>8618692
Heh, funny since I did try to gen some fatal frame girls, sadly they don't work natively. I tried making a shiragiku lora that never turned out well, but maybe I should try to rebake.
Anonymous No.8618809 [Report]
>>8618782
>>8618801
valid even if potentially inorganic, I wonder if something I'm using is overfit for that.
I put bouncing breasts in the negatives earlier because an earlier version overfit toward breasts looking like there's unnatural pressure on them, and this reminded me to remove it at least, thanks
Anonymous No.8618827 [Report] >>8618834
>>8618663
i still don't think i can get it to work lmao
Anonymous No.8618834 [Report] >>8618846
>>8618827
Yeah it's bullshit but the civitbros might have discovered something that I need to test.
Anonymous No.8618835 [Report] >>8618845
>>8618771
just curious but is this a wagashi with worldsaboten mix?
the hair highlights and lineart reminds me of wagashi but the eyes and face in general remind me of that worldsaboten lora
Anonymous No.8618840 [Report] >>8619696 >>8619729
did you know when you're training with batch size below 32 you're fucking up every other batchnorm layer
Anonymous No.8618841 [Report] >>8619790
>>8618623
dark theme, bleak ambience, dark persona, evil smile is a powerful combination
Anonymous No.8618845 [Report]
>>8618835
funnily enough, I also thought I saw cactusman in it so I added the lora at one point and it instantly made it worse, so it's not in anything I've posted.
eyes in that one are from the kindatsu lora on civitai at low weight interacting with the other artists I'm using
Anonymous No.8618846 [Report]
>>8618834
well i am the guy who made that config
the dataset is just cursed
Anonymous No.8618865 [Report]
Anonymous No.8618867 [Report] >>8618910
why are zoomers so obsessed with fish? are catgirls seen as a boomer thing now and this is their attempt at counter culture?
Anonymous No.8618873 [Report]
weird windmill to fight desu
Anonymous No.8618877 [Report]
Ellen Joe did nothing wrong.
Anonymous No.8618879 [Report] >>8618887
think it's a coincidence there were two highly visible highly produced sharks in the past few years, the latter may be caused by the former via usual trend chasing
zoomers would've bought into any theme'd girl if it garnered enough social media attention
Anonymous No.8618887 [Report]
>>8618879
what about the kraut orca?
Anonymous No.8618896 [Report]
>>8618771
>3a) To spite you specifically
based
Anonymous No.8618910 [Report]
>>8618867
they should like crocs instead
Anonymous No.8618925 [Report]
Anonymous No.8618951 [Report]
Anyone know if there's a tag for two-tone clothing in which the front and back are different colors, rather than say the bottom being different from the top, or the colors being in a more striped pattern. In biology this happens and is called "countershading". This does exist as a tag but not with many examples. Is there a term for it in clothing?
Anonymous No.8618969 [Report]
>>8618652
Funny that. I like to test rando shitmixes and recently tried the v3 one of this. Wasn't impressed.
Anonymous No.8618973 [Report] >>8619052
>>8618747
>Fugtrup
I'll give this one a try but feel fugtrup stuff works best with Pony or those 2.5/3D focused noob shitmixes.
Anonymous No.8619004 [Report] >>8619010
>>8618523
She is cute. Do some non-/h/ with her sometime
Anonymous No.8619010 [Report] >>8619022
>>8619004
All 2hoes are whores, I can't really picture myself doing something non-h with any of them
Anonymous No.8619011 [Report]
Does base sd-scripts have any annealing schedules?
Anonymous No.8619022 [Report] >>8619023
>>8619010
The internets has damaged your mind.
Anonymous No.8619023 [Report] >>8619032 >>8619053 >>8619058 >>8619061 >>8619111
>>8619022
The real reason is that I don't think the internet needs more 2hoe images, I would rather do more of my cute obscure gacha wives tbqh
Anonymous No.8619032 [Report]
>>8619023
OK that's a good one.
Anonymous No.8619038 [Report] >>8619605
so the adafactor finetune config on the rentry leaves me with around 5gb of spare vram. How do i snakeoilmaxx with that?
Anonymous No.8619052 [Report] >>8619056 >>8619098
>>8618747
>>8618973
I thought fugtrup works inherently?
Anonymous No.8619053 [Report]
>>8619023
>I would rather do more of my cute obscure gacha wives
Unfathomably based.
Anonymous No.8619056 [Report]
>>8619052
Nah. You can kinda prompt engineer a bit with tags like 3d, realistic, blender, etc but unless your model is already slopped, it's hard to replicate faithfully.
Anonymous No.8619058 [Report]
>>8619023
the true worth of AI gen
found out someone did a good train for my VN waifu on civitai with all outfits god bless
Anonymous No.8619061 [Report]
>>8619023
>The real reason is that I don't think the internet needs more 2hoe images
this is loser talk.
you are a loser.
the internet needs more 2hu not less.
Anonymous No.8619066 [Report] >>8623477
Anonymous No.8619098 [Report]
>>8619052
>>>>It can kinda work natively but it's better with
Anonymous No.8619111 [Report] >>8619114 >>8619116
>>8619023
What about not-so-obscure Vtuubas?
Anonymous No.8619114 [Report]
>>8619111
Those as well of course
Anonymous No.8619116 [Report]
>>8619111
>not .gif
missed opportunity
Anonymous No.8619132 [Report] >>8622167
fun
not something i'm gonna use every day but fun
Anonymous No.8619145 [Report]
So like, does anyone have a sense for why exactly the model doesn't simply just perfectly know how to render anal_tail? It can definitely do it, but sometimes it wants to make the tail more of a plug or vibrator, sometimes it doesn't even place one, sometimes you get one tail and one object in the anus instead of them being the same thing. There should be way more than enough samples in the dataset. Is this just a really hard concept to get for the current amount of parameters?
Anonymous No.8619158 [Report] >>8619160 >>8619161
>assless panties have 700 posts on danbooru
>surely it must work
>prompt for it
>it works but makes the character topless
>prompt for the character's clothing explicitly
>the panties turn into normal panties
It's all so tiresome.
Anonymous No.8619160 [Report]
>>8619158
have you tried "backless panties" instead?
Anonymous No.8619161 [Report]
>>8619158
Meant backless panties not assless. It's an alias so I was thinking it while making my post.
Anonymous No.8619220 [Report] >>8623365
Anonymous No.8619223 [Report]
Ohhh is *that* who this shitposter is. I should have known.
Anonymous No.8619230 [Report]
schizo gen is over there if you're into pointlessly prolonging that kind of activity
Anonymous No.8619235 [Report] >>8623365
Anonymous No.8619236 [Report]
what is lil bro talking about
Anonymous No.8619249 [Report]
all me btw
Anonymous No.8619271 [Report] >>8619541
Why are we so dead tonight, /hdg/?
Anonymous No.8619276 [Report]
geg?
Anonymous No.8619279 [Report]
/hdg/?
and it's the weekend, go outside
Anonymous No.8619541 [Report]
>>8619271
no more than usual
if you want action go to the shitposting thread
Anonymous No.8619579 [Report] >>8619640
>>8618550
I assume you are the fabled bwoposter I was referred to, could you help? >>8619527 Thank you.
Anonymous No.8619582 [Report]
>he fell for it
Anonymous No.8619605 [Report] >>8619695
>>8619038
increase batch size or train text encoder
Anonymous No.8619640 [Report] >>8619645
>>8619579
hi bwo, will post it in a day or two, currently baking so i've not got the vram to make previews
Anonymous No.8619645 [Report] >>8620497
>>8619640
Can't wait, thanks again 'bwo.
Anonymous No.8619688 [Report] >>8619689
>>8618048
>102d custom
Could u suggest other models that are equally good as this but for realistic (dandon fuga, sakimichan, zumi) and 3d (fugtrup, slash-soft) style?
Anonymous No.8619689 [Report]
>>8619688
I wouldn't say custom is that bad at them, picrel
A lot of styles just need a higher base res and/or a lora to get them fully correct, that goes for any artist
Anonymous No.8619691 [Report]
holy mother of /aco/
Anonymous No.8619695 [Report] >>8619696
>>8619605
both of these don't really increase quality and it converges fast enough already.
Anonymous No.8619696 [Report] >>8619699
>>8619695
>>8618840
Anonymous No.8619697 [Report] >>8619708 >>8619710
blackpill on this? no jeets as authors for once.
Anonymous No.8619699 [Report] >>8619707
>>8619696
why?
Anonymous No.8619707 [Report] >>8619709 >>8619711
>>8619699
the inputs become way too noisy, batchnorm layers will fit to your dataset extremely quickly. actually, they will even if you use large bs, so the best solution is to freeze the batchnorm layers, this way you even you would even have more free vram
Anonymous No.8619708 [Report]
>>8619697
It doesn't really do anything good for the model, at least on my end.
Anonymous No.8619709 [Report]
>>8619707
>this way you would even have more free vram
shit
Anonymous No.8619710 [Report]
>>8619697
>the virgin wavelet
Anonymous No.8619711 [Report] >>8619718 >>8619988
>>8619707
>best solution is to freeze the batchnorm layers
so is there an argument for this or do i have to go on a vibecoding adventure? are batchnorm layers included in the usual train norm?
Anonymous No.8619714 [Report] >>8619719
>actually listening to 4chan advice without images
ishiggydiggy
Anonymous No.8619718 [Report] >>8619727
>>8619711
some trainers may let you select the trainable layers, but i don't think it's implemented in sd-scripts, or at least in the finetune script
you can try hacking it into https://github.com/kohya-ss/sd-scripts/blob/a21b6a917e8ca2d0392f5861da2dddb510e389ad/sdxl_train.py#L52
Anonymous No.8619719 [Report]
>>8619714
worst case I waste some electricity, best case I get better loras
Anonymous No.8619720 [Report] >>8619723 >>8619753
Fuck my lora didn't bake last night. How do I install both python 3.9 and 3.10? Installing one always breaks the other.
Anonymous No.8619723 [Report]
>>8619720
Getting uv is the easiest.
Anonymous No.8619727 [Report] >>8619733
>>8619718
So I didn't find any references to batch norm in the unet library and gpt tells me that the groupnorm32 layers that are used instead aren't dependent on batch size.
Anonymous No.8619729 [Report] >>8619733 >>8619988
>>8618840
i dont think sdxl uses batchnorm since their vae uses groupnorm
Anonymous No.8619733 [Report] >>8619988
>>8619727
>>8619729
>groupnorm32 layers that are used instead aren't dependent on batch size.
yeah that seems to be the case, actually. sdxl uses groupnorm
i swear i've seen batchnorm somewhere in sd though, maybe it was during early lora development back in sd 1 days..?
Anonymous No.8619753 [Report]
>>8619720
you'll need a virtual environment.

https://www.freecodecamp.org/news/how-to-setup-virtual-environments-in-python/
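a minimal sketch of what that looks like (stdlib `venv`; uv is the lazy option since it can also download the interpreters for you, assuming you have it installed):

```shell
# one throwaway venv per python version so the installs never touch each other.
# with uv it's roughly:
#   uv python install 3.9 3.10 && uv venv --python 3.10
# with just the stdlib:
python3 -m venv lora-venv
lora-venv/bin/python -c 'import sys; print(sys.prefix)'  # points inside lora-venv
```

activate it (`. lora-venv/bin/activate`) before pip installing your trainer deps and the two installs can't break each other anymore.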
Anonymous No.8619790 [Report] >>8619903
https://files.catbox.moe/sd0srl.png
>>8618841
dark theme and bleak ambience are working tags? I've used black theme before but never dark theme. I usually only use the "horror (theme)" + "dark" combination.
Anonymous No.8619813 [Report] >>8619814
should I just not bother with activation tags for full finetunes? it looks like most of the style makes it into uncond anyway
Anonymous No.8619814 [Report] >>8619815
>>8619813
Would you say that all style loras need activation tags or only some of them?
Anonymous No.8619815 [Report] >>8619816
>>8619814
I feel like it's mostly a preference thing and you can get good results with both, regardless of dataset.
Anonymous No.8619816 [Report] >>8619817
>>8619815
I feel like some datasets need activation tags to work while others are fine without one. It's all a black box which was my attempt to answer your question.
Anonymous No.8619817 [Report]
>>8619816
been my experience too
I think it depends on whether the model already recognizes similar styles or not
Anonymous No.8619903 [Report] >>8620140
>>8619790
e621 tags, bleak ambience doesn't seem to have many images, so it may not work well
i personally use theme since I'm specifically trying to not get the effects horror gives, opting for more of a "good girl acting aggressive" lean. there seems to be dark aura too from danbooru, which I might try
Anonymous No.8619988 [Report]
>>8619711
>>8619729
>>8619733
can confirm that I looked into a model's state dict out of curiosity just now and could not find any running_mean/running_var as you'd expect to see from a pytorch batchnorm layer
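toy sketch of why that check settles it: batchnorm computes its stats across the batch (so small bs shifts them, and the running_mean/var buffers end up in the state dict), while groupnorm only ever looks inside one sample. plain stdlib numbers, not real layer code:

```python
import statistics

def group_norm(sample, num_groups):
    # stats come from channel groups WITHIN one sample -> batch-size independent
    n = len(sample) // num_groups
    out = []
    for g in range(num_groups):
        grp = sample[g * n:(g + 1) * n]
        mu = statistics.fmean(grp)
        sd = statistics.pstdev(grp) or 1.0  # avoid div-by-zero on flat groups
        out += [(v - mu) / sd for v in grp]
    return out

def batch_norm(batch):
    # stats come per channel ACROSS the whole batch -> every sample's
    # normalization depends on what else got batched with it
    mus = [statistics.fmean(c) for c in zip(*batch)]
    sds = [statistics.pstdev(c) or 1.0 for c in zip(*batch)]
    return [[(v - mu) / sd for v, mu, sd in zip(s, mus, sds)] for s in batch]

a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 0.0, 5.0, -5.0]

# group norm output for `a` ignores the rest of the batch entirely
assert group_norm(a, 2) == [-1.0, 1.0, -1.0, 1.0]
# batch norm output for `a` changes when `b` joins the batch
assert batch_norm([a])[0] != batch_norm([a, b])[0]
```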
Anonymous No.8620087 [Report]
>lora hell
I should have stuck to my old config...
Anonymous No.8620088 [Report] >>8620098
certain "people" here should put a cannon to their head
Anonymous No.8620090 [Report]
wrong thread?
Anonymous No.8620098 [Report]
>>8620088
such as?
Anonymous No.8620113 [Report] >>8620117
How do you add vpred keys to a model again? I forgor :skull:
Anonymous No.8620117 [Report] >>8620123
>>8620113
If the .py I saved back when noob vpred was still new is correct:

from safetensors.torch import load_file, save_file
import torch

state_dict = load_file("foo.safetensors")
# UIs detect v-pred/ztsnr from the mere presence of these keys,
# so empty tensors are all you need
state_dict['v_pred'] = torch.tensor([])
state_dict['ztsnr'] = torch.tensor([])
save_file(state_dict, "bar.safetensors")
Anonymous No.8620123 [Report]
>>8620117
thx
Anonymous No.8620140 [Report] >>8620152 >>8620157
https://files.catbox.moe/uosrqa.png
>>8619903
Makes sense, but are you sure dark "theme" does anything different from "dark"? Might as well save some tokens. Dark aura is usually purple/black glow around a character.
Anonymous No.8620152 [Report] >>8620157
>>8620140
i honestly don't really know or care if it doesn't have much difference since it gets the job done used with dark persona/evil smile for my purposes. it's an e621 tag that autocomplete gives that I just take.
dark/night tags are usually what I also reach for in combination when I do low light settings and there are loras/color grading techniques if I ever wanted more (I usually don't, as I like skin color rather than everything ending up dark blue)
Anonymous No.8620157 [Report]
>>8620140
>>8620152
also, I don't believe in "saving" tokens being a worthwhile effort for the most part. The TE will get what it gets and I've always doubted that the range of outputs will be just that much different because of the amount of tokens it gets, as long as the prompt's general meaning is within the same ballpark.
I just don't think it's that sensitive in the underlying math (CLIP input -> latent space mappings) and that most people got psyop'd into caring too much about it during the early phase where they lost their minds over calling it "prompt engineering" with the mental framing that came along with it
Anonymous No.8620176 [Report] >>8620179 >>8620180
>tag doesn't get generated in every image
>up the weight
>now other tags get fucked over
>give them weight
>still other things get fucked
>download a lora
>it interferes with some parts of the image, lowering the weight makes it interfere less but also work less effectively
>there is no solution that doesn't fuck something else up or demand more manual labor (in the form of inpainting, or browsing through dozens of gens for the perfect cherrypick
God.
Is there any hope for a new good model on the horizon?
Anonymous No.8620179 [Report] >>8620183 >>8620193
>>8620176
(Dark:1.2) doesn't work for me neither
Anonymous No.8620180 [Report] >>8620193
>>8620176
Sounds like skill issue to be honest. But you could try raising CFG, it was meant for these cases originally before anime finetunes turned it into a blur/burn slider.
Anonymous No.8620183 [Report]
>>8620179
It's actually a dangerous tag on noob v-pred, with how well it works. But merges fuck up lighting very quickly.
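fwiw the weight syntax only scales the conditioning for that span, it can't force a tag to show up. rough sketch of the a1111-style parse (simplified; the real parser also handles nesting and escapes):

```python
import re

# matches "(text:1.2)" style spans; everything else keeps weight 1.0
WEIGHTED = re.compile(r"\((.+?):([\d.]+)\)")

def parse_weights(prompt):
    out = []
    pos = 0
    for m in WEIGHTED.finditer(prompt):
        if m.start() > pos:
            out.append((prompt[pos:m.start()], 1.0))
        out.append((m.group(1), float(m.group(2))))
        pos = m.end()
    if pos < len(prompt):
        out.append((prompt[pos:], 1.0))
    return out

# the weight then multiplies that span's embedding vectors before the unet
# sees them, nudging attention rather than guaranteeing the concept
assert parse_weights("(dark:1.2), 1girl") == [("dark", 1.2), (", 1girl", 1.0)]
```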
Anonymous No.8620189 [Report] >>8620194 >>8620196 >>8620200 >>8620202 >>8620211 >>8620215
If I add anything besides the trigger word to the prompt it comes out deformed. Did I overtrain? For reference
Scheduler: Cosine with restarts
Lr cycles:3
Lr rate: 1e-4
Unet lr:1e-4
3000 steps
text encoder:0
Alpha=Dim
Adam8bit
Anonymous No.8620193 [Report]
>>8620179
Doing a different thing. IIRC dark did work for me on normal vpred when I tried it in the past. It's totally possible it gets fucked on a merge.

>>8620180
My CFG is already higher than normal. At this point I've tried everything except the snake oils. You say it's a skill issue but it's a known architecture issue that the more stuff you try to pack into an image, even if it all makes sense and there isn't conflicting tags, the model will simply just choke. It likely doesn't help that SDXL has the 75 token limit and does the chunking concatenation hack.
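the chunking itself is nothing fancier than this (sketch; the real UIs also pad each chunk with BOS/EOS before encoding):

```python
def chunk_tokens(token_ids, chunk_size=75):
    # split a long token list into CLIP-window-sized chunks; each chunk is
    # encoded separately and the embeddings get concatenated afterwards
    return [token_ids[i:i + chunk_size]
            for i in range(0, len(token_ids), chunk_size)]

chunks = chunk_tokens(list(range(180)))   # stand-in for a 180-token prompt
assert [len(c) for c in chunks] == [75, 75, 30]
# chunks are encoded independently, so tags split across a boundary
# can't attend to each other -- one reason packed prompts fall apart
```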
Anonymous No.8620194 [Report]
>>8620189
>3000 steps
for what size dataset
Anonymous No.8620196 [Report] >>8620215
>>8620189
>Lr cycles:3
KEEEEEK
Anonymous No.8620200 [Report] >>8620210
>>8620189
alpha should be twice dim?
Anonymous No.8620202 [Report]
>>8620189
The original SDXL guides and our early Pony bakes used this LR with alpha=half dim, and 2K steps. By that logic, yes you did. But without examples and metadata it's just a wild guess.
Anonymous No.8620210 [Report]
>>8620200
The original purpose of alpha was that it would always be lower than dim, at most equal. That training tools even allowed a higher value was lazy on their part.
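in case it helps, alpha is nothing but a scale factor on the learned update, delta_W = (alpha / dim) * B @ A, so:

```python
def lora_scale(alpha, dim):
    # multiplier applied to the low-rank update at merge/inference time
    return alpha / dim

assert lora_scale(32, 32) == 1.0   # alpha = dim: update used as-is
assert lora_scale(16, 32) == 0.5   # alpha = dim/2: effectively halves the LR
assert lora_scale(64, 32) == 2.0   # alpha > dim: amplifies it instead
```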
Anonymous No.8620211 [Report] >>8620236
>>8620189
3k steps has always been fine for me but yes post a picture. I notice you didn't post batch size tho.
Anonymous No.8620215 [Report]
>>8620189
60 img
50 reg img
>>8620196
was told it affects the scheduler, not overall learning rate.
Now I feel kind of stupid :{
Anonymous No.8620218 [Report] >>8620222 >>8620231
is there any way to fit pagedadamw8bit into 24gb without fullbf16?
Anonymous No.8620220 [Report]
just set alpha=1 and let rngsus take the wheel
Anonymous No.8620222 [Report]
>>8620218
use adamw4bit
Anonymous No.8620230 [Report] >>8620232
if you arent using edm2 loss you dont know shit about lora training
Anonymous No.8620231 [Report]
>>8620218
use adamw2bit
Anonymous No.8620232 [Report]
>>8620230
edm is dead
Anonymous No.8620236 [Report] >>8620242 >>8620263
>>8620211
Batch size:2
and here's one
https://files.catbox.moe/ckdru0.png
Anonymous No.8620242 [Report] >>8620249
>>8620236
post one of the deformed ones
Anonymous No.8620249 [Report] >>8620258
>>8620242
https://files.catbox.moe/wcleqe.png
Anonymous No.8620258 [Report] >>8620269
>>8620249
And you tagged all her features and stuff? If so then yeah just lower the steps.
Anonymous No.8620263 [Report] >>8620286
>>8620236
Isn't batch size 2 basically like doubling your step count? That's how I treat it anyway.
Anonymous No.8620269 [Report] >>8620276 >>8620284
>>8620258
I removed tags that are in every picture like hairstyle and glasses so it's tied to the trigger word.
And Ill try 2k steps with one lr cycle and lr1e-5.
Anonymous No.8620276 [Report] >>8620320
>>8620269
So you never want to take her glasses off?
Anonymous No.8620284 [Report] >>8620320
>>8620269
anon why aren't you keeping periodic epochs/steps
Anonymous No.8620286 [Report]
>>8620263
It's not really anything, if anything it's halving, since you gotta bump the LR a bit.
Anonymous No.8620320 [Report]
>>8620276
nope
>>8620284
I trained 5 epochs and I tried all of them. they had the same issues. I'll just have to try again
Anonymous No.8620323 [Report] >>8620334 >>8620339 >>8623518 >>8623666
New config, https://files.catbox.moe/kg3ivs.toml what do you think?
Anonymous No.8620330 [Report]
Oh no not another one
Anonymous No.8620334 [Report]
>>8620323
Don't use this, it fries your GPU
Anonymous No.8620339 [Report] >>8620340
>>8620323
I thought anon said above 1024 resolution caused issues? Does this work for you?
>DORA
?
Anonymous No.8620340 [Report] >>8623444
>>8620339
you can go a little higher without issue, but there won't be any benefits in detail. I just have it set to 1152*1152 here to somewhat combat the oversharpening that lanczos causes when downscaling very large images.
Anonymous No.8620376 [Report] >>8620378
Why do artist mixes sometimes make those weird ass fucking gremlin things in the background
Anonymous No.8620378 [Report] >>8620379
>>8620376
less common than you think since I have no idea what you're talking about
post gen
Anonymous No.8620379 [Report] >>8620384
>>8620378
sometimes i get this typa shit and it's always in mixes lmao https://files.catbox.moe/5m0iyi.png
Anonymous No.8620384 [Report]
>>8620379
yeah never seen that before
try prompting your artists like artist:zankuro to avoid their names leaking into something else
Anonymous No.8620387 [Report] >>8620391 >>8620394 >>8620488
Is 102d better than vpred10? I tried it out and feel like it's a bit more coherent but also less capable of some art styles and concepts.
Anonymous No.8620391 [Report]
>>8620387
It's a shitmix, so it will be more stable and will generate more "aesthetic" (ie slopped up) images at the expense of being able to replicate styles.
Anonymous No.8620394 [Report]
>>8620387
Sounds about right. Like every merge, it dilutes the base model's knowledge somewhat in exchange for a style bias. Long as that bias doesn't conflict with what you're trying to do; and you aren't relying on 100pic danbooru tags knowledge, it's pretty good.
Anonymous No.8620424 [Report] >>8620434
Trying to fix hands so I'm using meshgraph hand refiner but I keep getting
>ModuleNotFoundError: No module named 'mediapipe'
I install it with pip install mediapipe --user in my comfui folder directory and it still gives me an error after a restart. Any idea how I can fix this?
Anonymous No.8620428 [Report] >>8620430 >>8620747
anyone know how i can get a plain text of all booru artist and character tags?
Anonymous No.8620430 [Report] >>8620431
>>8620428
Ask our ai overlords to write you a script that scrapes them from the API
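something like this gets you started (danbooru API, category 1 = artist / 4 = character; the params are from their docs so double-check, and keep the sleep so you don't get 429'd):

```python
import json
import time
import urllib.parse
import urllib.request

BASE = "https://danbooru.donmai.us/tags.json"
CATEGORIES = {"artist": 1, "character": 4}  # danbooru tag category ids

def page_url(category, page, limit=1000):
    params = {
        "search[category]": CATEGORIES[category],
        "search[hide_empty]": "yes",   # skip tags with zero posts
        "limit": limit,
        "page": page,
    }
    return BASE + "?" + urllib.parse.urlencode(params)

def scrape(category, max_pages=1000, delay=1.0):
    names = []
    for page in range(1, max_pages + 1):
        with urllib.request.urlopen(page_url(category, page)) as r:
            batch = json.load(r)
        if not batch:          # empty page -> ran out of tags
            break
        names.extend(t["name"] for t in batch)
        time.sleep(delay)      # stay under the anonymous rate limit
    return names

# usage: open("artist_tags.txt", "w").write("\n".join(scrape("artist")))
```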
Anonymous No.8620431 [Report]
>>8620430
wont i get blocked from too many api calls?
Anonymous No.8620434 [Report] >>8620443
>>8620424
>Trying to fix hands so I'm using meshgraph hand refiner but I keep getting
The fuck is this? some comfyui node or something?
Anonymous No.8620443 [Report] >>8620452
>>8620434
Yes. Do you guys use something else to fix hands?
Anonymous No.8620452 [Report] >>8620502
>>8620443
Yeah I use cyberfix/wai/102d. Otherwise I just inpaint sketch.
Anonymous No.8620488 [Report]
>>8620387
Give 291h a shot. I flip between it and 102dcustom.
Anonymous No.8620497 [Report] >>8620501 >>8620570
>>8619645
hi bwos, posted the bakes
RadishKek: civitai.com/models/1662074
Aza/Manglifer: civitai.com/models/1662450
Anonymous No.8620501 [Report] >>8620505
>>8620497
>sdxl_vae_mod_adapttest_01.safetensors
huh, what's this?
Anonymous No.8620502 [Report] >>8620515
>>8620452
Best inpainting tutorial?
Anonymous No.8620505 [Report] >>8620513 >>8620585
>>8620501
it's a modified SDXL VAE from an anon in /hdg/ >>8618360
it's a little sharper than the noob VAE and doesn't have the slight blue tint of xlvaec_c0.
Anonymous No.8620513 [Report] >>8620527
>>8620505
hmm, did you train those loras with it? sounds like it shouldn't work otherwise, or else it needs comfy and anon's custom node to work
Anonymous No.8620515 [Report]
>>8620502
https://rentry.org/fluffscaler-inpaint
Anonymous No.8620527 [Report]
>>8620513
nope i did not, i just tried using it for genning and kinda liked the effect.
it gave me the same results as the c0 vae but without the tint, so i'm happy with using it as-is
Anonymous No.8620532 [Report] >>8620625
anyone here still have oekaki anon's
>slantedsouichirousep26-step00000132.safetensors
lora?
I accidentally deleted mine a while back. tried looking in the rentry and archives but couldn't find it
Anonymous No.8620560 [Report]
vibin'
Anonymous No.8620563 [Report]
what is lil bro vibin' about :skull: :skull: :thinkingemoji:
Anonymous No.8620570 [Report] >>8620599
>>8620497
nice bwo,
by any chance would you be willing to share a "fail"/overbake of the aza loras? Wanted to see what outputs I could get from one of them
Anonymous No.8620585 [Report] >>8620599
>>8620505
if it works for you that's fine, but I'm still trying things out and wouldn't recommend using it. the trained model actually produced worse results now that I tested it more in enc+dec (very blurry rather than overcontrasted), and it's only the encoder that I trained, so it should actually be the same as the original SDXL VAE when used just for making gens
Anonymous No.8620599 [Report] >>8620609 >>8620615
>>8620570
hi bwo, do you have any specific version in mind? i'm not sure if i've retained the original fails (the more recent ones might still be in the recycle bin, but i'm not sure)

>>8620585
alright, thanks for the heads up.
i only tested it against the noob vae and the c0 one i was using. Found it sharper than the noob vae (might just be that the noob vae is ever so slightly blurrier, will have to test against the original sdxl vae as a control). looking forward to the results of your vae training experiment!
Anonymous No.8620609 [Report] >>8620962
>>8620599
I think I liked how >>8615606 seemed to turn out. If there are ones with more steps from that attempt, that would be cool instead too.
Also, as an aside observation on the lora, it's hilariously capable at getting legible english text moans to come out.
Anonymous No.8620615 [Report]
>>8620599
>slightly blurrier
thats probably the case because the one tuned decoder vae i tried from civit clearly didnt use a perceptual loss like lpips and was blurrier

my idea might not go anywhere, mainly wanted to demonstrate that there might be a way to upgrade from the 8x compressed vae, to a 4x compressed one without that much training
the encoder needs to be adapted, the decoder also shits itself a little for some reason when its 2x upscaling is removed (the VAE actually outputs coherent stuff even without training it, just a little artifacted), and there needs to be a hopefully minor finetune for the higher res since it's the equivalent of generating at 2x higher res than usual
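To make the "equivalent to 2x higher res" point concrete, this is the shape arithmetic being described (pure bookkeeping, no actual VAE code):

```python
def latent_side(pixels: int, compression: int) -> int:
    """Side length of the latent a VAE produces for a square image."""
    assert pixels % compression == 0
    return pixels // compression

# Stock SDXL VAE compresses 8x per side: a 1024px image -> 128x128 latent.
assert latent_side(1024, 8) == 128
# Dropping the decoder's final 2x stage gives a 4x VAE: the same 1024px
# image now becomes a 256x256 latent...
assert latent_side(1024, 4) == 256
# ...which is exactly what the UNet would see genning at 2048px on the
# stock 8x VAE, hence the need for a higher-res finetune.
assert latent_side(1024, 4) == latent_side(2048, 8)
```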
Anonymous No.8620622 [Report] >>8620769 >>8621300
Can a kind anon prompt a blowjob where you're sitting in the car (maybe driving) and the girl leans over from the side to give you a blowjob? Can't seem to figure this out.
Anonymous No.8620625 [Report]
>>8620532
apparently even I don't have it anymore (and I nuked it off mediafire since it's a standard lyco and I don't think it was actually super great to begin with). And I never really tried doing a fatter/more "recent" config run of it since I still had PTSD from running that dataset on pdxl.
I did still have the config and I'm pretty sure the dataset has been untouched since then, so here's a (sort of) reprint
pixeldrain com/u/quBr9bVC
if someone else still has the original, that one is technically still going to be different due to the whole process being random, because ML is fun like that. but I did at least check through a few output steps and 154 was (still) the best, though it's also a bit fried and bleeding at the edges. that's probably fine-ish when used as a mix at lower weight and/or on derivative models instead of illustrious 0.1
Anonymous No.8620683 [Report]
Anonymous No.8620747 [Report]
>>8620428
Get one of the csvs that people already scraped, like the one from the autofill extension, and then ask gpt to give you a conversion command.
I posted an updated one more fit for Noob, either here or in hdg, I don't remember.
Anonymous No.8620769 [Report] >>8620794 >>8620907
>>8620622
I think that would imply you can get a good car interior pov in the first place
t. tried
It's probably somewhat bakeable though
Anonymous No.8620794 [Report] >>8620853
>>8620769
>managed to get a whole 9 pics for a dataset
yeah i don't think so
Anonymous No.8620853 [Report]
>>8620794
isn't this the perfect opportunity for that difference learning lora meme https://github.com/hako-mikan/sd-webui-traintrain
Anonymous No.8620870 [Report]
Anonymous No.8620899 [Report]
Anonymous No.8620907 [Report]
>>8620769
Even if it's not car interior. Just sitting on the couch while the girl sucks you off from the side.
Anonymous No.8620950 [Report]
Detail daemon is goated

https://files.catbox.moe/8xz5g8.png
Anonymous No.8620962 [Report] >>8621388
>>8620609
sorry bwo, it seems that i have already discarded the earlier failbakes; don't really have anything more than the current version. from my test logs for that version, there wasn't a better step past 1100; the losses were fairly spread out into 2 groups, and the next minimum at 1900 did have good style but also fried eyes. (resolved by adding a set of face crops in subsequent versions)
>hilariously capable at getting legible english text moans
there's a decent number of images with english and korean text in the dataset - it is kinda funny when it happens
Anonymous No.8621028 [Report]
Anonymous No.8621029 [Report]
Anonymous No.8621030 [Report]
Anonymous No.8621099 [Report] >>8621104
Anonymous No.8621104 [Report]
>>8621099
grab me a can of ⊙y when you're done, nee-chan
Anonymous No.8621134 [Report] >>8621159 >>8621234 >>8621347 >>8621383
Anyone know a way to get midget sized subjects? Not loli, and not shortstack, just a small person, though prompting loli honestly doesn't seem to help either. It feels like the model just has a poor sense of how to size characters relative to the environment.
Anonymous No.8621159 [Report] >>8621385
>>8621134
Try some variations of [chibi::x] (I assume you're using webui; if not, there's probably a comfy node that does the same prompt edit) where x is the number of steps. The idea is you want to lock in a midget-shaped human blob before it starts trying to apply anatomy to it.
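For reference, webui's prompt-editing syntax is [from:to:when], where when is either a step count or a 0-1 fraction of total steps (the tags here are just placeholders):

```
[chibi::8]         drop "chibi" after step 8
[chibi::0.3]       drop it after 30% of total steps
[chibi:petite:8]   swap "chibi" for "petite" at step 8
```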
Anonymous No.8621234 [Report]
>>8621134
find an artist that does it
Anonymous No.8621300 [Report] >>8621510
>>8620622
It seems gachable enough depending on how accurate you want the steering wheel, dashboard and windshield to be. Best bet is probably to gacha for something like picrel and then go fish in img2img for a better version and maybe inpaint the rest.
Anonymous No.8621347 [Report]
>>8621134
perhaps try goblin but without pointy ears and green skin
Anonymous No.8621383 [Report] >>8621478
>>8621134
Tag should be "petite" according to the wiki. First thing to try when a prompt doesn't work is to go back to noob v-pred 1.0 and see if it's your loras or shitmerge doing it. So I did, and it didn't help at all. Also an interesting bias on the quality tags; the style prompt was (anime screenshot:0.1)

Flux or Chroma can do it way better, just img2img or controlnet the style afterwards into something more pleasing.
Anonymous No.8621385 [Report]
>>8621159
nta but I guess it's more about the scale of the girl relative to the background. Body shape is pretty easy in my experience.
Anonymous No.8621387 [Report] >>8621405
Can someone with a github account bother machina to add full finetuning support to ezscripts? I need my edm2 snakeoil
Anonymous No.8621388 [Report] >>8622349
>>8620962
Sounds good then, thanks for the loras!
How big did the dataset get when counting face crops as their own images vs just original images?
I've also done indiscriminate face+upper body crops, but I'm wondering what balance you went with
Anonymous No.8621405 [Report]
>>8621387
sd-scripts uses different options for lora training, it's not as simple as just "adding" support
Anonymous No.8621478 [Report]
>>8621383
Interesting that what changes is the character's size in pixels, but the background remains the same. I wonder if there are some background tags that can influence this.
Anonymous No.8621510 [Report]
>>8621300
Thank you!
Anonymous No.8621602 [Report] >>8621605 >>8621608 >>8621609 >>8621690 >>8621840
>102d absolutely zaps all the sovl out of my lora
God damnit
Anonymous No.8621605 [Report] >>8621612
>>8621602
>102d
It's a shitmix with a heavy butiful smooth pastelmix henti aesthetic bias, why wouldn't it suck all the soul out of a rough style lora?
Anonymous No.8621608 [Report] >>8621612
>>8621602
compared to what, base?
Anonymous No.8621609 [Report]
>>8621602
it sadly is very overpowering on the last few steps
Anonymous No.8621612 [Report] >>8621690
>>8621608
left is 29+v1
>>8621605
I guess, sometimes it produces kino like picrel
Anonymous No.8621638 [Report] >>8621650 >>8621664 >>8621684 >>8621711
how & why has no one come up with anything better than my v29+1.0 vanilla shitmerge yet?
Anonymous No.8621650 [Report] >>8621666
>>8621638
h-hot... now do the spitroast.
>v29+1.0 vanilla
>better
Because that one sacrifices backgrounds and all subsequent merges wanted to preserve those.
Anonymous No.8621664 [Report]
>>8621638
what is better at?
Anonymous No.8621666 [Report] >>8621674
>>8621650
>h-hot... now do the spitroast.
that's an old gen actually (picrel as well)
>v29+v1.0 sacrifices backgrounds
i'm not a background fag myself, but is that really the case? i remember doing tests on backgrounds and finding the merge better than either v29 or v1.0. if i had to point out a flaw, it'd be that sometimes the picture falls apart for no apparent reason (or that could be a skill issue with upscaling on my part, but whatever)
Anonymous No.8621674 [Report] >>8621692
>>8621666
>remember that the merge is better than either v29 or v1.0.
I'm inclined to agree but it absolutely shreds backgrounds. Might be loras in general (every lora I've used) but they're simple at best and nonsensical at worst. 102d at least makes the character look like they're in the environment. It also seems very sensitive to schedulers and they completely change the style (which was the whole point of using the model).
>666
Uh oh... I will disregard everything you said then.
Anonymous No.8621684 [Report] >>8621692 >>8621812
>>8621638
v29+v1.0 is decent but more often than not I had these little artifacts everywhere no matter what I did
Anonymous No.8621690 [Report]
>>8621602
>>8621612
I've also found that 102d seems to have issues with adding canvas/paper texture to lines. I've been trying simple merges with it with some success, but it does still get annoying
I've found merging with the bluemint version to be the most effective for these styles specifically
Anonymous No.8621692 [Report] >>8621812 >>8621813
>>8621674
>sensitive to schedulers
>>8621684
>little artifacts everywhere
that's what i mostly meant by "falling apart": for example, if you apply a lora (especially one trained on eps) the images become, like... i don't know, muddy? I prefer not to use loras either way.
Anonymous No.8621711 [Report]
>>8621638
>these aren't my glasses
Anonymous No.8621812 [Report] >>8621813
>>8621684
>>8621692
>for example if you apply a lora (especially trained on eps) the images become, like... i don't know, muddy?
Hmm I've been having this problem a lot too and this might be the cause but I was having it on 102d. Very strange.
Anonymous No.8621813 [Report] >>8621821
>>8621692
>>8621812
Why are you using eps loras on a vpred model?
Anonymous No.8621817 [Report]
Anonymous No.8621821 [Report] >>8621822
>>8621813
>I prefer not to use loras either way.
Anonymous No.8621822 [Report]
>>8621821
my baderino
Anonymous No.8621838 [Report]
whats up, naisan? Scared of a little... sovl?
Anonymous No.8621840 [Report] >>8621849 >>8621855 >>8621861
>>8621602
Try it with 291h. Normally what I flip to when custom smooths out my rough/scratchy artist mixes.
Anonymous No.8621849 [Report] >>8621855 >>8621958
>>8621840
desu pleasantly surprised by it actually. 1+29 on the left and 291h on the right, would've expected a merge like 291h to do much worse.
Anonymous No.8621855 [Report]
>>8621840
>>8621849
Alright 'nonnie, you've finally convinced me to give this a go.
Anonymous No.8621856 [Report] >>8621859 >>8621949 >>8622109
Any tags or loras that give the scene consistent lighting? It feels like a lot of the time, if you ask for a blue or green or whatever theme, it will just change the background and leave the girl shaded normally.
Anonymous No.8621859 [Report]
>>8621856
try "[color] theme, high contrast"
Anonymous No.8621861 [Report] >>8621958
>>8621840
What is the full name of this shit? I hate all these abbreviations.
Anonymous No.8621949 [Report]
>>8621856
literally just use a color balance node/extension on the base res gen, upscale it and send it to i2i
Anonymous No.8621958 [Report] >>8622156 >>8622181
>>8621849
I like it a lot as it's VERY similar to 29+1; that's what the anon who merged it was going for, just with a bit more stabilization so no loras or other snakeoils are needed and it doesn't go schizo with complicated prompts. I don't know shit about model merging, but the anon who baked it was supposed to do some updated block merge to it before dying. No idea what it would have accomplished, but even so, great little shitmerge.
>>8621861
>https://civitai.com/models/1301670/291h
Anonymous No.8622109 [Report]
>>8621856
Sounds like a shitmix issue, [color] theme works perfectly fine on base noob.
Anonymous No.8622114 [Report]
anyone noticed how greyscale gens have worse lineart than colored ones?
Anonymous No.8622150 [Report] >>8622156 >>8622193
is 29+1 discussed here https://civitai.com/models/1313975?modelVersionId=1483194 or what?
Anonymous No.8622156 [Report] >>8622168
>>8622150
This one >>8621958 but the one you linked is very similar. Seeds on both models gen just about the same shit.
Anonymous No.8622167 [Report] >>8622170
>>8619132
box(es)?
Anonymous No.8622168 [Report] >>8622193
>>8622156
That's 291h isn't it, anon was talking about "v29+1" at first
Anonymous No.8622170 [Report] >>8622173
>>8622167
Oh I don't have them anymore but it was genned with >>8618434
Anonymous No.8622173 [Report]
>>8622170
thank you
Anonymous No.8622181 [Report] >>8622187 >>8622193
>>8621958
>https://civitai.com/models/1301670/291h
>v29+v1.0
>+
>...
>illPersonalMerge_v30
>noobieater_v30
>obsessionIllustrious_v3
>obsessionIllustrious_vPredV10
>catTowerNoobaiXL_v15Vpred
>noobaiCyberfixV2_10vpredPerp
>EasyFluffXLVpred
>QLIP
>betterDaysIllustriousXL_V01ItercompPerp
>betterDaysIllustriousXL_V01Cyber4fixPerp
>betterDaysIllustriousXL_V01CyberillustfixPerp
>who knows what number of loras
Anonymous No.8622187 [Report] >>8622190
>>8622181
I wonder what the people who merge this kind of slop are trying to achieve.
Anonymous No.8622188 [Report]
holy shit it even got lucereon through obsession in it
https://civitai.com/models/818750
Anonymous No.8622190 [Report]
>102d is slopmerge
>now 29+1 is the real deal
>>8622187
I mean it's ultimately just throwing shit at the wall and seeing what sticks
I doubt many people were randomly merging shitty anime model layers with 3dpd and thinking it would be the primary weeb model for a year during early 1.5
Anonymous No.8622193 [Report] >>8622214
>>8622168
Ah good catch.
>>8622150
29+1 was only available through torrent because some anon didn't want to piss off LAX by making it public, or some convoluted shit like that. I think some anon also put it up on mega a few days ago but said he'd be taking it down after a day or two.
>>8622181
Nigga, he could have added expressiveH and All Disney Princess XL LoRA Model from Ralph Breaks the Internet. As long as it doesn't produce slop gens, why the fuck should you care outside of autism?
Anonymous No.8622203 [Report]
good morning sir
Anonymous No.8622206 [Report]
/hdg/ is the other tab.
Anonymous No.8622207 [Report]
>gyet fomod into trying it again
>both are shittier at my artists than custom
whew
Anonymous No.8622208 [Report]
>/hdg/ is the other-BRAAAAAAAAAAAP
Anonymous No.8622214 [Report]
>>8622193
Oh yeah it's pretty easily findable but here
magnet:?xt=urn:btih:1a8e80eb5fc2e1dd42ad7f68e13d1fe73b9d8853&dn=NoobAI-XL-Vpred-v1.0%2bv29b-v2.safetensors
Anonymous No.8622219 [Report]
Anonymous No.8622312 [Report]
>>8622294
>>>/aco/
Anonymous No.8622328 [Report]
>>8622294
pretty based ngl
Anonymous No.8622346 [Report] >>8622348 >>8622756 >>8622865
Should I upscale first then use face detailer or face detailer first then upscale? Or does order not matter
Anonymous No.8622348 [Report] >>8622351
>>8622346
You should inpaint details like the eyes and mouth after upscaling; stop using that automated shit (unless you're a cumfy user, in which case you really don't have much of a choice)
Anonymous No.8622349 [Report]
>>8621388
the dataset (~150 images) is around
- 25% augmented (face + upper body crops with some rotation)
- 25% uncensored images
- 50% from danbooru / gelbooru (mix of censored / uncensored, with text / no text)
Anonymous No.8622351 [Report]
>>8622348
I do use comfy for most of the gen but bring it into the webui for the inpainting sections.
Anonymous No.8622352 [Report]
me and my tomboy gf
Anonymous No.8622433 [Report] >>8622455
What are the implications of excluding a tag if it's not in the image vs using "no <tag>"? Does "no <tag>" teach the model to always gen something UNLESS you give it the "no" tag?
Anonymous No.8622455 [Report] >>8622457 >>8622463
>>8622433
"no <tag>" generally doesn't work and instead gives you the <tag> through CLIP leakage. Unless "no" is part of the danbooru tag, such as "no outline", "no headwear", no humans", etc.
Anonymous No.8622457 [Report]
>>8622455
>no outline
bad example, it's "no lineart"
Anonymous No.8622463 [Report] >>8622471
>>8622455
Yes, but what does this mean for model/lora training? Do boorus having these "no <tag>" tags fuck up the model in some way when you consider the unconditional tag dropping? What does the model learn when you explicitly tag something that is not in the image?
Anonymous No.8622471 [Report]
>>8622463
It probably still ends up a positive association. "no lineart" is a style similar to watercolor with some /aco/ bias, if you don't prompt it with anything else. "no pants" is basically the same as "bare legs", etc.
Anonymous No.8622473 [Report] >>8622553 >>8622600 >>8622610 >>8622659 >>8622757
>the loras I have to use for my super specific fetishes also ruins the ability to prompt for great backgrounds
Sigh.
Anonymous No.8622497 [Report]
Anonymous No.8622553 [Report]
>>8622473
Depending on the fetishes, you may be able to schedule the lora to only the early steps, apply it to only the center of the image, or remove it when upscaling.
Anonymous No.8622600 [Report]
>>8622473
>backgrounds
ishiggydiggy
Anonymous No.8622610 [Report]
>>8622473
>backgrounds
autism
Anonymous No.8622659 [Report]
>>8622473
tragedy
Anonymous No.8622683 [Report] >>8622707
>come upon an artist mix you like, that produces good results with a wide variety of angles and poses
>except it looks like shit with certain kinds of clothing
It never ends...
Anonymous No.8622707 [Report]
>>8622683
this but pussies and dicks
Anonymous No.8622708 [Report] >>8622719 >>8622885
is there a dress like this irl?
Anonymous No.8622719 [Report]
>>8622708
I don't think piercings are made with steel.
Anonymous No.8622726 [Report] >>8622730 >>8622732
do you like slow corruption?
Anonymous No.8622730 [Report]
>>8622726
you mean like getting a shy girl slowly turn into a slut?
Anonymous No.8622732 [Report]
>>8622726
you mean like compressing the same pic in jpeg multiple times?
Anonymous No.8622740 [Report]
i mean like in the op of the thread we stole the name of
Anonymous No.8622747 [Report]
We still don't have a new logo btw
Anonymous No.8622756 [Report]
>>8622346
Do it both times. Once to have a decent face as a base and the second to really get something good out of it.
Anonymous No.8622757 [Report]
>>8622473
Use wai.
Anonymous No.8622760 [Report] >>8622762 >>8622767
>>8616235
This shit is garbage.
>Base model: The base model you are training on, it should be either Illustrious0.1 or NoobAI-Vpred 1.0 for most users.
No you dumbass. It should be whatever model you're using, or the predominant base model.
The argument for "training on illustrious 0.1 for compatibility :)" is dumb as fuck and horrible advice. Its compatibility is SHIT and it washes out on any model that isn't a mainline illustrious model, and if you're using v1/1.1/2 you shouldn't be training on 0.1 because of the resolution mismatch. You should also, unfortunately, be training at 1536x1536
>Scale V pred loss: Scales the loss to be in line with EDM, causes detail deterioration. Not recommended.
You uh. You kind of need that enabled to train on vpred, you know.
>Width: Keep at 1024 for Illustrious/Noob. Higher values do not increase bake quality.
It doesn't increase "quality" in a general sense, but you should be matching the base resolution of the model you're training, which 1024 is not for illustrious 1, 1.1 or 2.0
>Gradient Accumulation: Used for virtually extending batch sizes for less VRAM cost. Not recommended
Jesus fucking christ.
>Batch Size: Represents the maximum number of images in each batch. Multiple batches allow for quicker training but also exhibit problems with symmetricality in the end bake. Keep at 1.
Fucking dumbass.
>Pyramid noise: Tweak to model noise, supposedly less destructive than noise offset. Causes quality deterioration. Keep off.
Actual retard.

Is that you, refiner-faget? Because this is just as dumb and seemingly intentionally damaging as that bullshit.
Anonymous No.8622761 [Report]
Hmm I don't have any reaction pic for this, time to start the oven
Anonymous No.8622762 [Report] >>8622763 >>8622770
>>8622760
Why not just explain what he should have said instead of calling him stupid? I'm so bored with this low iq "banter" of every fucking thread on 4chan. Do something productive for once.
Anonymous No.8622763 [Report] >>8622765 >>8622766
>>8622762
It's better this way. He's far too sure of his opinions, even ones that are obviously wrong like scale v-pred loss.
Anonymous No.8622764 [Report] >>8622770
>You uh. You kind of need that enabled to train on vpred, you know.
retard
Anonymous No.8622765 [Report] >>8622784
>>8622763
>scale v-pred loss.
you're confusing scale v-pred loss with v- parameterization
Anonymous No.8622766 [Report] >>8622810
>>8622763
>It's better this way
No it isn't. It's only "good" for you since your entire purpose in commenting is attempting to prove yourself superior to him. Why not share your knowledge with the rest of the class of fuck off? If you're the smartest person in the room, you don't need to be here.
Anonymous No.8622767 [Report]
>>8622760
>refiner-faget
this is me, anon >>8616298
Anonymous No.8622769 [Report] >>8622771 >>8622791 >>8622806 >>8622867
What are the most tags you've ever used in a prompt? I'm up to near 400 now. Gunning for a super specific image with a ton of things in it, including stuff that doesn't exist as tags or only works weakly, so you have to use a dozen hacks and try to make them all work together without interfering, is surely a challenge.
Anonymous No.8622770 [Report]
>>8622762
After a certain point it isn't worth it. And a lot of it comes down to "you're just trying to explain the functions of the application, don't be a dumbass and keep your retarded pet decision shit to yourself."
I stopped where I stopped because I just couldn't care anymore. There's more fucked up shit in that "guide."
>>8622764
It's been a long while since I had to deal with settings to make vpred loras work but from what I remember, trying to actually do all the proper vpred shit and not having that disabled just output noise.
If you don't have all the proper flags in place then sure, it will technically probably work, but you're training it as EPS and that's kind of contrary to the point.
Anonymous No.8622771 [Report] >>8622786 >>8622791
>>8622769
>What are the most tags you've ever used in a prompt
Meant tokens there.
Anonymous No.8622780 [Report]
>want me to pour you another glass shinji?
Anonymous No.8622784 [Report]
>>8622765
Am I, or is he?
Anonymous No.8622786 [Report]
>>8622771
Comfy doesn't have an integrated token counter so I have no idea. But at some point adding more tags either doesn't change anything or overpowers some already existing ones; I usually stay under 30 whole tags.
Anonymous No.8622788 [Report]
Anonymous No.8622791 [Report]
>>8622769
>>8622771
150 tokens
Anonymous No.8622792 [Report] >>8622800
felt like making a short story with peach
https://files.catbox.moe/x1kntf.cbz
Anonymous No.8622800 [Report] >>8622807 >>8622818
>>8622792
What the fuck is a cbz. did you just give me a virus
Anonymous No.8622806 [Report]
>>8622769
>400 tokens
Absolute madprompter. My average prompt sits at around 100, probably only goes up to near 150 at max.
Anonymous No.8622807 [Report]
>>8622800
comicrack
Anonymous No.8622810 [Report]
>>8622766
>Why not share your knowledge with the rest of the class of fuck off?
do you think that anon actually has any knowledge to share? 100% of these drive-by training posts are made by schizos who have never actually bothered testing anything themselves
Anonymous No.8622818 [Report]
>>8622800
It's a zip file I think.
Anonymous No.8622819 [Report] >>8622822 >>8622828 >>8622831 >>8622847 >>8622873 >>8623153 >>8623166
Just realised how important some token orders are. At least, the order I put some artists in has a huge impact on 102d.
How do you 'nonies order your prompt? Not sure if mine is optimal but it's: artists, quality tags, background/meta, positional details, subject details, negpips.
Anonymous No.8622822 [Report] >>8622826
>>8622819
I put negpip anywhere. I put quality tags in the front, the artist, then character info, then background at the very end with the loras.
Anonymous No.8622826 [Report]
>>8622822
Same, but it's more out of habit and to keep the prompt tidy, I haven't really noticed changes swapping the order of tags
Anonymous No.8622828 [Report]
>>8622819
Prompt template is as follows:
>quality tags
>artist tags
>fundamental (unchanging) female info
>clothing
>facial expression
>female body position
>male information/sex tags
>background
>loras
I don't want to buy the prompt order snake oil, thoughever.
Anonymous No.8622831 [Report]
>>8622819
More or less the same as yours but I don't use any quality tag
Anonymous No.8622847 [Report]
>>8622819
I order by importance, or what thing the model should concentrate on first, and model bias. Because of a model's biases towards certain tags, you need to use trial and error unless you already know how all the tags you're using likes to get interpreted by the model. Of course this is if you encounter issues where the model is not generating what you want, if it is then you don't need to further perfect the prompt unless you really want to.

That's if all my tokens fit into the token limit. If they don't, then generally speaking I split my prompt up into concepts using BREAK, and redundantly begin all (or most) of them with tags that tie together elements of the image. I base this on the theory that cross attention relates tags across prompt chunks more loosely, while each individual chunk is better understood. The effect might be placebo, but I feel like it helps, so I haven't stopped doing it. It makes my prompts more structured and easier to parse anyway, which I think is the greater benefit.
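A sketch of that chunking (all tags here are placeholder examples; webui pads each BREAK-separated chunk to its own 75-token block, and the repeated lead tags are the "tying together" part):

```
1girl, red dress, city street, night
BREAK
1girl, red dress, frilled sleeves, lace trim, hair ribbon
BREAK
city street, night, rain, neon lights, wet pavement
```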
Anonymous No.8622865 [Report]
>>8622346
>t2i + detailed anzhcs
>upscale
>inpaint
Anonymous No.8622867 [Report]
>>8622769
I never go over 300 as it just fries anyway. Basic scenes are normally around 200. I only push 300 when there are specific angles and poses I'm trying to prompt-engineer with certain clothing.
Anonymous No.8622868 [Report]
vibin'
Anonymous No.8622873 [Report]
>>8622819
>2boys
>character, copyright(optional)
>artists
>positions, actions, overall composition
>character details, clothing
>concepts like size difference, penis size difference, height difference, etc
>background and scene items
>quality
Anonymous No.8622884 [Report] >>8622891 >>8622895 >>8623151
Trying to train a style lora with only 33 images, any tips? is it even possible?
Anonymous No.8622885 [Report]
>>8622708
can i see a boxo?
Anonymous No.8622891 [Report]
>>8622884
I've trained one with 24 so yes. If your current preset is working for you then stick to that and you'll be fine.
Anonymous No.8622895 [Report]
>>8622884
Sure, style will work even with one image. Can't really call it a style lora tho since it'll reproduce everything. The fewer pics you have the more biases the lora picks up in terms of composition, characters, background, lighting, etc. Ideally you'd have a whole bunch of different ones with style being the only thing they all share.
Anonymous No.8622917 [Report]
Ok so I tried using a bunch of loras to make a small, non-loli, non-shortstack woman relative to the environment. It kind of worked, but it also made the girl look too much like a loli, especially the narrow hips. So I tried using those bottom-heavy loras to counteract that, and doing that fucks the scale up again and also makes it look more like a shortstack, which isn't the goal. So yeah, I guess img2img/controlnet is the only way.
Anonymous No.8622940 [Report] >>8622979 >>8623753
eps schizo if youre here try to bake a lora for this artist. or anyone who wants to try. i think theres more of their images on their twitter
https://danbooru.donmai.us/posts?tags=inutokasuki
Anonymous No.8622979 [Report] >>8622981
>>8622940
>i think theres more of their images on their twitter
understatement of the week.
here's a twitter scrape with, I think, all of the photos and random gachashit screencaps removed (it has 343 images)
https://files.catbox.moe/8f5t46.zip
wouldn't use it as a dataset outright though. there's a lot of weird shit, a lot of weird/complicated poses, a lot of really lowres stuff and the artist bounces between like 3 or 4 different brushes.
anyway I'll do a lazy-tier run with a cut-down dataset, see what it's doing, and adjust accordingly to try and get something out overnight.
Anonymous No.8622981 [Report] >>8623217
>>8622979
thanks bwo
Anonymous No.8623151 [Report]
>>8622884
Use flips and crops with repeats
Anonymous No.8623153 [Report]
>>8622819
lora triggers,
artists,
characters,
1girl, blue eyes, standing shit,
2boys, extra dark skin shit,
composition,
quality
Anonymous No.8623166 [Report]
>>8622819
Technically Illustrious has a set order from the paper, but honestly you can rearrange the order ass backwards and it will still work.
Anonymous No.8623217 [Report] >>8623243
>>8622981
https://files.catbox.moe/knnkc9.png
https://files.catbox.moe/q24kuq.png
I probably need to do another 1-2 runs testing an adjusted dataset, but I guess the artist should end up functional-ish. It doesn't seem to be escaping all the problems that usually pop up with these sorts of digital rough-sketch styles, though: the anatomy is usually a bit on the thicker side and hands can end up pretty nebulous. I'd say it probably works better generating in one style for anatomy/composition and then upscaling at a higher denoise (that's what was done for the hoshino catbox), or as part of a mix, than by itself.
I'll probably have something uploaded by tomorrow or so.
Anonymous No.8623243 [Report]
>>8623217
Sour cream looks tasty, maybe I'll go put some on bread.
Anonymous No.8623265 [Report] >>8623268 >>8623800 >>8623837
Anon, for fuck's sake, you should try launching training with this environment variable. It's basically free VRAM.
PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
Anonymous No.8623268 [Report] >>8623269
>>8623265
Does this affect the training speed?
Anonymous No.8623269 [Report]
>>8623268
It's as quick or slightly faster.
Anonymous No.8623356 [Report]
>Hentai Games General have become Hentai Generation General
Are things that bad?
Anonymous No.8623365 [Report] >>8623368
>>8619220
>>8619235
I see a lot of kyokucho there, nice gens btw.
Anonymous No.8623368 [Report] >>8623371
>>8623365
the artist style name is beitemian (yaoi)
Anonymous No.8623371 [Report]
>>8623368
>beitemian
oh, the way the girls were depicted reminded me a lot of kyokucho. oh well, nice to know, I'll test him anyway, looks interesting.
Anonymous No.8623441 [Report]
thoughts on chroma?
Anonymous No.8623444 [Report]
>>8620340
>I just have it set to 1152*1152 here to somewhat combat the over sharpening that lanczos causes when downscaling very large images
You can fix that by changing buckets interpolation from INTER_CUBIC back to INTER_AREA in sd-scripts/library/train_util.py
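fwiw here's a pure-python 1-D sketch (not the actual cv2 code) of why cubic-family kernels (INTER_CUBIC, lanczos) read as oversharpened on hard edges while area averaging can't overshoot:

```python
# Catmull-Rom (the cubic family cv2/lanczos belong to) overshoots past
# the input range on a hard edge -> the "halo"/oversharpened look.
# Box/area averaging is a plain mean, so it can never leave [min, max].

def catmull_rom(p0, p1, p2, p3, t):
    # standard Catmull-Rom spline evaluated between p1 and p2
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

edge = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]  # a hard edge in [0, 1]

# sample halfway between indices 4 and 5, just past the edge
cubic_sample = catmull_rom(edge[3], edge[4], edge[5], edge[6], 0.5)

# area averaging = plain mean over the source window
area_sample = sum(edge[2:6]) / 4

print(cubic_sample)  # 1.0625 -> overshoots above 1.0
print(area_sample)   # 0.5    -> stays inside [0, 1]
```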
Anonymous No.8623458 [Report]
>page 10
Anonymous No.8623477 [Report] >>8623484
>>8618317
>>8619066
Could I get box please?
Anonymous No.8623484 [Report]
>>8623477
https://litter.catbox.moe/sgsjw7rmvn5huwfs.png
Won't help you much. Used controlnet off an earlier gen for the pose (same prompt), then adjusted the style mix again when upscaling.
Anonymous No.8623518 [Report] >>8623532
>>8620323
no scalar? no dropout?
Anonymous No.8623532 [Report] >>8623542
>>8623518
Dropout kills style for me, even at the lower suggested end of 0.0005. I did some bakes with 0.0001 and below, but didn't really notice any changes (positive or negative) at that point.
Scalar gave me errors and I never got to test it.
Anonymous No.8623542 [Report] >>8623543
>>8623532
Is that caption dropout or neuron dropout? I run the latter at 0.1 and styles are not killed.
Anonymous No.8623543 [Report] >>8623570 >>8623594
>>8623542
>0.1
With dora?
Anonymous No.8623549 [Report] >>8623552 >>8623777
are inpainting models necessary?
Anonymous No.8623552 [Report]
>>8623549
not really, but it would be very nice to have; we'd be able to put completely new elements into a gen without any extreme shit to make them blend nicely
Anonymous No.8623570 [Report] >>8623571
>>8623543
no, with regular lora
might be dora conflicts with that somehow
Anonymous No.8623571 [Report]
>>8623570
0.1 is fine with old "doras" and locons.
Too high for new, fixed doras.
Anonymous No.8623573 [Report] >>8623574
i cant get good fingers for the life of me even with inpainting, any recs?
Anonymous No.8623574 [Report] >>8623577
>>8623573
you are clearly inpainting wrong if you can't get that pose right, post your current inpaint settings
Anonymous No.8623577 [Report] >>8623581 >>8623591 >>8623604 >>8623628
>>8623574
Just trying out a bunch of different values with the padding and denoise. Trying to follow this guide https://rentry.org/fluffscaler-inpaint-old
Anonymous No.8623581 [Report] >>8623587
>>8623577
>512 * 594
No shit inpainting isn't helping, try 1024*1024
Anonymous No.8623587 [Report]
>>8623581
Tried it and it barely seems to be helping out
Anonymous No.8623591 [Report] >>8623601 >>8623628
>>8623577
increase mask blur to 24
decrease masked padding to 64
enable soft inpainting
use more steps (32)
increase denoise to .5
use 1024*1024
Anonymous No.8623594 [Report] >>8623684
>>8623543
It's neuron dropout. I haven't tried it on higher rank loras or the full preset yet, but the 16 or 32dim loras I usually train definitely suffer some degradation, from which they don't recover during my usual training length
Anonymous No.8623601 [Report]
>>8623591
Best iteration so far, thanks anon i'll save these settings for future use
Anonymous No.8623604 [Report] >>8623635
>>8623577
>euler a
>20 steps
anonie pls
Anonymous No.8623628 [Report] >>8623635 >>8623647 >>8623799
>>8623577
What >>8623591 said and don't use ancestral samplers for i2i stuff
Anonymous No.8623631 [Report] >>8623637
>realized a good part of my shitty artist recognition was just scheduler/sampler not resolution
retest coming :-DDD
or maybe not
desu i do love the greater depth and detail you get at higher base reses but the occasional weird anatomy snakies do get annoying
Anonymous No.8623635 [Report] >>8623640 >>8623641 >>8623643
>>8623628
>>8623604
What samplers and amount of steps do you guys rec
Anonymous No.8623637 [Report] >>8623643
>>8623631
>realized a good part of my shitty artist recognition was just scheduler/sampler not resolution
I posted about this some threads ago
Anonymous No.8623640 [Report]
>>8623635
I'm using euler e 25
Everyone has their own schizo theory, but anyway it only takes a few seconds to try another pair on any image you gen.
Anonymous No.8623641 [Report]
>>8623635
Euler + SGM Uniform or Simple works fine for inpainting
Anonymous No.8623643 [Report]
>>8623635
Anything that isn't euler a desu; steps don't really matter since they get scaled by the denoise amount anyway, so it's like 3 vs 5 steps in the end
>>8623637
i was posting about that too but i was getting off a highres high and tested the artists with the new sampler along with the higher res at the same time and thought it was related lmao
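the scaling works out roughly like this (assumed A1111-style formula; effective_steps is just an illustration, not the actual webui code):

```python
# at low denoise, img2img skips the early part of the schedule, so the
# actual number of sampling steps is roughly denoise * steps
def effective_steps(steps, denoise):
    # assumed clamp to 0.999 so denoise=1.0 still skips the first step
    return max(1, int(min(denoise, 0.999) * steps))

print(effective_steps(20, 0.35))  # -> 7 actual steps
print(effective_steps(32, 0.5))   # -> 16
```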
Anonymous No.8623647 [Report] >>8623651 >>8623669
>>8623628
>don't use ancestral samplers for i2i stuff
Holy shit, I did follow the guide and kept everything the same as txt2img when doing img2img multidiffusion upscaling, is this why my lineart gets thickened/smoothed out?
Anonymous No.8623651 [Report]
>>8623647
I inpaint with euler a out of pure lazyness and don't see any issues
Anonymous No.8623665 [Report]
>It's NOT funny shinji... You're so done for... Let me in RIGHT NOW!
Anonymous No.8623666 [Report] >>8623716
>>8620323
Pretty sure half of those optim params don't work with regular ADOPT
https://github.com/67372a/LoRA_Easy_Training_scripts_Backend/blob/413a4d09db5265ade3fcd64b402f60180ec9024e/custom_scheduler/LoraEasyCustomOptimizer/adopt.py#L30
It should work with SF one.
Anonymous No.8623669 [Report]
>>8623647
Euler a smooths out a lot even in txt2img but that could be true just because higher reses tend to also smooth things out a bit
Anonymous No.8623676 [Report] >>8623682
uegh honestly though it's hard to go back to 1024x1024
you really do lose a lot of detail and clarity
idkk
Anonymous No.8623682 [Report] >>8623686
>>8623676
sdxl vae fucking sucks
Anonymous No.8623684 [Report]
>>8623594
What kind of degradation? I'm training on a small style dataset with your config and neuron dropout of 0.001 and it looks fine 40% into the run.
Anonymous No.8623686 [Report] >>8623690 >>8623711
>>8623682
i mean it is 1024x1024
that's a resolution that fell off in like 2007 for hentai images lol
i'd say the nai images are proof that the vae doesn't really do THAT much at that resolution
Anonymous No.8623690 [Report] >>8623698
>>8623686
crazy talk honestly, 4 times the detail at the same resolution is definitely noticeable
ironically their upscale sucks so bad it's not even worth it though
Anonymous No.8623691 [Report] >>8623698
>get back into SD
>try a bunch of stuff
>look back on my best based64 gens with the best artist and lora mix I had then
>they're lower res, and less coherent, BUT the style is better than what I can do now
Damn. And the same loras don't exactly exist in the same way for illu/noob. I guess I will just keep experimenting with mixing until I get back the glory.

How easy is it to train a lora btw guys? Can it be done on a 3090?
Anonymous No.8623698 [Report] >>8623708 >>8623750
>>8623690
i still haven't really seen anything that impressive
or actually showing "4x the detail"
a lot of artists posted looked shittier than v3
composition? maybe sure but the pitfalls for artists are still here with the same base res
>>8623691
my setup uses like 7gb vram last time i checked
Anonymous No.8623703 [Report] >>8623704
That reminds me. Does jeremy clarkson still improve gen quality on SDXL models?
Anonymous No.8623704 [Report]
>>8623703
what?
Anonymous No.8623708 [Report]
>>8623698
I think you're coping about the vae desu but whatever
>a lot of artists posted looked shittier than v3
yeah, their fault for baking shitty aom lighting into the model lmao. neta's lumina model is looking way more aesthetic
Anonymous No.8623711 [Report]
>>8623686
do you realize that most monitors are 1080 pixels in height
Anonymous No.8623716 [Report] >>8623724
>>8623666
>It should work with SF one.
I switched from SF to normal and just updated the parameters that caused errors. Don't really know if I can recommend adopt in the first place.
Anonymous No.8623724 [Report]
>>8623716
>I switched from SF to normal
Why?
Anonymous No.8623747 [Report] >>8623748 >>8623752 >>8623796
why do my gens have a random red hue to them
Anonymous No.8623748 [Report] >>8623749
>>8623747
example?
Anonymous No.8623749 [Report] >>8623778 >>8623784
>>8623748
Anonymous No.8623750 [Report]
>>8623698
>a lot of artists posted looked shittier than v3
which one?
Anonymous No.8623751 [Report] >>8623763 >>8624333
Anyone know an artist that consistently draws hips/thighs like Asanagi but doesn't draw wide shoulder like he sometimes does? Don't like the shading and linework he does a lot of the time either.
Anonymous No.8623752 [Report]
>>8623747
maybe it's a comfy quirk because there was another anon with that problem
Anonymous No.8623753 [Report] >>8623772
>>8622940
https://www.mediafire.com/folder/7e2x1fheakgc7/inutokasuki
didn't test it super thoroughly but this seemed to be the best performing run.
Anonymous No.8623763 [Report]
>>8623751
I feel like that's a very /aco/ thing, generally. Jadf takes it pretty far.
Anonymous No.8623772 [Report] >>8623797
>>8623753
Previous image had softer colors and lighter strokes I think? I liked that more, but I'm also not the one who requested the lora.
Anonymous No.8623777 [Report]
>>8623549
Yes and no. no because you can "make do" without it, and yes because without one you can't properly "inpaint": the generated region doesn't align with the rest of the image. this is less true for DDIM for whatever reason...
Anonymous No.8623778 [Report]
>>8623749
this looks quite a bit like vpred without APG/rescale
Anonymous No.8623781 [Report]
sd-scripts is such a cancer
Anonymous No.8623784 [Report] >>8623786
>>8623749
If you are using noob v-pred 1.0 or 1.0+29 and not using any loras, it helps to have some kind of CFG adjustment. CFG Rescale, CFG++, AYS, etc. That or keep your CFG really low, like around 3.
Anonymous No.8623786 [Report] >>8623792
>>8623784
>1.0+29
eh, that one can do fine without various snake oils
Anonymous No.8623792 [Report]
>>8623786
Meant that as "if you're using one of these two AND getting red/blue hues everywhere". If not then obviously you can keep doing your thing.
Anonymous No.8623796 [Report] >>8623798
>>8623747
do you really want to know the reason?
Anonymous No.8623797 [Report] >>8623807
>>8623772
previous one also had a lot more issues with hands. Like, really bad consistency issues.
also pretty sure the reason the lines got more defined is because, well, I added something that was much more defined to the dataset. But I'm pretty sure this is also what corrected the hands issue.
here's a one-off grid comparing the two to show what I'm talking about
https://files.catbox.moe/2u3ip7.png
though if anyone wants that earlier one here's a pixeldrain
pixeldrain com/u/RbFNmthN
Anonymous No.8623798 [Report]
>>8623796
Oh yeah good point, he was asking for the reason not how to avoid it.
Anonymous No.8623799 [Report] >>8623802
>>8623628
>don't use ancestral samplers for i2i stuff
What is the reason why you shouldn't?
Anonymous No.8623800 [Report] >>8623803 >>8623837
I wish someone would finally test this >>8623265 besides myself, I don't get why nobody is using it when it frees over 4 gb VRAM, making it possible to train sdxl's unet at batch size 12 on a 24gb gpu.
Anonymous No.8623802 [Report]
>>8623799
I don't know the technicalities but it always ends up looking like shit for me.
Anonymous No.8623803 [Report] >>8623808
>>8623800
how to?
Anonymous No.8623807 [Report] >>8623817
>>8623797
Why bake on epred and not vpred or illu 0.1?
Anonymous No.8623808 [Report]
>>8623803
If you're on linux you just put
>PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
before the actual command, so it's like
>PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True python sdxl_train.py ...
If you're using windows, you do
>set PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
before running the train script.
If you're using easy scripts, idk. Try setting this variable for the entire system, globally.
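if easy scripts hides the launch command, another option (assuming you can edit its launcher .py) is setting the variable at the top of the launcher before torch is imported, since the allocator config is only read when CUDA initializes:

```python
import os

# hypothetical launcher snippet: must run before `import torch` /
# any CUDA init, otherwise the allocator setting is silently ignored
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```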
Anonymous No.8623817 [Report]
>>8623807
>vpred
because I don't use vpred (and they didn't ask for a vpred train, so I just kept to my usual standard). Not going to go into the bullshit, but regardless of what shitposters try and claim, EPS 0.5 is just the most consistent with my process and requires the least amount of cleaning in post.
>illu 0.1
no point unless you're using an illustrious model or a shitmix with it as the predominant base model. I don't use shitmixes and if you're using a later illustrious model you'll get anatomy wonkiness due to expected resolution mismatch.
tl;dr: because I make things for personal use.
Anonymous No.8623821 [Report] >>8623826
Anonymous No.8623826 [Report] >>8623879 >>8624192
>>8623821
seems like 16 hours weren't spent for naught, now gonna try a 1536x run with the same dataset, should take about 3x as long
Anonymous No.8623837 [Report] >>8623847 >>8623849
>>8623265
>>8623800
Okay I just tried to test it but got similar speed and vram consumption results

3090 TI

Torchastic + fused-backpass + full_bf16 - bs1 - fft - 23444MB - 2.28s/it

Torchastic + fused-backpass + full_bf16 + command - bs1 - fft - 23346MB - 2.26s/it

What was the rest of the config?
Anonymous No.8623847 [Report] >>8623849 >>8623866 >>8624113
>>8623837
>Torchastic + fused-backpass
Hmm, I'm running AdamW4bit+bf16_sr https://github.com/pytorch/ao/tree/main/torchao/optim and naifu instead of sd-scripts, the rest should be +- the same. I'm not really sure why, but it does give me a huge memory advantage, maybe it's because AdamW4bit is implemented in triton? I'm at about 16gb usage using batch size 1.
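the _sr bit is stochastic rounding btw; here's a minimal pure-python sketch of the idea (toy 1.0-sized grid, not the actual torchao bf16 implementation):

```python
import random

def stochastic_round(x, step):
    # snap x down to the grid, then round up with probability equal to
    # the leftover fraction -> unbiased in expectation, unlike
    # round-to-nearest, which would eat updates smaller than half a step
    lo = (x // step) * step
    frac = (x - lo) / step
    return lo + step if random.random() < frac else lo

random.seed(0)
x = 0.3  # imagine a gradient update smaller than one bf16 ulp
n = 10_000
mean = sum(stochastic_round(x, 1.0) for _ in range(n)) / n
print(mean)  # hovers around 0.3; nearest-rounding would always give 0.0
```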
Anonymous No.8623849 [Report] >>8623866
>>8623837
>>8623847
Actually, it may be because using triton kernels requires compiling the model. The last time I was trying to use sd-scripts, I couldn't get the model to compile, so... Maybe you can force it through sdpa but I doubt it.
Anonymous No.8623866 [Report] >>8623889
>>8623847
> maybe it's because AdamW4bit is implemented in triton?
Probably, triton is not even part of sd-scripts. tried with an old january installation of the 67372a fork, which nominally has it, but it doesn't seem to actually utilize it; it's still the same for adamw8+full_fp16
> I'm at about 16gb usage using batch size 1
PagedAdamw8 is probably one of the best for vram saving. while it's still adam, it could fit both encoders and the unet in full fp16 precision under 14gb on my machine, and batch 12 fits fine under 24, though of course at the cost of some speed loss
> AdamW4bit+bf16_sr https://github.com/pytorch/ao/tree/main/torchao/optim and naifu
Linux or windows? Can you show a full command of how you run it with naifu? I'm willing to try it
>>8623849
> sdpa
Nope, no luck either
Anonymous No.8623874 [Report] >>8623876 >>8623877
>get a nice artist mix going
>it also results in banding
aaaaaaaaaaa
Anonymous No.8623876 [Report]
>>8623874
get 4ch vae'd, nerd.
Anonymous No.8623877 [Report]
>>8623874
banding?
Anonymous No.8623879 [Report] >>8623881 >>8623884
>>8623826
Melty native 1280x1792 res gen with working cross-eye stereoscopic effect bonus
Anonymous No.8623881 [Report] >>8623897
>>8623879
damn people still use 1.5
Anonymous No.8623884 [Report]
>>8623879
>cross-eye stereoscopic effect
the fuck?
Anonymous No.8623889 [Report] >>8623923
>>8623866
>it's still the same for adamw8+full_fp16
Ugh, sd-scripts uses bnb implementation of (paged)adamw8bit which is written in cuda and does not require triton.
>PagedAdamw8
You're technically offloading gradients to RAM. I think it's possible to do with one of torchao's wrappers but I haven't tried it yet.
>Linux or windows? Can you show a full command of how you running it with naifu? I'm willing to try it
Linux, as long as you have all dependencies installed you can just run it like this:
>python trainer.py config.yaml
This thing is much more modular than sd-scripts, there are 4 basic fft configs here https://github.com/Mikubill/naifu/blob/main/config/train_sdxl_v.yaml but either way I'm running a heavily modified version of naifu (most notably to add edm2 and some other things I tried playing around) so it's not like my configs will be useful to you.
Anonymous No.8623896 [Report] >>8623898
is krita worth using?
Anonymous No.8623897 [Report] >>8624171
>>8623881
can't be bothered to find good styles on a first test checkpoint
Anonymous No.8623898 [Report]
>>8623896
tell me what kirta is going to do for (You) that other programs won't
Anonymous No.8623923 [Report] >>8623927
>>8623889
> Ugh, sd-scripts uses bnb implementation of (paged)adamw8bit which is written in cuda and does not require triton
Okay, I'm just trying the torchao implementation with sd-scripts. It spits out an assertion error about lr and doesn't start
>lr was changed to a non-Tensor object. If you want to update lr, please use "optim.param_groups[0]['lr'].fill_(new_lr)"
After commenting that out there's just endless model compilation on every step of training, which leads to 98.77s/it. I'm pretty sure the kohya code is fucked somewhere and it's probably an easy fix to get both edm2 and torchao working on the fork
Anonymous No.8623927 [Report] >>8623942
>>8623923
>It spits out assertion error of lr and doesn't start
Ah, I remember it being a quirk of that library, you literally have to follow what the assertion is talking about and convert lr to a tensor like this.
lr = torch.tensor(lr)

>there is just endless model compilation every step of training which leads to 98.77s/it
i don't think you need to compile the entire model, which is probably what sd-scripts is doing. On naifu the training starts in a few seconds.
>every step
Did you run it for like 20 steps?
Anonymous No.8623942 [Report]
>>8623927
> convert lr to a tensor
Yeah, I get it, just don't know where it should be done in the code
> i don't think you need to compile the entire model which is probably what sd-scripts are doing
It's not adapted to triton at all, so besides that it probably recompiles in a loop every step
> Did you run it for like 20 steps?
Just for 5. When you look at the console you can see how it stops for a second after each step completes
Anonymous No.8623954 [Report]
Anonymous No.8623959 [Report]
cook the thread bloody bitch
Anonymous No.8623960 [Report]
more like cucks on this very thread
Anonymous No.8624031 [Report]
why is girls kissing girls so fucking HOT
Anonymous No.8624039 [Report] >>8624057
I keep running into the issue where the model knows an artist well but doesn't draw them as well as I'd like. Would it be a good idea to train a lora with the artist name as an activation tag in that case? I tried doing it without one and the model doesn't learn very much and things come out weird.
Anonymous No.8624057 [Report]
>>8624039
Doesn't take much training at all if you're building on top of existing knowledge, easily 1/4 of what you'd normally need.
Anonymous No.8624074 [Report] >>8624085
>gen 100 images
>the first one was the best
How does this keep happening.
Anonymous No.8624085 [Report]
>>8624074
it's telling you to inpaint instead of rerolling
Anonymous No.8624113 [Report] >>8624135
>>8623847
>adafactor
>fused backwards pass
>unet only
>batch size 3
>12gb vram
Anonymous No.8624135 [Report] >>8624161
>>8624113
>adafactor
Anonymous No.8624161 [Report]
>>8624135
much constructive
Anonymous No.8624171 [Report]
>>8623897
box please?
Anonymous No.8624172 [Report] >>8624179
Is there any point in using global gradient clipping if optimizer can do adaptive and SPAM ones?
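(for context, "global" clipping just rescales the whole gradient vector by its combined l2 norm; clip_global_norm below is a made-up sketch of the idea, not any library's API)

```python
import math

def clip_global_norm(grads, max_norm):
    # rescale the entire gradient vector so its l2 norm is at most
    # max_norm; direction is preserved, only the magnitude is capped
    total = math.sqrt(sum(g * g for g in grads))
    scale = min(1.0, max_norm / max(total, 1e-12))
    return [g * scale for g in grads]

print(clip_global_norm([3.0, 4.0], 1.0))  # norm 5 -> rescaled to ~[0.6, 0.8]
print(clip_global_norm([0.3, 0.4], 1.0))  # norm 0.5 -> left untouched
```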
Anonymous No.8624176 [Report] >>8624178 >>8624179
Is there any point in using WHAT if optimizer can do WHAT and WHAT??
Chill out bro, just type
>1girl, 1boy, touhou, dark-skinned male, suspended congress
and enjoy the show like a NORMAL person.
Anonymous No.8624178 [Report] >>8624179
>>8624176
>suspended congress
uhm go back to pol
Anonymous No.8624179 [Report]
>>8624172
>>8624176
>>8624178
its nai
Anonymous No.8624180 [Report]
uh oh hdg meltie
Anonymous No.8624188 [Report] >>8624191
is controlnet stuff better on comfy or the webui
Anonymous No.8624191 [Report]
>>8624188
You can't use tiled CN with multidiffusion upscale in webui, iirc
Anonymous No.8624192 [Report] >>8624199
>>8623826
>1536x run
yeah it definitely pays off, same seed for a row, 1928x1152 base res on the bottom, 2048x1152 on top
Anonymous No.8624199 [Report] >>8624202 >>8624232
>>8624192
you trained with 1536x1536 base res? it does look sharper
was our bwoposter right about training loras on a res higher than 1024
Anonymous No.8624202 [Report]
>>8624199
no idea who you're talking about but if you're training on illustrious 1, 1.1 or 2 you should train at 1536x1536
you can probably get away with training it on that for other models, too, but I wouldn't use the lora for anything other than upscaling if you do.
I'd argue it's stupid to do in general because illustrious v1/2 are shit but if you're intent on using it you should probably at least do it correctly.
Anonymous No.8624204 [Report]
I got a better result just getting something at 1536 then doing at 1024 and upscaling it 2x
Anonymous No.8624228 [Report]
1536 training seemed pretty sharp when i tried it but the actual details of everything seemed less consistent and shittier
Anonymous No.8624232 [Report] >>8624237
>>8624199
>you trained with 1536x1536 base res?
Yeah, but it's not exactly that simple. I first trained noob vpred 1.0 (not a lora) on a dataset of 4776 images for 15 epochs at 1024x, and now I'm continuing from that checkpoint at 1536x. I'm also using a cosine schedule, so early epochs look kinda fried and smudgy, but it looks like it's already starting to forget some things desu. And it looks like the color blowouts only got exaggerated. Pic is 5 epochs at 1536x.
>it does look sharper
are you an actual schizo?
Anonymous No.8624237 [Report] >>8624256
>>8624232
what kind of images are you finetuning noob on?
is the objective of your finetuning to improve backgrounds? (judging by your pic)
>color blownouts only got exaggerated
i've been wondering what causes these color blow outs on vpred models
>are you an actual schizo?
honestly i might be. i still found your image looking sharper than what i've been able to gen, albeit a bit smudged
Anonymous No.8624256 [Report] >>8624259
>>8624237
>what kind of images are you finetuning noob on?
all kinds except furry, comic and 3dpd (i included a sliver of /aco/-like artists present on danbooru for regularization) but it's not a good dataset by any means.
>is the objective of your finetuning to improve backgrounds?
not really, it's just that i got a bit tired doing an upscale just so that the details are crisp. although i included some backgrounds.
>i've been wondering what causes these color blow outs on vpred models
if you want a short answer, it's due to undertraining on the very first steps, when your picture consists almost entirely of noise. The model isn't really sure which "color" it should set for various parts of the image, and if for some reason you want to put a dark object on a white background (or vice versa), the model may get confused and set the "overall color" for that dark object as "bright"
Anonymous No.8624259 [Report]
>>8624256
all images here are genned at 1152x2048 btw
Anonymous No.8624290 [Report] >>8624301 >>8626199
>22 hours on page 10
Anonymous No.8624301 [Report] >>8624303
>>8624290
then bake it, faggot
Anonymous No.8624303 [Report] >>8624316
>>8624301
Why (You) haven't do it
Anonymous No.8624316 [Report] >>8624319
>>8624303
Thank you for visiting 4chan dot org. This is an English-speaking board. Try "Why don't you do it?" or "Why haven't you done it?"
Anonymous No.8624319 [Report]
>>8624316
I just woke up ok? my first posts of the day are always this bad
One of these days I'll just reply to everyone in Spanish to spare myself this kind of hassle
Anonymous No.8624327 [Report]
>implying he isn't here 24/7
Anonymous No.8624333 [Report]
>>8623751
pottsness
simao (x x36131422)
Anonymous No.8624376 [Report] >>8624378 >>8624380
Kind of off-topic but does Nintendo DMCA lewd images on twitter? I scrape some accounts with gallery-dl and it said a post was DMCA'd. Googling the link brought me to Midna fanart.
Anonymous No.8624378 [Report]
>>8624376
Pokesluts were the original gacha girls so...
Anonymous No.8624380 [Report]
>>8624376
Nintendo DMCAs everything they don't like, basically.
Anonymous No.8624384 [Report]
>Please wait a while before making a thread
what
Anonymous No.8624387 [Report] >>8624595
nvm

>>8624386
>>8624386
>>8624386

I fucking hate you all btw
Anonymous No.8624595 [Report] >>8624700
>>8624387
>I fucking hate you all btw
whats wrong?
Anonymous No.8624700 [Report]
>>8624595
>whats wrong?
everything
Anonymous No.8626199 [Report] >>8626493
>>8624290
Anonymous No.8626489 [Report]
PAGE 11 BUMP
Anonymous No.8626493 [Report]
>>8626199
why