Sovl edition
Previous Thread:
>>8600493

>LOCAL UI
reForge: https://github.com/Panchovix/stable-diffusion-webui-reForge
Comfy: https://github.com/comfyanonymous/ComfyUI
>RESOURCES
Wiki: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki | https://comfyanonymous.github.io/ComfyUI_examples
Training: https://rentry.org/59xed3 | https://github.com/derrian-distro/LoRA_Easy_Training_Scripts | https://github.com/bmaltais/kohya_ss | https://github.com/Nerogar/OneTrainer
Tags: https://danbooru.donmai.us/wiki_pages/tag_groups | https://danbooru.donmai.us/related_tag
ControlNet: https://rentry.org/dummycontrolnet | https://civitai.com/models/136070
IOPaint (LamaCleaner): https://www.iopaint.com/install
Upscalers: https://openmodeldb.info
Booru: https://aibooru.online
4chanX Catbox/NAI prompt userscript: https://rentry.org/hdgcb
Illustrious-related: https://rentry.org/illustrious_loras_n_stuff
Useful Nodes/Extensions: https://rentry.org/8csaevw5
OP Template/Logo: https://rentry.org/hgg-op/edit | https://files.catbox.moe/om5a99.png
>there's actually a pretty significant bake difference between adamw and adamw8bit
>>8613160
lil bwo has one joke between two generals
>>8613158
>>8613090
What was your end control step for the tile upscale? I've noticed that CN only works well on artists the model knows, but on my loras it hallucinates below 0.95 end step. Idk what's happening.
>>8613176
0.8 denoise first pass, 0.7 denoise second pass
both passes at 0.35 strength and 0.8 guidance end
>>8613090
This is stellar, if it's not the same style would you mind posting a box for this one as well?
>>8613090
thanks, final result looks great
>used my image for bake
now if only I could figure out how to bake that lora again but less melty
https://files.catbox.moe/9o94qg.png
>>8603720
thought that style looked familiar as hell and it turns out i was right
https://www.youtube.com/watch?v=NPU0O5mUJbs
>>8613090
i need some new upscaling snake oil i should try that out
>>8613367
just started with this shit
how did i do
i have more but i'll wait for some feedback
don't even know if this is the correct board to post this
>>8613371
>>8613375
Well, if you want me to be honest
>>8613358
>>8613364
>>8613371
>>8613374
Those are """decent""" but way too similar to each other, the style is more or less good
This is the right board but avoid spamming your images this much, especially when they have the same setting
>>8613376
ok, will do anon
>>8613366
>>8613364
>>8613363
>>8613362
>>8613360
>>8613358
that's what cats get for always sticking their stupid buttholes in people's faces
>wonder what a big tit artist style would look like applied to a loli
>it looks hideous because the artist normally draws wide shoulders
>>8613375
They are pretty good, but posting a lot of variations of the same pic is considered spamming.
>full_body in the prompt
>gen is a cowboy shot
came doesn't want to work for me hm
>>8607584
close-up box please?
>>8613596
Based resolution ignorer.
Any Intel Arc B580 owners? Have you tried training?
>>8613224
Mission accomplished, I think? It's not quite as sharp as I'd want but whatever
>outlives /hdg/
We fucking did it, sisters.
guys where are the good gens?
Optimizers done, now only schedulers and loss (and huber schedule) and I should be done.
>>8613848
oh wtf lmao what happened
did the janny just archive it because of the /aco/spam?
>>8613852
i feel like it could be because op pic sorta looks like futa autofellatio kek
>>8613852
Seemed cursed from the get-go with the pruned related generals, highlightfag not highlightfagging, and I think some anon reported it early on as NAI shilling on irc.
/hgg/ also got archived because of the spam and reports one time, now that i think about it
>>8613800
Umm, sweaty? You can have wide shots in landscape resolution?
>>8614033
Never left, I decided to not post at all on the last thread
>>8614035
the thread was that slow?
>>8614055
It has nothing to do with that, I was schizo testing, uhm, things
>>8613850
>he tests one variable in equilibrium and thinks he's done
I like these panels but I need to get better at prooompting
test
md5: 8de75e64685ce8adba4c7f71686bc789
๐
>>8614416
dunnyo what the hell you're saying cuh but i shall continue testing
>>8614588
They're waiting for you, Freeman, in the test chamber.
>>8614588
can you check if compass supports degenerated_to_sgd = true as an optional optimizer argument and test it?
>>8614602
Compass didn't even want to work on my end so no lol
>>8613153
>got another round of xyzs
>adamw8 looks like dogshit stylistically in comparison to adamw
lol
all these years of "it's exactly the same just faster"
>>8614588
what are you doing anon?
>>8614655
I just 1:1d every easyscripts setting.
>>8614657
I mean, what is the final goal? are you training / fine tuning a model? loras? just learning ai concepts?
>>8614671
A good lora config more or less
>>8614675
oh I see, good luck!
>>8614675
>A good lora config more or less
>>8614588
>dunnyo what the hell you're saying cuh
you're about to
>>8614727
very nice angle/pose
>>8614749
What prompt can help me get that angle? I can't force it via img2img on CivitAI. Any specific prompt that could help?
>>8614756
That's not your picture?
>>8614769
I did gen them + using bed invite lora. But i can't get angles right. It is random
>>8614756
you mean the pov?
I thought you'd know yourself lol, it's quite nice
wait what, this IS /hgg/ what the fuck are those posts
>>8614775
Catbox??
Should just be lying on side, pov, under covers. Might need close-up too at 1.2 or so.
https://files.catbox.moe/54bsvc.png
>>8614784
https://files.catbox.moe/m2orli.jpeg does this help?
>>8614786
Woah this was made on civit itself?
https://files.catbox.moe/5myigu.png
>>8614792
nice. does the accidental thighjob gen often with this prompt?
>>8614791
So? Prompting and loras work the same way, it's the rest of the stuff that's lacking. Adetailer feels really weird, can't use controlnet or regional prompter, fewer upscaling options, etc.
>>8614802
I didn't say whatever you're insinuating, was just surprised by the metadata is all.
>>8614802
People still think you need their 10 layers of snake oil to get a gen
>>8614814
it's more important to apply the snakeoil during lora training
>>8614799
I'm pretty sure I got lucky, but I don't see it being that hard to nudge if you added thighjob to the tags.
https://files.catbox.moe/8ad0w4.png
>starting to notice the weird unsymmetricality of higher batches
it's over
>>8615288
kino
>series
what about idolmaster or persona?
>>8615288
>Just did this
It's clearly ai generated anon.
>>8615463
tomimi's
BIG
FAT
tail
>>8615422
>anal DP
B A S E D
I was the anon saying v4.5 was shit. I figured it out, not so shitty anymore but I do prefer 4.0 aesthetics
>>8615527
>but I do prefer 4.0 aesthetics
you mean v3, right?
>>8615550
not when it hits right
v3 is just all over the place
composition wise; style wise it depends on many things
>>8615550
No, anon just loves artifacts and blur.
>>8615557
Me too but only if it's called sovl because it's on 2005 gay cgs
I'm confused, are you supposed to always use the adafactor scheduler with the adafactor optimizer, or is it just some specific one? I know it doesn't work on other optimizers.
>>8615569
Oh wait it automatically does use it, never mind.
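For posterity, the pairing in sd-scripts toml terms looks roughly like this (assuming easyscripts just passes these keys through, so double-check the exact names against your fork):

```toml
# assumed sd-scripts keys; with relative_step=True Adafactor manages its
# own lr, so the matching "adafactor" schedule is what gets used
# automatically instead of a normal scheduler
optimizer_type = "Adafactor"
optimizer_args = [ "relative_step=True", "scale_parameter=True", "warmup_init=True" ]
lr_scheduler = "adafactor"
```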
>>8615463
Repost so early?
>>8615527
Post a picture, sweaty?
>came suddenly started working
ogey
>>8615594
ah, my bad
I just dump them in one folder, lose track sometimes of which ones I already posted
>>8615110
>went back to batch 1
>suddenly optimizers are completely different
predictable but i'm glad i went back to them
>hdg got archived again
oh rmao
hdgsissies just can't catch a break. kek
this is what you get for removing NovelAI from the OP~
can jannies not do that
i don't want a rapefugee flood here
>>8615678
They just tabbed over as we all do, anonie.
the fuck is going on with hdg, and what the fuck is the difference with this thread?
>>8615681
We exhibit self control here and laugh at the hdg purges.
you guys were posting so much trash hdg killed itself again...
>>8615681
i assume
>janny archives previous thread for acospam and assumedly reports
>janny sees scat spam and acospam take up half the new thread
>there now exists a calm alternative so he doesn't care if it gets archived again
If you don't have a post archived in /hgg/ #001, we don't want you. Shoo, refugees. Shoo.
whatever, I'll just post some hentai
Uh oh. That one got her boilin' n bubblin'.
janny should have taken this thread down, this one is the even lower quality hdg
This thread is superior because it wasn't baked by highlightanon
As /hgg/'s first pillar, I should inform you all of the ground rules.
1. No shitposting
2. miquellas are encouraged
3. No catfag
Keep those in mind and you'll get extended residency until your jeet general returns.
>>8615697
>1. No shitposting
NYOOoOOOOooooOOOooOOO!!
>>8615700
I know it's tough, anonie, but we've found here that a great way to fight the urge is autistically discussing lora baking for days on end. I believe in you.
>>8615709
Bakeranon, you should write a rentry.
>>8615709
isn't this general actually dead
>>8615685
This but unironically. I was in the screencap. In fact if you didn't post a pic with the original metadata then you need to go.
what are you n-words doing this time
>>8615720
Looks like mass reporting is the new black.
>>8615738
I should really learn to inpaint...
anyone tried wavelet loss yet for training?
my old config was hilariously fucked up
>three times the lr
>takes twice as long to converge
glad i went and tested this shit
What's the best way to translate a specific style from one model to another? My first approach would be to gen a lot of images on the first model with the style I want and then train a Lora with that for the second model, but I'm wondering if anyone has a better idea, and I know that training AI on AI is bad.
>>8615827
>I know that training AI on AI is bad.
It's not if the images are properly selected
>>8615807
gonna share your findings?
>>8615827
There are plenty of loras trained on AI to replicate specific style mixes of NAI grifters on local and they don't have any specific issues, problems happen when you train whole models mainly on AIslop like LLMs do.
>>8615834
i am still doing it but i can write something up
>>8615851
oh, nah, if you're still testing then please continue
back from vacation for posting march 7th with small tits
Not sure if this function already exists somewhere, but I made a couple custom nodes for comfy since I was annoyed tard wrangling mixes with washed out colors. If anyone is interested, just drop the files into the custom_nodes folder if you want to try them
Main useful one applies a luminosity s-curve to images, meant to be run right after vae decode (and then sent to hires pass nodes). Should be fine on defaults, just don't raise y3 unless you want to go blind (or are doing some spot color/monochrome style). Defaults may not have a good effect on images with backlighting.
luminosity s-curve file: https://files.catbox.moe/50x9cm.py
Other one was meant to do monkey color grading by a warm/cool axis but it also applies a multiplier on chroma which is some more snake oil to have fun with, use this node with low strength
color warmth grading file: https://files.catbox.moe/73o3uz.py
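For anyone who'd rather eyeball the idea than run random .py files, a node like this is only a few lines. This is just a minimal sketch of the general shape, not the catbox code; the class/key names and the smoothstep curve here are made up for illustration:

```python
import torch

class LuminositySCurveSketch:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "image": ("IMAGE",),
            "strength": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0, "step": 0.05}),
        }}

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "apply"
    CATEGORY = "image/postprocessing"

    def apply(self, image, strength):
        # ComfyUI IMAGE tensors are [B, H, W, C] floats in 0..1.
        # Rec. 709 luma pushed through a smoothstep s-curve; the per-pixel
        # gain is applied to all channels so hue ratios are preserved.
        luma = image[..., 0] * 0.2126 + image[..., 1] * 0.7152 + image[..., 2] * 0.0722
        curved = luma * luma * (3.0 - 2.0 * luma)  # smoothstep s-curve
        target = luma + strength * (curved - luma)
        gain = (target / luma.clamp(min=1e-6)).unsqueeze(-1)
        return (torch.clamp(image * gain, 0.0, 1.0),)

NODE_CLASS_MAPPINGS = {"LuminositySCurveSketch": LuminositySCurveSketch}
```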
>three days of testing just to bake a single lora
worth
>>8615906
>three days of testing just to bake a single lora
My loras end in _XX, imperial. Can you say the same?
>>8616019
I gotta do it on the whole dataset first, but I'm starting to write up that "guide".
>>8616175
it's just ixy m8
>>8615720
coomin my brains out because of the marvel announcement. we're so back.
>>8615827
>>8615833
>training AI on AI is bad
it's actually always bad if you're using plain eps gens to train a ztsnr model because you're overwriting weak low snr knowledge with the specific pattern that occurs in your dataset
training on nai or noob outputs is ""okay""
training on pony is bad
https://rentry.org/hgg-lora
>>8616235
>The base model you are training on, it should be either Illustrious0.1 or NoobAI-Vpred 1.0 for most users.
Woah woah woah. What's wrong with baking on epred?
>>8616290
illustrious is eps
no real reason to bake on eps noob
>>8616235
nice but at the end of the day it's still some list of incomprehensible magic spells, there's too little evidence and even some incorrect stuff, still way better than whatever is in the op
>Scale V pred loss: Scales the loss to be in line with EDM
this is wrong, it just multiplies the loss with the snr curve, nothing super fancy, but this is mutually exclusive with min snr and it should not be used under any circumstances except if you're training the model using (newest) v-pred debiased estimation (which is also mutually exclusive with minsnr and it's basically the same thing as min snr with gamma=1 but smooth), rough sketch of both weightings at the end of this post
>Flip Augment: Flips the latents of the image while training. Causes quality degradation. Keep on.
kek
>Max Grad Norm
it prevents too large gradients (basically gradients are the proposed changes to the network weights) from throwing off the training
>Noise offset
any kind, must be disabled for vpred/ztsnr
>Multiple batches allow for quicker training but also exhibit problems with symmetricality in the end bake. Keep at 1.
what? sounds like an issue with bucketing
>Keep Tokens Separator: Unsure of a practical use for it.
it's for nai-style training when you want to separate some meta tags (and keep them in place) from other tags, for example "1girl, artist, character ||| whatever", basically keep tokens but more flexible
>Cache Latents
this also increases vram usage unless you cache it to disk
>Dropout: Used to drop out parts of the model. Causes degradation of quality. Keep off.
most likely you've used way too large numbers here, it should be below 0.1
>Prior Loss Weight
>Regularization Images: For specifying regularization images, a method of supposedly reducing overfitting. Uncommon use.
these options are specifically for dreambooth-style datasets, prior loss weight is a multiplier of the regularization image loss
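since this keeps coming up, a rough sketch of the two weightings being argued about, using the usual per-timestep snr definition; check the sd-scripts source for the exact behavior:

```python
import torch

def min_snr_gamma_weight(snr: torch.Tensor, gamma: float = 5.0, v_pred: bool = True) -> torch.Tensor:
    # Min-SNR-gamma: clamp the per-timestep loss weight so that
    # low-noise (high snr) timesteps don't dominate training
    clamped = torch.minimum(snr, torch.full_like(snr, gamma))
    return clamped / (snr + 1.0) if v_pred else clamped / snr

def scale_v_pred_loss_weight(snr: torch.Tensor) -> torch.Tensor:
    # what "scale v-pred loss" amounts to: an snr-shaped multiplier on
    # the loss, which is why it's mutually exclusive with min snr
    return snr / (snr + 1.0)
```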
>>8616290
noobpoint5 bros?
oekakianon cultural erasure
>>8616298
this is why i didn't even want to make this rmao
>>8616300
are you scared of the truth?
>>8616300
Not him but do you see no benefit in discussion? Debate isn't supposed to be about making the other person look stupid or feeling superior but about putting our heads together to get closer to the truth. Some things you have there are accurate while others aren't. Why not update it to reflect his corrections so the whole rentry is correct?
>>8616299
I highly refuse to believe that anyone that isn't him is still using that version
>>8616305
pretty sure cathag uses it because all his images are washed out
>>8616294
>>8616295
>>8616299
Ok hold on bros I'm freaking out now and I have to remake all my loras. What would be the difference between baking on base illu and vpred? Besides the settings. Shit my current bake just finished and btfo'd me too I'm so sick of this shit.
>>8616304
well more like everyone's an expert until it's time to sit down and write lol
i don't even have the code for it anymore, it's more of an AAR of my 1:1s than a proper guide, imho the most interesting parts of it are the fp8 stuff since you can say "well actually x works better for me than y" just like i did about actual parameters
>>8616300
What you can do is paste everything there into somewhere else so others can review and edit things around, have some overall revisions to then have the final "mostly correct/generally agreed" current way to train a lora
Good initial effort tho
>>8616298
Nice,
I'm also surprised dropout causes quality degradation but maybe it's not as compatible with lora due to the low fidelity nature of how the lora values propagate into the resulting model on application
Another thing I'm surprised about is scale weight norms, I would've expected that to help counteract frying but I guess not
At the end of the day, I imagine the most impactful settings will be the dataset quality itself and sampler/optimizer settings anyways. What's the verdict on that and did your opinion on it change from whatever you had beforehand?
16/32 Locon AdamW cosine 1e-4?
How many steps?
>>8616311
>well more like everyone's an expert until it's time to sit down and write
Yeah you're not wrong either. I've had my fair share of this, specifically his brand of thing where everyone will say nothing for months only to show up and tell me I'm wrong and that "this has been known since 2008" when I finally do a write-up and post results.
>well actually x works better for me than y
Yep a lot of it comes down to just pure empirical testing which is why I'm saying your contribution is good. The more info we all have, the better and the less time we all collectively have to spend testing. Ideally if we knew *why* x worked better than y then every lora would improve, but just having some solid evidence is a good start.
>>8616300
If it's any consolation I appreciate you putting something together and having to collate info from the barrage of people claiming their way is the correct way. I owe you a gobby.
>>8616298
>this is wrong, it just multiplies the loss with the snr curve
TBF I was going with the ezscripts definition.
Maybe it really should be put onto some doc for fixing up and reuploading.
>>8616321
I think the saddest part of this is that this field just doesn't do "why". Shit's a black box.
>>8616317
If you meant to @ me then yeah definitely, Locon was the straightest no-subtlety improvement here, at least for characters.
AdamW, like I said, instead of 8bit, which logically (it is 8bit!) does make it slightly worse. I'm more surprised that nobody mentioned it before, but I bet a lot of it was just "waow less vram more good" during the early days when it became the "SOTA".
Cosine did surprise me because people recommend with restarts but yeah, the restarts version just kept more of a consistent "fry" in the images, especially relating to clothes.
>>8616298
>most likely you've used way too large numbers here, it should be below 0.1
Dropout is a method from ancient times. Only ML practitioners working on toy models or learning from old material actually use it these days.
>>8616331
or people experimenting
I find it makes the lora work better on further merges and finetunes
Also haven't noticed any quality degradation, and it wouldn't make sense for there to be any. Just slows down the training a bit.
>>8616328
oh yeah I replied to the wrong post lol
I guess it makes sense @ restarts since it cranks the learning rate back up when the chance you're at an actual minimum during high-dimensional gradient descent is incredibly minuscule, meaning you're not even getting the advantage it's meant to provide in the first place.
How many steps do you usually shoot for?
>>8616338
I think I'm still team epoch but I have some diverse full bakes to do now to be sure.
>>8616317
>I'm also surprised dropout causes quality degradation
to be completely honest this is also my experience with loras
>scale weight norms
iirc it rolls back the weights that become larger than a specified threshold, this is basically only useful for stable training using high lr and small alpha, and even then you'd need to find an equilibrium
>I imagine the most impactful settings will be the dataset quality itself
as long as you're not training the model in the wrong way (by enabling noise offset for a vpred/ztsnr model for example), this will always be the case, other things will only affect how quickly you train the model and possibly how much it will forget
>>8616328
>I think the saddest part of this is that this field just doesn't do "why". Shit's a black box.
projecting as is, it's just that educated people rarely visit 4chan's /h/.
>want a specific body type that a certain artist draws but want another artist's style
>use [artist1:artist2:0.6]
>it just werks
Man, I've been using these kinds of techniques for ages but never thought about applying it to this use case until now somehow.
>>8616349
sounds like me when I finally started blending hair colors. [red hair:dark brown hair:0.3] makes a nice auburn.
>>8616352
Oh yeah, I've been using it for color blending for quite a while. Also using negative sometimes to try and make it more consistent.
>>8616308
Are you baking for vpred though? That's the question.
>>8616359
So if I bake on illu and run on vpred will it really be that much worse? I'm running on 102d and all the epred bakes I did seem okay but after my latest bake I'm not so sure anymore.
>>8616342
I read up about it a little just now and the dropout making things worse makes sense. Due to the nature of adding multiple passes of gaussian noise in steps, diffusion models expect/require a lot more consistency in network output than dropout was ever designed around, so it ends up counterproductive here.
I'm inclined to agree with not using dropout now and just trusting that the layer/group norms in SDXL should be doing their job (not that I know for sure they're used in ezscripts lora training, if anyone wants to confirm)
>>8616362
It's most visible in stuff like inbuilt styles on non-vpred character loras. You just get way less accuracy, and a lot of greyness because of the incompatibility. It's not bake-ruining but it's worth comparing.
Actually, "OP" here, I was thinking, since I use full shuffling now, would adding an artist tag for a style bake still make sense?
>>8616369
Eh fuck it I'll bake twice and be empirical about it kek
>>8616369
No there's no point.
>>8616369
I'd imagine it wouldn't hurt, especially if there's some vestigial knowledge of the artist from danbooru so as to nudge that knowledge to the surface
If it wasn't in danbooru/after cutoff (or if the model native artist tag just fucking sucks), I'd say may as well leave it out since style loras seem to work fine without tags, why add another piece of overhead during runtime
>>8616369
I've done it with the "3D" tag once by accident and it still worked like a trigger despite shuffling. Tag order doesn't seem to matter as much on illu/noob anyway.
>>8616391
3d tag is a super special case imo.
>new config also converges styles way faster than old one
That old one needed like 10 epochs more than for characters, I guess it was all the dropouts.
It's kinda funny how much it can change depending on how you test it, but I think the bulk of the old setup I grandfathered from Pony.
I'll test it a bit more and probably post it tomorrow.
>>8616385
>>8616391
Hm yeah it's still needed.
>>8616328
>Cosine did surprise me because people recommend with restarts but yeah, the restarts version just kept more of a consistent "fry" in the images, especially relating to clothes.
ime restarts do help but it depends on dataset size and lr and probably a lot of other stuff. i did some tests a while ago and settled on one restart for every 2400 examples of a concept in the dataset at batch size 1, something like the below in sd-scripts terms.
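(assuming the usual sd-scripts key names here, double-check against your fork)

```toml
# num_cycles is the restart count, so a ~4800-example concept at batch 1
# gets 2 cycles by the rule of thumb above
lr_scheduler = "cosine_with_restarts"
lr_scheduler_num_cycles = 2
```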
Coolio, I'm gonna go on a bake binge soon.
any videos i can use to learn?
>>8616454
This is a reading hobby, sweaty?
Unironically people here would send you to the research papers back in the day.
>>8616454
nyot really
the people making vid tutorials are making them for the common jeet denominator
>>8616454
depends if you have a programming/maths baseline
if you don't, lmao, go follow karpathy course and good luck
if you do, having chatgpt break down the components of sdxl for you until you get to the level of technical detail you're satisfied with isn't a terrible idea.
>>8616454
If you're completely new, then go ahead and search for any video tutorial on YouTube. I started learning by watching those and then reading some guides from the OP a year and a half ago.
where's the nai gens?
im tired of localslop
where's the ironic posts?
im tired of genuineslop
Best way to improve face quality?
>>8616530
make it >30% of the image
>>8616530
>inpaint
Forget that. Use adetailer on a hires pass and type in the face details at 32 padding.
Weird, a couple months ago gens would make me diamonds. Now they barely do anything. Are the models getting worse, have all the good genners left or has my taste just drifted?
>>8616715
Unironically take a break, gooner.
Any other shitmix testing purveyor want to run this model through their saved prompts? I've been awfully impressed with some of the outputs albeit it's not without issue. Curious if it's just good gacha on my part or what.
https://civitai.com/models/832573?modelVersionId=1677841
>>8616298
>Multiple batches allow for quicker training but also exhibit problems with symmetricality in the end bake. Keep at 1.
>what? sounds like an issue with bucketing
How
>>8616817
I refuse to use shitmixes, if I want to fry my model with shitty loras I can do it myself without loading a whole new checkpoint.
>>8616817
What was it that impressed you? I threw a bunch of prompts at it and didn't see much of a difference. Didn't even include 1+29 or 102d custom because they're even more similar to base v-pred.
I think bottom line every merge is just some mix ratio of illu+noob with diluted knowledge and a different base style. As soon as you prompt an artist above 300 pics or add a lora, all the checkpoints sort of drift together.
>>8616715
>he was jerking to other people's gens
ishiggydiggy
>>8616817
I wouldn't say it's better than 102dcustom but it's better than most shitmixes I've tested, primarily regarding inbuilt artist replication.
>>8616715
whose gens did you like most bro
>>8616715
You kidding me? Look at this shit
>>8615738
it's getting even more amazing, and apparently now you can animate them too? Things are getting better and better.
>>8616910
My go-to's are 291h and custom. Flip between the two as I find 291h to be soft (good or bad thing depending on the artist mix) but have more artist fidelity whereas custom has more color depth but can overpower the style of certain artists.
This shitmix I stumbled on is giving me a nice in-between of both those models which is ideal. But it wouldn't be the first time I find a model I think has potential and then dump it a week later.
>>8616912
>primarily regarding inbuilt artist replication
Yeah that's what I liked about it. Gives me the fidelity of 291h with more color depth, without diving into fried or blown contrast territory.
>>8616532
>and type in the face details at 32 padding
Sorry I'm retarded. Can you explain what this part means?
>>8616530
higher base res and inpaint
>>8616936
nta and he might not be too smart either since Adetailer is the same as inpainting.
But anyway when inpainting change your prompt to describe only the face (and style) not the whole scene. Adetailer gives you a separate prompt window to simplify this. Mask padding is in the settings.
>>8616936
In the adetailer prompt type the info for her face plus your quality tags, artist tags, lora etc. I organize my prompts so it's all easy to copy/paste. For your settings put the resolution to 1024x1024 and 32 mask padding at 0.4 denoise and faces should come out much better.
>>8616939
You're so retarded since I'm telling him to automate his inpainting rather than doing it manually. You've already conceded the fact that they're the same so what's the issue?
>>8616936
booru tags that describe said character's face.
>yellow eyes, short eyebrows, scar across eye, etc
32 padding is the area the model will derive info from to change your masked zone. 32 or 64 are usually safe bets along with soft inpainting to eliminate any inconsistencies in the masked area.
>>8616940
The post made it sound like you thought adetailer was somehow superior to inpaint beyond just finding the face automatically. A misunderstanding then, though at least we clarified for the newfag.
I would apologize for calling you "not too smart", but now you insulted me even worse so get fucked.
>>8616947
Based grudge holding anon.
>>8616947
Basic 4chan discourse I'm afraid, but I will apologize anyway since I see my insult was unnecessary.
>>8616949
Based bigger man anon.
>>8616235
>https://rentry.org/hgg-lora
were you the one who was empirically testing various settings and posting those grids?
if you are, could you share the collection of grids as well?
would like to see if i would draw the same conclusions
>>8616958
mmm nyo (i deleted them)
shitty nyogen impersonator...
How are you guys using BREAK?
I feel like the more I use it, the less I know. Are you supposed to organize your tokens by importance so that important things are early in each prompt segment (ie masterpiece, best quality are not in the same segment, but split into different segments as the starting tokens)? Are you supposed to group by concept, so like you put all the clothing tags together, all the background tags together, etc? Or perhaps do you group the tokens by visual proximity, so perhaps tags close to one subject get a group, tags close to each subject's faces get their own groups, etc.
>>8617086
I don't. Some have said that it resets unet or something so that the very next token after BREAK gets full importance which can be helpful but I don't really use it. Organization is done with pressing enter rather than BREAK.
>>8617086
Specific use snakeoil which doesn't work the way you are intending to use it.
>>8617086
>How are you guys using BREAK?
The only legitimate use case for it is to prevent getting tags split between blocks. For example, if you are at 73/75 tokens and have "foreshortening" as your next tag, it would be good to use BREAK, so the "foreshorten" and "ing" tokens don't end up in separate token blocks.
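you can check the split yourself if you're curious, assuming the standard CLIP BPE tokenizer (the exact sub-word pieces depend on the vocab, so treat this as a sketch):

```python
from transformers import CLIPTokenizer

tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
print(tok.tokenize("foreshortening"))  # multiple BPE pieces, not one token
# if those pieces straddle the 75-token chunk boundary they get encoded
# in separate chunks; a BREAK placed before the tag avoids the straddle
```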
>>8617098
this
everything else like "separating characters" is pure placebo
>>8617103
Isn't that also how you separate adetailer face prompts, or regional prompter areas? I don't use webui/forge, just thought I read that somewhere. Might be where their confusion comes from.
>>8617108
I mean technically yeah, but you're changing its function when you use it with extensions like regional prompter. By default it simply separates clip chunks which is only useful to prevent a tag being split into separate tokens
>>8617108
Woah woah woah. You can separate adetailer face prompts? With regional prompter?
>>8617098
This isn't even happening in modern UIs though? If you don't put BREAKs, they only split by commas, never in the middle of a word. Well I only tested forge/reforge as they have a convenient token counter, maybe comfy does it like you say.
>>8617111
>You can separate adetailer face prompts?
no, adetailer has its own syntax for prompt splitting and uses [SEP]
>>8617114
Right, sorry I mixed those up.
>>8617111
If you have multiple faces in the pic you can give them separate adetailer prompts and it goes through them all, left to right I think. It doesn't use regional prompter, just inpaints them one by one.
>>8617086
I have never used BREAK under any circumstances
kusujinn https://files.catbox.moe/46yjxk.safetensors
>>8617150
Thanks? Can't imagine how it'll look, he's gone through like five different styles.
How many images do I need for a character lora?
>>8617154
>>8616468
>>8616451
>>8617157
depends but it can work with as little as like 10
>>8617157
The fewer the better to get a strong style, it just won't generalize well without a varied dataset.
>>8616715
Had the same thing. In my case I was trying to prompt increasingly complex stuff and never noticed the subtle decrease in quality and frying step-by-step. Going back to simpler less convoluted prompts fixed it for me.
https://files.catbox.moe/qqxvkl.png
>>8617188
Did you train this on a merge? It looks completely different on noob. Not in a bad way.
>>8617245
Also I can't tell if "kusujinn" is supposed to be a trigger prompt or it's just picking up the model's existing knowledge from 150 booru pics.
>>8617245
Nyo u can see it's baked on 1.0
>>8617248
It is
>>8617245
Is that a character? If not what is the prompt for that hairstyle? Parted hair, wavy hair?
>>8617251
>It is
working with noob's existing artist tags is inadvisable in my opinion. They are often misaligned and overtrained. It's possible to somewhat fix them with te training, but you're usually better off training a new tag from scratch or training the style into uncond.
>>8617251
Okay, that really brings out the /aco/ face, thanks
>>8617254
Okumura Haru
I thought she was about as mainstream as it gets
>>8617255
Unless they're not misaligned. Then it's a huge benefit and you only need a little bit of training to drive it home.
>>8617266
I haven't played persona 5 nor any persona game because I was never interested. Worse, the mainstream nature of the fifth game made me lean out not in. Maybe I'd like it if I tried it.
Seems like the anon who was finetuning a vae for noob back then is now responsible for neta's lumina 2 bake? https://huggingface.co/heziiiii
>>8616349
Thanks for the tip, I should experiment more with prompt editing. You can probably replicate a lot of controlnet stuff with it.
https://files.catbox.moe/kkzluz.png
>>8617255
I mean I tested it 1:1 and it was better with, meh.
>>8617279
Is this good or bad news?
>>8617302
I dunno, it's interesting, maybe he could give us some cool insider info
>https://files.catbox.moe/deadsd.toml
oh yeah i was gonna post that toml
it converges roughly around the 11-13 epoch but it's safer to keep 15
https://files.catbox.moe/a22hr0.toml
>>8616349
>>8616352
What does [a:b:numerical value] mean?
>>8617370
>What does [a:b:numerical value] mean?
tag "a" is in the prompt for "numerical value" percentage of the steps, after which, tag "b" replaces "a".
i.e. [cat:dog:0.5] - 20 steps
for the first 10 steps, cat exists in the prompt, at step 11, cat is replaced with dog
hope this helps bwo
>>8617378
Oh, perfect, thanks anon!
>>8617356
is it for style or character?
>>8617477
Don't worry I know.
>>8617477
stealth meta bwo
still testing the lora tho
>>8617455
i absolutely hate how this only looks crisp if you don't open it in full resolution
If that's fuzzy then my pics are hot garbage holy shit.
>>8617510
more like it's a vae problem
>>8617512
Which vae are you using?
>>8617510
some prefer their gens mushy and blurry while others want digital art level of sharpness, at this point it's just a matter of taste, (You)r taste
>>8617515
same blurry shit as you most likely
>>8617524
fixFP16ErrorsSDXLLowerMemoryUse_v10?
>>8617527
>fixFP16ErrorsSDXLLowerMemoryUse_v10
isn't that just the fp16 vae
>>8617271
you're not missing out on anything good, it's shit megami tensei for the bottom of the barrel r*dditors
>>8617464
I thought they were supposed to be different
>>8617534
that can happen but nah it works, i baked a style and a chara and it just werks
about to rebake derpixon and skuddbutt out of curiosity
>>8617356
oops i forgot to add a subset with the shuffle captions
>>8617529
vae_trainer_step_90000_1008?
>>8617543
oh ffs and i baked two loras including the kusujinn without it because i forgot
lmaoo
i need to rebake them
>>8617546
here's the fixed one lel https://files.catbox.moe/uvldis.toml
>>8617544
this one?
https://huggingface.co/heziiiii/noob_vae_test/tree/main
>>8617548
Yeah those are the only two weird VAEs I've seen people use. Otherwise I just use SDXL.
>>8617535
Sweet, i'll test, then I need to rebake a character
Okay bakers how much dim should I use? I heard someone say something about overfitting on style with high dim or something. I don't remember this being a thing.
>>8617279
For fucks sake, can someone please tell him to finetune flux's VAE? I don't want lumina 2 to come out with the artifacted, washed out shit that flux's VAE has, I really want the next gen model to be as un-fucked as possible
Please tell him
>>8617651
Post a VAE comparison. However bad Flux's might be, SDXL's is 100x worse.
>>8617657
You can't use SDXL's VAE on lumina, how would that comparison even work
>>8617651
This might be his civit account https://civitai.com/user/li_li/images all I did was search the noob discord for hezi and this name popped up
>>8617657
nta but
>However bad Flux's might be, SDXL's is 100x worse.
not an excuse to not try to mess with it just to get better details, and then a better model. in fact, this is the best moment to do it since lumina 2 isn't finished so there is still time to improve the vae
>>8617661
https://huggingface.co/spaces/rizavelioglu/vae-comparison
>>8617672
Oh that's pretty cool, thanks for showing it to me.
>>8617455
interesting how this one came out with less paper/canvas texture on the outlines, I kinda liked that effect
>>8617647
I use 16
and I used 16 on my last config too
>>8617647
don't remember this being a thing with what? SD1.5?
If so, dims are much different now because they don't exist in a vacuum; their effect is proportionate to the actual model size. A 16 dim lora on SDXL is much more "powerful" than a 16 dim lora on SD1.5, hence more sensitive to frying as larger dim values are used.
You probably don't need such a big lora (could also just train 32 dim, resize down to 16 and see which you like) unless you're training many concepts, at which point just finetune and extract.
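For the resize route sd-scripts ships a helper script, something like this if I remember the flags right:
python networks/resize_lora.py --model lora_32dim.safetensors --save_to lora_16dim.safetensors --new_rank 16 --device cuda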
>>8617631
just remember to shuffle the captions unlike me
>>8617736
>at which point just finetune and extract
Any rentrys for this?
>>8617546
>retrained two of them without it again
am i fucking retarded or something
>>8617797
that's literally me
>>8617810
I like your white hair.
>>8616715
I've been working on getting a mix for a simpler style but something about the faces/bodies just lack the "intensity" to get me going.
I may have conditioned myself to needing prominent toned belly/ribcage/hipbones or else I won't see it as erotic
>>8617820
>but something about the faces/bodies just lack the "intensity" to get me going.
it's called "context"
>>8617848
you're right chief, I should go back to just making unquestionably rape/ryona pics
/hgg/, slopposting yea or nay?
https://files.catbox.moe/s38rup.png
>>8617909
as long as you don't spam it should be fine, post whatever you want just don't overdo it
>>8617820
Idk just seeing my cock slide into her wet pussy in pov always gets me. Bonus points if it's a waifu from one of my character cards.
Can someone help me bake this lora? I'm not sure what I'm doing wrong.
https://litter.catbox.moe/o0vbte3jla83xfx1.rar
>>8617944
>these two feet shots
https://files.catbox.moe/flhv1q.gif
but if you mean for a style lora, give it some tag, and prune all those style descriptors like "anime coloring" because it just dilutes the output
>>8617947
No I don't even care about feet but that anime has really good nails for both hands and feet which I wanted for the lora. I had added those tags because I thought it helped but I guess not? I'll try training it for longer.
>>8617949
i think styles generally do need "trigger" tags for "concentrating" them, never had much luck post 1.5 without them
and yeah those tags would just dilute that baked in tag and pull on the pretty strong inbuilt knowledge
i can bake it tomorrow out of curiosity, just say what trigger tag you want
>>8617953
>i think styles generally do need "trigger" tags for "concentrating" them
Oh yeah? What I noticed back in the pony days was that some datasets needed them but others didn't. However I haven't had much issue with styles at all until I attempted this and I've been having trouble with it for weeks.
>just say what trigger tag you want
It doesn't really matter, I just need a proof of concept and metadata. Some faggot (who's probably here now) was showing off his lora but when he posted the lora he cut out the metadata so I can't even see what he did.
https://files.catbox.moe/z4p83a.png
>>8617738
oh yeah good call
no noise offset tho?
hello, I've been using NoobAI-XL-Vpred-v1.0+v29b-v2-perpendicular-cyberfixv2 as my daily driver, snake oil be damned, is there anything "better" that has been released since or maybe a paradigm shift that popped up overnight that I missed? Thank you.
>>8618044
Is that the one with the shit backgrounds? We've all moved on to 102d custom bro.
https://civitai.com/models/1201815?modelVersionId=1491533
>he recently released 2.5d boost
Interesting.
>>8618044
>NoobAI-XL-Vpred-v1.0+v29b-v2-perpendicular-cyberfixv2 as my daily driver
it's not that bad, you don't need any snake oil with that one, it does suffer from very noticeable artifacting with some styles tho
As >>8618045 posted, we use 102d custom nowadays for a better and easier time
>>8618045
>>8618048
Thank you, I'll try this model out.
>>8618052
the euler cfg++ kl optimal 28-32 step @ 1.5cfg settings they suggest give pretty neat outputs for hires pass, just don't use rescale cfg with that if you try it
Artist mix is settling out nicely. Though it has a tendency to make girls cute and funny, I guess I did sign up for that with the artists I chose and it's not like I have reason to post that often
>>8618117
Nai is looking good
>>8618138
luv me anime pubes
>>8618165
fluffy pubes get a pass
tufts and fuzz are acceptable, it's those overexaggerated pussies with flaps and individual strands going everywhere, trying so hard to be "realistic" that they shoot right past it, where it gets appalling
>>8618193
I don't provide metadata so you just gotta believe me bwo (I guess I base gen at 896x1152 too if that even matters)
>>8618048
Cute Vibes. Cute Lize.
>>8618045
>Interesting.
>This 2.5d boost model provides a model that deviates from flat 2D to a slightly 2.5D orientation.
Why do they do this? Reminds me of chromayume, which, for whatever it's worth, started off as flat and then ventured into 2.5 slop.
>>8618270
hey look, two cakes
it's not much effort to just merge in more custom udon for another model. variety is the spice of life
>>8617700
>paper/canvas texture on the outlines
that effect was due to mistagging (lack thereof) motion line tags on some images in the set with finer, less noticeable, motion lines drawn near the outlines.
missed out on tagging a couple again which were in another folder, so back to the kitchen with this
>>8618317
i like that the penises are small
>>8618208
Feels like wagashi but more generic.
>>8618330
yeah, it indeed started out with trying to find settings to make wagashi + a wagashi lora play nice with noob. For some reason, my setup would always fry body parts with it
>>8618320
b-bro that's average...
>>8618363
oh nyo nyo nyo nyoooooooooooooooooooooo
what's the difference between this and hdg? I haven't been here in months, where do I post sex gens?
>>8618396
if you have to ask, you're meant for /hdg/
>>8618396
This one's constantly near-death, so shitposters see it as too much effort for no payoff. Not worth it imo, now I'm stuck having to refresh five threads instead of 4.
>>8618396
less shitposting, less gens and more technical talk
>>8618403
I just wanna use hentai gens to make e-girls perform things they can't or won't in real life.
like nigri, lyumos, katz and other cosplayers getting fucked out by tentacles.
it's challenging indeed for brainlet coomers like me.
>>8618412
>3dpd
may your journey to other boards be swift and final
>>8618396
Until a janny decides to start enforcing the shitposting rules on /hdg/ it's unusable to me. Maybe 1 out of 10-15 posts is genuine these days, if you exclude the spam. This thread might be slower, but it has 100 times less of the cancer that's now in /hdg/.
>>8618396
>this bait again
On the 1% chance that you're serious, just look at the previous /hdg/ thread
This worked surprisingly well. Although it does have a bit of a darker bias because of the 1st ep.
skuddbutt https://files.catbox.moe/nivhgd.safetensors
Ever since anons suggested to run without negatives my gens have noticeably improved on Noob models, so I'm pretty convinced on that front. But what about quality tags? I see some anons using [<quality tags>:x] or even [<quality tags>::y]. Anyone experimented with what works best?
"curvy" tag my beloved
>>8618438
I found them to have little to no actual effect. Maybe it has some meaning if you run no artist base model but like... why?
>>8618441
No effect? They have a crazy strong effect even when using artist tags for me on chromayumeNoobaiXLNAI_v40.
>>8617963
>when he posted the lora he cut out the metadata
I guess you're talking about my basedbinkie lora? I can give you the toml if you want but anon's config here:
>>8617547 is probably better, I was gonna use it myself for my next lora
Reminder to never leave your prompts with a hanging tag. Always have a comma at the end.
>>8618451
wait why? is not having a comma at the end that impactful?
>>8618447
Chromayume is pretty close to base model.
Well, I guess I meant positive effect. Even in your pics they kinda screw up the look and anatomy imho. When I was testing it myself it was pretty minimal.
Rebaked Kusujinn
https://files.catbox.moe/ridycu.safetensors
>>8618460
as for the difference between shuffling captions or not, the rightmost one was not shuffled
it's pretty minimal but maybe the flatness of the shuffled ones corresponds to the style a bit more? who knows at this point
https://files.catbox.moe/cyhpct.png
>>8618438
It's not that negs are bad, although maybe they are. It's that there's a technical quirk with noobAI, where just having anything at all in negs causes quality degradation versus leaving them completely empty. Even if it's just a single letter, or an underscore. The effect does partially go away with merges but it's easily visible on base noob.
>>8618463
It's not NoobAI as much as reForge. Pony can get a similar effect. I think people just didn't see because everyone needed negs back then. Or maybe the dude changed the backend between Pony and Noob because it is caused by how it processes the uncond.
>>8618466
Made me load up Pony again and check. It also goes away with merges, and everyone used source_pony in negs so I guess we never noticed.
Thing is A11/Forge/reForge pass the uncond as empty, instead of encoding an empty string and passing the output of that.
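A diffusers-style sketch of the difference, for anyone who wants to poke at it (function names here are illustrative, not any UI's actual code):

```python
import torch

def uncond_as_zeros(prompt_embeds: torch.Tensor) -> torch.Tensor:
    # "pass the uncond as empty": just an all-zero embedding tensor
    return torch.zeros_like(prompt_embeds)

def uncond_from_empty_string(tokenizer, text_encoder) -> torch.Tensor:
    # actually encoding "": BOS/EOS plus padding tokens still carry
    # information, so this is a different tensor from plain zeros
    ids = tokenizer("", padding="max_length",
                    max_length=tokenizer.model_max_length,
                    return_tensors="pt").input_ids
    return text_encoder(ids)[0]
```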
>>8618474
>Thing is A11/Forge/reForge pass the uncond as empty, instead of encoding an empty string and passing the output of that.
Then, why does an empty uncond look better on SDXL models?
>>8618493
we just don't know.gif
>>8618493
It would be kinda funny if there was some other weird quirk that was handicapping outputs. The reForge/Comfy/Classic output differences show that it's not always the exact same thing depending on how you run the model.
>>8618208
>I don't provide metadata
Why are you proud of this?
>>8618516
seethe prompt thief
>>8618450
Did you make the lora for the pic I posted? It's not really about the config, although that matters, it's about the number of pics used, the tags used, etc. Having all the information is best but since the dataset is 60% of what makes a good lora, I'm still pretty much in the dark without any idea of how he did it. Also:
>>8617944
>>8617944
>>8617944
Any takers? It's a style lora that's already tagged. I thought this place had at least 10 bakers lurking around.
derpixon https://files.catbox.moe/9ws6sb.safetensors
i tried tagging the characters and herzha forms but it doesn't really want to work
>>8618531
i have it baked i need to test it
>>8618524
I'm seething that /hgg/ has started tolerating retarded attentionwhores, not that I care about the metadata for artists I'm already using.
>>8618516
Consider post natal self abortion. Nobody owes you shit, mouthbreather
>>8618537
Go back faggot. I don't want you shitting up /hgg/ too. We're not your xitter fanclub.
>>8618539
just quickly post some tired bait in the other thread and he'll be occupied for a while
>>8618531
>Did you make the lora for the pic I posted?
Nope, in that case I have no idea what you're talking about
>>8618539
Fuck yourself you worthless cunt. If you don't like people posting gens that's your problem. Again, consider suicide you fucking dipshit.
>avatarfagging doesn't exist
>desperately trying to look cool in front of strangers isn't a thing
>don't mind me I'm just posting gens
i like asking for boxes bwos
sometimes i find some nice artists to train a lora for
>>8618550
it's one of few things this general is good for still
>>8617547
>RuntimeError: quantile() input tensor must be either float or double dtype
just errors out for me on the fork lol
>>8618560
eh i kinda like the technical discussions
but yeah sharing boxes is nice and comfy, who cares about the grifters
>>8618573
If you don't want to share it's fine, but then why post here? Just keep it to yourself and enjoy your super secret recipe. In reality every other intelligent person from the SDXL creators to the forge/comfy coders could have said "I don't owe you anything" and used the tech privately but they didn't, yet your insignificant contribution is the thing that belongs to you? Retarded ladder pushers should in fact be shamed.
>>8618343
Did some tests and all the outputs were fried, so idk what was wrong with mine
>>8618577
wrong anon bwo, i do share... all my gens have stealth like
>>8618550
>YOU MUST SHARE OR DONT POST AT ALL
what the hell? lmao
Yes, still not sharing metadata of my insignificant gens. If they're so insignificant why do you care
>>8618580
I was speaking generally not to you specifically.
>>8618582
>>8618583
Yes faggot. Why do you have no argument? Because your position is completely indefensible.
bros why are the hdg rapefugees still here the thread got rebaked
and i know it's you because you posted the same style in hdg too
>begger thinks he's contributing by whining and shaming image posters
it's time to go back
>attentionwhore thinks he's contributing by posting pictures where he's not wanted
Go back. /hdg/ is the perfect place for you. You don't need to be here.
ywnbaj, this isn't your thread chud. you can't police what people post
>>8618531
retagged and used kotonoha for a trigger https://files.catbox.moe/n125j8.safetensors
i don't know if this is what you want but it is a style lol
i think ai always smooths out the bloomy atmosphere if that makes sense
>>8618598
Can you post a catbox so I can post a comparison pic? Thanks for baking it btw.
>>8618601
https://files.catbox.moe/sprs7g.png
>if you post a picture without metadata youre attentionwhoring
rofl
Any other anons obsessed with horror?
https://files.catbox.moe/tnmkhn.png
>>8618623
Not obsessed nor do I really like horror but I do like to gen unsettling gens from time to time
>>8618598
>>8618609
I had to change some stuff because the lighting wasn't coming out good for some reason. Either way I think these pics showcase the differences.
This one is the original lora. You can see how the overall style is very close to the screencaps but also how detailed the desks/walls/window are. The buildings too. However his lora always has that backlighting/side lighting effect on all pics which makes me think he just used like 20 pics he found on a wallpaper website and called it a day.
https://files.catbox.moe/l1ljla.png
This one is yours which is close but doesn't quite have the detail of his, especially with the skin and the way the desk/window/buildings look.
https://files.catbox.moe/blrns0.png
This one is mine. Idk why it's doing this lighting thing. The desk/window is a bit closer than yours to the screencaps but mine looks overbaked. I don't understand. My config works for 90% of datasets but now I'm repeatedly having issues.
https://files.catbox.moe/oiq1v5.png
Is this your config?
>>8617547
>>8618623
Oh thank you for reminding me. I enjoy horror gens with nakamura regura but yeah I should make some more of them.
>>8618634
>Is this your config
ye
>>8618634
I wouldn't be surprised if he overbaked on some shitmerge and that somehow made it better
I know the old N64 lora that was bretty kino was baked on AOM3a or something
https://files.catbox.moe/c4v4oe.png
>>8618629
I think stumbling upon a The Ring porn parody when I was young was what did me in.
>>8618637
Regura is nice, should make another mix with them included. Karasu raven also makes some real nice monster girls, but they desperately need a lora for noob.
>>8618643
Hmm I guess this is worth a try. He was using this model for his gens.
https://civitai.com/models/1442151?modelVersionId=1732221
the urge to bake a lora on those weird ass old mugen hentai animations
oh yeah i also gotta retry that cursed game cg lora on this config
What's the difference between Hentai Generation General and Hentai Diffusion General
>>8618663
waow, the 2006 was awesome!!! https://files.catbox.moe/cadhtw.jpg
>>8618673
the middle name
>>8618623
>be looking at scraped cgs
>stumble upon this https://files.catbox.moe/sfqyrf.jpg
uh i think you'd like this
>>8618692
So uhh... I guess that's semen on the walls then?
>>8618462
catbox for some of the images pls? i can't get the exact same style as yours with the kusujinn lora
Fugtrup https://files.catbox.moe/an57lq.safetensors
It can kinda work natively but it's better with
>>8618730
Semen On The Walls is my new band name
>>8618740
Don't have those but I prompted it like this https://files.catbox.moe/0bpfaq.png
>>8618516
not necessarily proud of it and was just preemptively addressing it if it was thinly veiled metadata bait. I'm not sure where you'd get the idea in the first place besides projecting/boogeyman but I can make up many reasons to not provide metadata
1) Spent hours figuring out what slight modifications to add to mix to get it to play nice and I'd prefer to not see my efforts end up being used in questionable subject matter by someone with what I would consider abhorrent tastes.
2) Workflow is incredibly schizo and identifiably mine. Ironically enough this reason is because I specifically want less attention because it's low hanging fruit to pick at that could easily follow me across styles if I always posted metadata.
3a) To spite you specifically
3b) It's closing in on a plausibly deniable style concerning which artists it's copying from, so it's a candidate for maybe reviving my xitter/pixiv accounts :^)
4) Mercury is almost in retrograde
kek so it was to shill xis twitter
don't worry skinny-tits-from-above-kun, your gens are already extremely identifiable even if you don't post any workflows :)
>>8618771
My point is that a general should be a collaborative environment and there is no collaboration without sharing info whether that's pic metadata, lora metadata, configs, controlnet settings, etc. Outside of the social aspect, I don't see any purpose in having a general. We don't have themes, contests, challenges, requests, or anything else so that just leaves the typical attention whoring you'd see on /trash/, doubly so if you're posting porn of all things. Regardless I don't want to shit up the thread more than I already have but at least you're reasonable.
>>8618784
Uh no, seethe more style thief
>>8618784
Yeah that's a fair place to be coming from. I figure if there was a sanitized /b/ thread without all the wack shit/toddlercon, I'd be inclined to post there.
Mostly I've just been throwing in 2cents @ the lora training stuff here since I have a basic understanding of the underlying math and may have something to add beyond "empirically, this is observed" like the discussion over dropout. I figured I'd post some images to add activity to the thread since the most common comment concerning the thread is that it's "too slow"
>>8618782
>skinny-tits-from-above-kun
underboobless-kun ;)
https://files.catbox.moe/ccpe58.png
>>8618692
Heh, funny since I did try to gen some fatal frame girls, sadly they don't work natively. I tried making a shiragiku lora that never turned out well, but maybe I should try to rebake.
>>8618782
>>8618801
valid even if potentially inorganic, I wonder if something I'm using is overfit for that.
I put bouncing breasts in the negatives earlier because something was overfit for breasts looking like there's unnatural pressure on them in an earlier version, and this reminded me to remove it at least, thanks
>>8618663
i still don't think i can get it to work lmao
>>8618827
Yeah it's bullshit but the civitbros might have discovered something that I need to test.
>>8618771
just curious but is this a wagashi with worldsaboten mix?
the hair highlights and lineart remind me of wagashi but the eyes and face in general remind me of that worldsaboten lora
did you know when you're training with batch size below 32 you're fucking up every other batchnorm layer
>>8618623
dark theme, bleak ambience, dark persona, evil smile is a powerful combination
>>8618835
funnily enough, I also thought I saw cactusman in it so I added the lora at one point and it instantly made it worse, so it's not in anything I've posted.
eyes in that one are from the kindatsu lora on civitai at low weight interacting with the other artists I'm using
>>8618834
well i am the guy who made that config
the dataset is just cursed
why are zoomers so obsessed with fish? are catgirls seen as a boomer thing now and this is their attempt at counter culture?
weird windmill to fight desu
Ellen Joe did nothing wrong.
don't think it's a coincidence there were two highly visible, highly produced sharks in the past few years; the latter may be caused by the former via the usual trend chasing
zoomers would've bought into any theme'd girl if it garnered enough social media attention
>>8618879what about the kraut orca?
>>8618771>3a) To spite you specificallybased
>>8618867they should like crocs instead
Anyone know if there's a tag for two-tone clothing where the front and back are different colors, rather than, say, the bottom being different from the top or the colors being in a more striped pattern? In biology this happens and is called "countershading", which does exist as a tag but without many examples. Is there a term for it in clothing?
>>8618652Funny that. I like to test rando shitmixes and recently tried the v3 one of this. Wasn't impressed.
>>8618747>FugtrupI'll give this one a try but feel fugtrup stuff works best with Pony or those 2.5/3D focused noob shitmixes.
>>8618523She is cute. Do some non-/h/ with her sometime
>>8619004All 2hoes are whores, I can't really picture myself doing something non-h with any of them
Does base sd-scripts have any annealing schedules?
>>8619010The internets has damaged your mind.
>>8619022The real reason is that I don't think the internet needs more 2hoe images, I would rather do more of my cute obscure gacha wives tbqh
>>8619023OK that's a good one.
so the adafactor finetune config on the rentry leaves me with around 5gb of spare vram. How do i snakeoilmaxx with that?
>>8618747>>8618973I thought fugtrup works inherently?
>>8619023>I would rather do more of my cute obscure gacha wivesUnfathomably based.
>>8619052Nah. You can kinda prompt engineer a bit with tags like 3d, realistic, blender, etc but unless your model is already slopped, it's hard to replicate faithfully.
>>8619023the true worth of AI gen
found out someone did a good train for my VN waifu on civitai with all outfits god bless
>>8619023>The real reason is that I don't think the internet needs more 2hoe imagesthis is loser talk.
you are a loser.
the internet needs more 2hu not less.
>>8619052>>>>It can kinda work natively but it's better with
>>8619023What about not-so-obscure Vtuubas?
>>8619111Those as well of course
>>8619111>not .gifmissed opportunity
fun
not something i'm gonna use every day but fun
So like, does anyone have a sense for why exactly the model doesn't simply just perfectly know how to render anal_tail? It can definitely do it, but sometimes it wants to make the tail more of a plug or vibrator, sometimes it doesn't even place one, sometimes you get one tail and one object in the anus instead of them being the same thing. There should be way more than enough samples in the dataset. Is this just a really hard concept to get for the current amount of parameters?
>assless panties have 700 posts on danbooru
>surely it must work
>prompt for it
>it works but makes the character topless
>prompt for the character's clothing explicitly
>the panties turn into normal panties
It's all so tiresome.
>>8619158have you tried "backless panties" instead?
>>8619158Meant backless panties not assless. It's an alias so I was thinking it while making my post.
Ohhh is *that* who this shitposter is. I should have known.
schizo gen is over there if you're into pointlessly prolonging that kind of activity
what is lil bro talking about
Why are we so dead tonight, /hdg/?
/hdg/?
and it's the weekend, go outside
>>8619271no more than usual
if you want action go to the shitposting thread
>>8618550I assume you are the fabled bwoposter I was referred to, could you help?
>>8619527 Thank you.
>>8619038increase batch size or train text encoder
>>8619579hi bwo, will post it in a day or two, currently baking so i've not got the vram to make previews
>>8619640Can't wait, thanks again 'bwo.
>>8618048>102d customCould u suggest other models that are equally good as this but for realistic (dandon fuga, sakimichan, zumi) and 3d (fugtrup, slash-soft) style?
>>8619688I wouldn't say custom is that bad at them, picrel
A lot of styles just need a higher base res and/or a lora to get them fully correct, that goes for any artist
>>8619605neither of these really increases quality and it converges fast enough already.
blackpill on this? no jeets as authors for once.
>>8619699the inputs become way too noisy, batchnorm layers will fit to your dataset extremely quickly. actually, they will even if you use large bs, so the best solution is to freeze the batchnorm layers, this way you even you would even have more free vram
>>8619697It doesn't really do anything good for the model, at least on my end.
>>8619707>this way you would even have more free vram
shit
>>8619697>the virgin wavelet
>>8619707>best solution is to freeze the batchnorm layersso is there an argument for this or do i have to go on a vibecoding adventure? are batchnorm layers included in the usual train norm?
>actually listening to 4chan advice without images
ishiggydiggy
>>8619711some trainers may let you select the trainable layers, but i don't think it's implemented in sd-scripts, or at least in the finetune script
you can try hacking it into https://github.com/kohya-ss/sd-scripts/blob/a21b6a917e8ca2d0392f5861da2dddb510e389ad/sdxl_train.py#L52
>>8619714worst case I waste some electricity, best case I get better loras
Fuck my lora didn't bake last night. How do I install both python 3.9 and 3.10? Installing one always breaks the other.
>>8619720Getting uv is the easiest.
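for the two-pythons problem specifically, something like this should do it (from memory, double-check against uv's docs):
>uv python install 3.9 3.10
>uv venv --python 3.9 venv39
>uv venv --python 3.10 venv310
each venv carries its own interpreter, so the two installs can't break each other.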
>>8619718So I didn't find any references to batch norm in the unet library and gpt tells me that the groupnorm32 layers that are used instead aren't dependent on batch size.
>>8618840i dont think sdxl uses batchnorm since their vae uses groupnorm
>>8619727>>8619729>groupnorm32 layers that are used instead aren't dependent on batch size.yeah that seems to be the case, actually. sdxl uses groupnorm
i swear i've seen batchnorm somewhere in sd though, maybe it was during early lora development back in sd 1 days..?
>>8619720you'll need a virtual environment.
https://www.freecodecamp.org/news/how-to-setup-virtual-environments-in-python/
https://files.catbox.moe/sd0srl.png
>>8618841dark theme and bleak ambience are working tags? I've used black theme before but never dark theme. I usually only use the "horror (theme)" + "dark" combination.
should I just not bother with activation tags for full finetunes? it looks like most of the style makes it into uncond anyway
>>8619813Would you say that all style loras need activation tags or only some of them?
>>8619814I feel like it's mostly a preference thing and you can get good results with both, regardless of dataset.
>>8619815I feel like some datasets need activation tags to work while others are fine without one. It's all a black box which was my attempt to answer your question.
>>8619816been my experience too
I think it depends on whether the model already recognizes similar styles or not
>>8619790e621 tags, bleak ambience doesn't seem to have many images, so it may not work well
i personally use theme since I'm specifically trying not to get the effects horror gives, opting for more of a "good girl acting aggressive" lean. there seems to be dark aura too from danbooru, which I might try
>>8619711>>8619729>>8619733can confirm that I looked into a model's state dict out of curiosity just now and could not find any running_mean/running_var as would be expected from a pytorch batchnorm layer
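if anyone wants to check for themselves, it's a few lines (a sketch - the checkpoint filename here is just an example, point it at whatever model you have):
from safetensors import safe_open

# batchnorm layers register running_mean/running_var buffers; groupnorm has neither
with safe_open("noobaiXLVpred10.safetensors", framework="pt") as f:
    bn_keys = [k for k in f.keys() if "running_mean" in k or "running_var" in k]
print(bn_keys or "no batchnorm buffers found")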
>lora hell
I should have stuck to my old config...
certain "people" here should put a cannon to their head
How do you add vpred keys to a model again? I forgor :skull:
>>8620113If the .py I saved back when noob vpred was still new is correct:
from safetensors.torch import load_file, save_file
import torch

# load the checkpoint's weights as a plain dict
state_dict = load_file("foo.safetensors")
# the keys just need to exist; empty tensors work as markers that inference UIs check for
state_dict['v_pred'] = torch.tensor([])
state_dict['ztsnr'] = torch.tensor([])
save_file(state_dict, "bar.safetensors")
https://files.catbox.moe/uosrqa.png
>>8619903Makes sense, but are you sure dark "theme" does anything different from "dark"? Might as well save some tokens. Dark aura is usually purple/black glow around a character.
>>8620140i honestly don't really know or care if it doesn't have much difference, since it gets the job done when used with dark persona/evil smile for my purposes. it's an e621 tag that autocomplete gives that I just take.
dark/night tags are usually what I also reach for in combination when I do low light settings and there are loras/color grading techniques if I ever wanted more (I usually don't, as I like skin color rather than everything ending up dark blue)
>>8620140>>8620152also, I don't believe in "saving" tokens being a worthwhile effort for the most part. The TE will get what it gets, and I've always doubted that the range of outputs will be that much different because of the number of tokens, as long as the prompt's general meaning is within the same ballpark.
I just don't think it's that sensitive in the underlying math (CLIP input -> latent space mappings) and that most people got psyop'd into caring too much about it during the early phase, where they lost their minds over calling it "prompt engineering" with the mental framing that came along with it
>tag doesn't get generated in every image
>up the weight
>now other tags get fucked over
>give them weight
>still other things get fucked
>download a lora
>it interferes with some parts of the image, lowering the weight makes it interfere less but also work less effectively
>there is no solution that doesn't fuck something else up or demand more manual labor (in the form of inpainting, or browsing through dozens of gens for the perfect cherrypick)
God.
Is there any hope for a new good model on the horizon?
>>8620176(Dark:1.2) doesn't work for me either
>>8620176Sounds like skill issue to be honest. But you could try raising CFG, it was meant for these cases originally before anime finetunes turned it into a blur/burn slider.
>>8620179It's actually a dangerous tag on noob v-pred, with how well it works. But merges fuck up lighting very quickly.
If I add anything besides the trigger word to the prompt it comes out deformed. Did I overtrain? For reference
Scheduler: cosine with restarts
Lr cycles:3
Lr rate: 1e-4
Unet lr:1e-4
3000 steps
text encoder:0
Alpha=Dim
Adam8bit
>>8620179Doing a different thing. IIRC dark did work for me on normal vpred when I tried it in the past. It's totally possible it gets fucked on a merge.
>>8620180My CFG is already higher than normal. At this point I've tried everything except the snake oils. You say it's a skill issue but it's a known architecture issue that the more stuff you try to pack into an image, even if it all makes sense and there aren't conflicting tags, the more the model will simply choke. It likely doesn't help that SDXL has the 75 token limit and does the chunking concatenation hack.
>>8620189>3000 stepsfor what size dataset
>>8620189>Lr cycles:3KEEEEEK
>>8620189alpha should be twice dim?
>>8620189The original SDXL guides and our early Pony bakes used this LR with alpha=half dim, and 2K steps. By that logic, yes you did. But without examples and metadata it's just a wild guess.
>>8620200The original purpose of alpha was that it would always be lower than dim, at most equal. That training tools even allowed a higher value was lazy on their part.
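For anyone lost in the alpha talk, the way it enters the math: the lora update is applied as W' = W + (alpha / dim) * B @ A, so alpha = dim is a scale of 1.0, alpha = dim/2 halves the effective update (roughly like halving the unet LR), and alpha above dim amplifies it. That's why "alpha should be twice dim" is backwards from the original intent.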
>>86201893k steps has always been fine for me but yes post a picture. I notice you didn't post batch size tho.
>>862018960 img
50 reg img
>>8620196was told it affects the scheduler, not overall learning rate.
Now I feel kind of stupid :{
is there any way to fit pagedadamw8bit into 24gb without fullbf16?
just set alpha=1 and let rngsus take the wheel
if you arent using edm2 loss you dont know shit about lora training
>>8620211Batch size:2
and here's one
https://files.catbox.moe/ckdru0.png
>>8620236post one of the deformed ones
>>8620242https://files.catbox.moe/wcleqe.png
>>8620249And you tagged all her features and stuff? If so then yeah just lower the steps.
>>8620236Isn't batch size 2 basically like doubling your step count? That's how I treat it anyway.
>>8620258I removed tags that are in every picture like hairstyle and glasses so it's tied to the trigger word.
And I'll try 2k steps with one lr cycle and lr 1e-5.
>>8620269So you never want to take her glasses off?
>>8620269anon why aren't you keeping periodic epoch/step saves
>>8620263It's not really anything, if anything it's halving, since you gotta bump the LR a bit.
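back-of-envelope for the config above, for what it's worth: 3000 steps at batch 2 is 6000 samples seen, and with 60 images (+50 reg, if those count toward an epoch) that's roughly 55 passes over the data at 1e-4, which on its own would explain the frying.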
>>8620276nope
>>8620284I trained 5 epochs and I tried all of them. they had the same issues. I'll just have to try again
New config, https://files.catbox.moe/kg3ivs.toml what do you think?
>>8620323Don't use this, it fries your GPU
>>8620323I thought anon said above 1024 resolution caused issues? Does this work for you?
>DORA?
>>8620339you can go a little higher without issue, but there won't be any benefits in detail. I just have it set to 1152*1152 here to somewhat combat the over sharpening that lanczos causes when downscaling very large images.
Why do artist mixes sometimes make those weird ass fucking gremlin things in the background
>>8620376less common than you think since I have no idea what you're talking about
post gen
>>8620378sometimes i get this typa shit and it's always in mixes lmao https://files.catbox.moe/5m0iyi.png
>>8620379yeah never seen that before
try prompting your artists like artist:zankuro to avoid their names leaking into something else
Is 102d better than vpred10? I tried it out and feel like it's a bit more coherent but also less capable of some art styles and concepts.
>>8620387It's a shitmix, so it will be more stable and will generate more "aesthetic" (i.e. slopped up) images at the expense of being able to replicate styles.
>>8620387Sounds about right. Like every merge, it dilutes the base model's knowledge somewhat in exchange for a style bias. As long as that bias doesn't conflict with what you're trying to do and you aren't relying on 100-pic danbooru tag knowledge, it's pretty good.
Trying to fix hands so I'm using meshgraph hand refiner but I keep getting
>ModuleNotFoundError: No module named 'mediapipe'
I installed it with pip install mediapipe --user in my comfyui folder and it still gives me the error after a restart. Any idea how I can fix this?
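Could the problem be pip installing into the system python instead of comfy's embedded one? If it's the portable build, I'd guess the right target is something like
.\python_embeded\python.exe -m pip install mediapipe
(the folder really is spelled "embeded"), but I'm not sure.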
anyone know how i can get a plain text of all booru artist and character tags?
>>8620428Ask our ai overlords to write you a script that scrapes them from the API
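something in this shape if you want to skip the middleman (a sketch from memory of the danbooru api - category 1 is artist, 4 is character; the output filename is made up):
import time
import requests

tags = []
for category in (1, 4):  # 1 = artist, 4 = character
    page = 1
    while True:
        r = requests.get(
            "https://danbooru.donmai.us/tags.json",
            params={"search[category]": category, "limit": 1000, "page": page},
        )
        batch = r.json()
        if not batch:
            break
        tags.extend(t["name"] for t in batch)
        page += 1
        time.sleep(1)  # anonymous api access is rate limited, don't hammer it

with open("booru_tags.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(tags))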
>>8620430wont i get blocked from too many api calls?
>>8620424>Trying to fix hands so I'm using meshgraph hand refiner but I keep gettingThe fuck is this? some comfyui node or something?
>>8620434Yes. Do you guys use something else to fix hands?
>>8620443Yeah I use cyberfix/wai/102d. Otherwise I just inpaint sketch.
>>8620387Give 291h a shot. I flip between it and 102dcustom.
>>8619645hi bwos, posted the bakes
RadishKek: civitai.com/models/1662074
Aza/Manglifer: civitai.com/models/1662450
>>8620497>sdxl_vae_mod_adapttest_01.safetensorshuh, what's this?
>>8620452Best inpainting tutorial?
>>8620501its a modified sdxl vae from an anon in /hdg/
>>8618360its a little sharper than the noob vae and doesn't have the slight blue tint of xlvaec_c0.
>>8620505hmm, did you train those loras with it? sounds like it shouldn't work otherwise, or at least it needs comfy and anon's custom node to work
>>8620502https://rentry.org/fluffscaler-inpaint
>>8620513nope i did not, i just tried using it for genning and kinda liked the effect.
it gave me the same results as the c0 vae but without the tint, so i'm happy with using it as-is
anyone here still have oekaki anon's
>slantedsouichirousep26-step00000132.safetensors
lora?
I accidentally deleted mine a while back. tried looking in the rentry and archives but couldn't find it
what is lil bro vibin' about :skull: :skull: :thinkingemoji:
>>8620497nice bwo,
by any chance would you be willing to share a "fail"/overbake of the aza loras? Wanted to see what outputs I could get from one of them
>>8620505if it works for you thats fine but im still trying things out and wouldnt recommend using it, the trained model actually produced worse results now that i tested it more in enc+dec (very blurry rather than overcontrasted), and its only the encoder that i trained so it should actually be the same as original sdxl vae when used for just making gens
>>8620570hi bwo, do you have any specific version in mind? i'm not sure if i've retained the original fails (the more recent ones might still be in the recycle bin, but i'm not sure)
>>8620585alright, thanks for the heads up.
i only tested it against the noob vae and the c0 one i was using. Found it sharper than the noob vae (might just be that the noob vae is ever so slightly blurrier, will have to test against the original sdxl vae as a control). looking forward to the results of your vae training experiment!
>>8620599I think I liked how
>>8615606 seemed to turn out. If there are ones with more steps from that attempt, that would be cool instead too.
Also, as an aside observation on the lora, it's hilariously capable at getting legible english text moans to come out.
>>8620599>slightly blurrierthats probably the case because the one tuned decoder vae i tried from civit clearly didnt use a perceptual loss like lpips and was blurrier
my idea might not go anywhere, mainly wanted to demonstrate that there might be a way to upgrade from the 8x compressed vae, to a 4x compressed one without that much training
encoder needs to be adapted, decoder also shits itself a little for some reason with having its 2x upscaling removed (the vae actually outputs coherent stuff even without training it, just a little artifacted), and there needs to be a hopefully minor finetune for the higher res since it's the equivalent of generating at 2x higher res than usual
Can a kind anon prompt a blowjob where you're sitting in the car (maybe driving) and the girl leans over from the side to give you a blowjob? Can't seem to figure this out.
>>8620532apparently even I don't have it anymore (and I nuked it off mediafire since it's a standard lyco and I don't think it was actually super great to begin with). And I never really tried doing a fatter/more "recent" config run of it since I still had PTSD from running that dataset on pdxl.
I did still have the config and I'm pretty sure the dataset has been untouched since then, so here's a (sort of) reprint
pixeldrain com/u/quBr9bVC
if someone else has the original still that one is technically still going to be different due to the whole process being random because ML is fun like that, but I did at least check through a few output steps and 154 was (still) the best, though it's also a bit fried and bleeding at the edges. though that's probably fine-ish when used as a mix at lower weight and/or on derivative models instead of illustrious 0.1
>>8620428Get one of the csv that people already scraped like from the autofill extension and then ask gpt to get you a conversion command.
I posted an updated one more fit for Noob, either here or in hdg, I don't remember.
>>8620622I think that would imply you can get a good car interior pov in the first place
t. tried
It's probably somewhat bakeable though
>>8620769>managed to get a whole 9 pics for a datasetyeah i don't think so
>>8620794isn't this the perfect opportunity for that difference learning lora meme https://github.com/hako-mikan/sd-webui-traintrain
>>8620769Even if it's not car interior. Just sitting on the couch while the girl sucks you off from the side.
Detail daemon is goated
https://files.catbox.moe/8xz5g8.png
>>8620609sorry bwo, it seems that i have already discarded the earlier failbakes; don't really have anything more than the current version. from my test logs about that version, there wasn't a better step past 1100; the losses were fairly spread out into 2 groups, and the next minimum at 1900 did have good style but also had fried eyes. (resolved by adding a set of face crops in subsequent versions)
>hilariously capable at getting legible english text moans
there's a decent number of images with english and korean text in the dataset - it is kinda funny when it happens
>>8621099grab me a can of โy when you're done, nee-chan
Anyone know a way to get midget sized subjects? Not loli, and not shortstack, just a small person, though prompting loli honestly doesn't seem to help either. It feels like the model just has a poor sense of how to size characters relative to the environment.
>>8621134Try some variations of [chibi::x] (I assume you're using webui, if not there's probably a comfy node that does the same prompt edit) where x is the number of steps. The idea is you want to lock in a midget-shaped human blob before it starts trying to apply anatomy to it.
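e.g. [chibi::8] on a ~28-step gen keeps "chibi" in the prompt for the first 8 steps and then drops it, so the small proportions get locked into the composition before the model starts rendering real anatomy on top.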
>>8621134find an artist that does it
>>8620622It seems gachable enough depending on how accurate you want the steering wheel, dashboard and windshield to be. Best bet is probably to gacha for something like picrel and then go fish in img2img for a better version and maybe inpaint the rest.
>>8621134perhaps try goblin but without pointy ears and green skin
>>8621134Tag should be "petite" according to the wiki. First thing to try when a prompt doesn't work is to go back to noob v-pred 1.0, see if it's your loras or shitmerge doing it. So I did, and it didn't help at all. Also interesting bias on the quality tags, style prompt was (anime screenshot:0.1)
Flux or Chroma can do it way better, just img2img or controlnet the style afterwards into something more pleasing.
>>8621159nta but guess it's more about scale of the girl relative to the background. Bodyshape is pretty easy in my experience.
Can someone with a github account bother machina to add full finetuning support to ezscripts? I need my edm2 snakeoil
>>8620962Sounds good then, thanks for the loras!
How big did the dataset get when counting face crops as their own images vs just original images?
I've also done indiscriminate face+upper body crops, but I'm wondering what balance you went with
>>8621387sd-scripts uses different options for lora training, it's not as simple as just "adding" support
>>8621383Interesting that what changes is the character's size in pixels, but the background remains the same. I wonder if there are some background tags that can influence this.
>102d absolutely zaps all the sovl out of my lora
God damnit
>>8621602>102dIt's a shitmix with a heavy butiful smooth pastelmix henti aesthetic bias, why wouldn't it suck all the soul out of a rough style lora?
>>8621602compared to what, base?
>>8621602it sadly is very overpowering on the last few steps
>>8621608left is 29+v1
>>8621605I guess, sometimes it produces kino like picrel
how & why has no one come up with anything better than my v29+1.0 vanilla shitmerge yet?
>>8621638h-hot... now do the spitroast.
>v29+1.0 vanilla
>better
Because that one sacrifices backgrounds and all subsequent merges wanted to preserve those.
>>8621638what is better at?
>>8621650>h-hot... now do the spitroast.
that's an old gen actually (picrel as well)
>v29+v1.0 sacrifices backgrounds
i'm not a background fag myself, but is that really the case? i remember doing tests on backgrounds, and remember that the merge is better than either v29 or v1.0. if i had to point out a flaw, it'd be that sometimes the picture falls apart for no apparent reason (or that could be a skill issue with upscaling on my part, but whatever)
>>8621666>remember that the merge is better than either v29 or v1.0.I'm inclined to agree but it absolutely shreds backgrounds. Might be loras in general (every lora I've used) but they're simple at best and nonsensical at worst. 102d at least makes the character look like they're in the environment. It also seems very sensitive to schedulers and they completely change the style (which was the whole point of using the model).
>666Uh oh... I will disregard everything you said then.
>>8621638v29+v1.0 is decent but more often than not I had these little artifacts everywhere no matter what I did
>>8621602>>8621612I've also found that 102d seems to have issues with adding canvas/paper texture to lines. I've been trying simple merges with it with some success, but it does still get annoying
I've found merging with the bluemint version to be the most effective for these styles specifically
>>8621674>sensitive to schedulers>>8621684>little artifacts everywherethat's what i mostly meant by "falling apart", for example if you apply a lora (especially trained on eps) the images become, like... i don't know, muddy? I prefer not to use loras either way.
>>8621638>these aren't my glasses
>>8621684>>8621692>for example if you apply a lora (especially trained on eps) the images become, like... i don't know, muddy?Hmm I've been having this problem a lot too and this might be the cause but I was having it on 102d. Very strange.
>>8621692>>8621812Why are you using eps loras on a vpred model?
>>8621813>I prefer not to use loras either way.
whats up, naisan? Scared of a little... sovl?
>>8621602Try it with 291h. Normally what I flip to when custom smooths out my rough/scratchy artist mixes.
>>8621840desu pleasantly surprised by it actually. 1+29 on the left and 291h on the right, would've expected a merge like 291h to do much worse.
>>8621840>>8621849Alright 'nonnie, you've finally convinced me to give this a go.
Any tags or loras that give the scene consistent lighting? It feels like a lot of the time, if you ask for a blue or green or whatever theme, it will just change the background and leave the girl shaded normally.
>>8621856try "[color] theme, high contrast"
>>8621840What is the full name of this shit? I hate all these abbreviations.
>>8621856literally just use a color balance node/extension on the base res gen, upscale it and send it to i2i
>>8621849I like it a lot as it's VERY similar to 29+1; that's what the anon who merged it was going for, just with a bit more stabilization so no loras or other snake oils are needed and it doesn't go schizo with complicated prompts. I don't know shit about model merging, but the anon who baked it was supposed to do some updated block merge to it before dying. No idea what it would have accomplished, but even so, great little shitmerge.
>>8621861>https://civitai.com/models/1301670/291h
>>8621856Sounds like a shitmix issue, [color] theme works perfectly fine on base noob.
anyone noticed how greyscale gens have worse lineart than colored ones?
is 29+1 discussed here https://civitai.com/models/1313975?modelVersionId=1483194 or what?
>>8622150This one
>>8621958 but the one you linked is very similar. Seeds on both models gen just about the same shit.
>>8622156That's 291h isn't it, anon was talking about "v29+1" at first
>>8622167Oh I don't have them anymore but it was genned with
>>8618434
>>8622150
>https://civitai.com/models/1301670/291h
>v29+v1.0
>+
>...
>illPersonalMerge_v30
>noobieater_v30
>obsessionIllustrious_v3
>obsessionIllustrious_vPredV10
>catTowerNoobaiXL_v15Vpred
>noobaiCyberfixV2_10vpredPerp
>EasyFluffXLVpred
>QLIP
>betterDaysIllustriousXL_V01ItercompPerp
>betterDaysIllustriousXL_V01Cyber4fixPerp
>betterDaysIllustriousXL_V01CyberillustfixPerp
>who knows what number of loras
>>8622181I wonder what the people who merge this kind of slop are trying to achieve.
holy shit it even got lucereon through obsession in it
https://civitai.com/models/818750
>102d is slopmerge
>now 29+1 is the real deal
>>8622187I mean it's ultimately just throwing shit at the wall and seeing what sticks
I doubt many people were randomly merging shitty anime model layers with 3dpd and thinking it would be the primary weeb model for a year during early 1.5
>>8622168Ah good catch.
>>862215029+1 was only available through torrent because some anon didn't want to piss off LAX by making it public, or some convoluted shit like that. I think some anon also put it up on mega a few days ago but said he'd be taking it down after a day or two.
>>8622181Nigga, he could have added expressiveH and All Disney Princess XL LoRA Model from Ralph Breaks the Internet. As long as it doesn't produce slop gens, why the fuck should you care outside of autism?
>get fomo'd into trying it again
>both are shittier at my artists than custom
whew
>/hdg/ is the other-BRAAAAAAAAAAAP
>>8622193Oh yeah it's pretty easily findable but here
magnet:?xt=urn:btih:1a8e80eb5fc2e1dd42ad7f68e13d1fe73b9d8853&dn=NoobAI-XL-Vpred-v1.0%2bv29b-v2.safetensors
>>8622294pretty based ngl
Should I upscale first then use face detailer or face detailer first then upscale? Or does order not matter
>>8622346You should inpaint details like the eyes and mouth after upscaling, stop using that automated shit (unless you are a cumfy user then you really don't have much of a choice)
>>8621388the dataset (~150 images) is around
- 25% augmented (face + upper body crops with some rotation)
- 25% uncensored images
- 50% from danbooru / gelbooru (mix of censored / uncensored, with text / no text)
>>8622348I do use comfy for most of the gen but bring it into the webui for the inpainting sections.
What are the implications of excluding a tag if it's not in the image vs using "no <tag>"? Does "no <tag>" teach the model to always gen something UNLESS you give it the "no" tag?
>>8622433"no <tag>" generally doesn't work and instead gives you the <tag> through CLIP leakage. Unless "no" is part of the danbooru tag, such as "no outline", "no headwear", no humans", etc.
>>8622455>no outlinebad example, it's "no lineart"
>>8622455Yes, but what does this mean for model/lora training? Does boorus having these "no <tag>" tags fuck up the model in some way when you consider the unconditional tag dropping? What does the model learn when you are explicitly tagging something that is not in an image?
>>8622463It probably still ends up a positive association. "no lineart" is a style similar to watercolor with some /aco/ bias if you don't prompt it with anything else. "no pants" is basically the same as "bare legs", etc.
>the loras I have to use for my super specific fetishes also ruins the ability to prompt for great backgrounds
Sigh.
>>8622473Depending on the fetishes, you may be able to schedule the lora to only the early steps, to only the center of the image, or remove it when upscaling.
>>8622473>backgroundsishiggydiggy
>>8622473>backgroundsautism
>come upon an artist mix you like, that produces good results with a wide variety of angles and poses
>except it looks like shit with certain kinds of clothing
It never ends...
>>8622683this but pussies and dicks
is there a dress like this irl?
>>8622708I don't think piercings are made with steel.
do you like slow corruption?
>>8622726you mean like getting a shy girl slowly turn into a slut?
>>8622726you mean like compressing the same pic in jpeg multiple times?
i mean like in the op of the thread we stole the name of
We still don't have a new logo btw
>>8622346Do it both times. Once to have a decent face as a base and the second to really get something good out of it.
>>8616235This shit is garbage.
>Base model: The base model you are training on, it should be either Illustrious0.1 or NoobAI-Vpred 1.0 for most users.
No you dumbass. It should be whatever model you're actually using, or its predominant base model.
The argument for "training on illustrious 0.1 for compatibility :)" is dumb as fuck and horrible advice. Its compatibility is SHIT and washes out on any model that isn't a mainline illustrious model, and if you're using v1/1.1/2 you shouldn't be training on 0.1 because of the resolution mismatch. You should also, unfortunately, be training at 1536x1536.
>Scale V pred loss: Scales the loss to be in line with EDM, causes detail deterioration. Not recommended.
You uh. You kind of need that enabled to train on vpred, you know.
>Width: Keep at 1024 for Illustrious/Noob. Higher values do not increase bake quality.
It doesn't increase "quality" in a general sense, but you should be matching the base resolution of the model you're training. Which 1024 is not for illustrious 1, 1.1 or 2.0.
>Gradient Accumulation: Used for virtually extending batch sizes for less VRAM cost. Not recommended
Jesus fucking christ.
>Batch Size: Represents the maximum number of images in each batch. Multiple batches allow for quicker training but also exhibit problems with symmetricality in the end bake. Keep at 1.
Fucking dumbass.
>Pyramid noise: Tweak to model noise, supposedly less destructive than noise offset. Causes quality deterioration. Keep off.
Actual retard.
Is that you, refiner-faget? Because this is just as dumb and seemingly intentionally damaging as that bullshit.
Hmm I don't have any reaction pic for this, time to start the oven
>>8622760Why not just explain what he should have said instead of calling him stupid? I'm so bored with this low iq "banter" in every fucking thread on 4chan. Do something productive for once.
>>8622762It's better this way. He's far too sure of his opinions, even ones that are obviously wrong like scale v-pred loss.
>You uh. You kind of need that enabled to train on vpred, you know.
retard
>>8622763>scale v-pred loss.you're confusing scale v-pred loss with v-parameterization
>>8622763>It's better this wayNo it isn't. It's only "good" for you, since your entire purpose in commenting is attempting to prove yourself superior to him. Why not share your knowledge with the rest of the class or fuck off? If you're the smartest person in the room, you don't need to be here.
>>8622760>refiner-fagetthis is me, anon
>>8616298
What are the most tags you've ever used in a prompt? I'm up to near 400 now. Gunning for a super specific image with a ton of things in it including stuff that doesn't exist as tags or that work weakly so you have to use a dozen hacks and try and make them all work together without interfering is surely a challenge.
>>8622762After a certain point it isn't worth it. And a lot of it comes down to "you're just trying to explain the functions of the application don't be a dumbass and keep your retarded pet decision shit to yourself."
I stopped where I stopped because I just couldn't care anymore. There's more fucked up shit on that "guide."
It's been a long while since I had to deal with settings to make vpred loras work, but from what I remember, trying to do all the proper vpred stuff without that enabled just output noise.
If you don't have all the proper flags in place then sure, it will technically probably work, but you're training it as EPS and that's kind of contrary to the point.
>>8622769>What are the most tags you've ever used in a promptMeant tokens there.
>want me to pour you another glass shinji?
>>8622771Comfy doesn't have an integrated token counter so I have no idea. But at some point adding more tags either doesn't change anything or overpowers some already existing ones, I usually stay under 30 whole tags.
felt like making a short story with peach
https://files.catbox.moe/x1kntf.cbz
>>8622792What the fuck is a cbz. did you just give me a virus
>>8622769>400 tokensAbsolute madprompter. My average prompt sits at around 100, probably only goes up to near 150 at max.
>>8622766>Why not share your knowledge with the rest of the class or fuck off?do you think that anon actually has any knowledge to be shared? 100% of these drive-by training posts are made by schizos who have never actually bothered testing anything themselves
>>8622800It's a zip of images renamed for comic reader apps, I think.
Just realised how important some token orders are. At least, the order I put some artists in has a huge impact on 102d.
How do you 'nonies order your prompt? Not sure if mine is optimal but it's: artists, quality tags, background/meta, positional details, subject details, negpips.
>>8622819I put negpip anywhere. I put quality tags in the front, the artist, then character info, then background at the very end with the loras.
>>8622822Same, but it's more out of habit and to keep the prompt tidy, I haven't really noticed changes swapping the order of tags
>>8622819Prompt template is as follows:
>quality tags
>artist tags
>fundamental (unchanging) female info
>clothing
>facial expression
>female body position
>male information/sex tags
>background
>loras
I don't want to buy the prompt order snake oil, thoughever.
>>8622819More or less the same as yours but I don't use any quality tag
>>8622819I order by importance, or whatever the model should concentrate on first, and model bias. Because of a model's biases towards certain tags, you need to use trial and error unless you already know how all the tags you're using like to get interpreted by the model. Of course, this is if you encounter issues where the model is not generating what you want; if it is, then you don't need to further perfect the prompt unless you really want to.
That's if all my tokens fit into the token limit. If they don't, then generally speaking I will split my prompt up into concepts using BREAK, and redundantly begin all of them, or most of them, with tags that tie together elements of the image. I base this on the theory that cross attention is more loosely relating tags across prompt chunks, while each chunk is more well-understood. The effect of this might be placebo but I feel like it helps so I haven't stopped doing this. It makes my prompts more structured and easier to parse anyway which I think is the greater benefit.
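if anyone's curious what the chunking hack actually looks like mechanically, it's roughly this (a sketch using the SD1.5 text encoder for brevity - SDXL runs two encoders, and real UIs handle weighting/padding details differently):
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
enc = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

prompt = "1girl, " * 60  # stand-in for a long tag soup
ids = tok(prompt, add_special_tokens=False).input_ids
chunks = [ids[i:i + 75] for i in range(0, len(ids), 75)]

embs = []
for chunk in chunks:
    # every 75-token chunk gets its own bos/eos and is encoded as a full 77-slot prompt
    padded = [tok.bos_token_id] + chunk + [tok.eos_token_id] * (77 - 1 - len(chunk))
    embs.append(enc(input_ids=torch.tensor([padded])).last_hidden_state)

# chunks are concatenated on the sequence axis, so cross attention sees one long prompt,
# but tokens in different chunks never attended to each other inside the text encoder
cond = torch.cat(embs, dim=1)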
>>8622346>t2i + detailed anzhcs>upscale>inpaint
>>8622769I never go over 300 as it just fries anyway. Basic scenes are normally about 200. I only push 300 when there's some specific angles and poses I'm trying to prompt engineer with certain clothing.
>>8622819
>2boys
>character, copyright (optional)
>artists
>positions, actions, overall composition
>character details, clothing
>concepts like size difference, penis size difference, height difference, etc
>background and scene items
>quality
Trying to train a style lora with only 33 images, any tips? is it even possible?
>>8622708can i see a boxo?
>>8622884I've trained one with 24 so yes. If your current preset is working for you then stick to that and you'll be fine.
>>8622884Sure, style will work even with one image. Can't really call it a style lora tho since it'll reproduce everything. The fewer pics you have the more biases the lora picks up in terms of composition, characters, background, lighting, etc. Ideally you'd have a whole bunch of different ones with style being the only thing they all share.
Ok so I tried using a bunch of loras to make a small non-loli non-shortstack woman relative to the environment. It kind of worked but also made the girl look too much like a loli, especially the narrow hips. So I tried using those bottom heavy loras to try and return things. And doing that fucks the scale up more again and also makes it look more like a shortstack which isn't the goal. So yeah I guess img2img/controlnet is the only way.
eps schizo if youre here try to bake a lora for this artist. or anyone who wants to try. i think theres more of their images on their twitter
https://danbooru.donmai.us/posts?tags=inutokasuki
>>8622940>i think theres more of their images on their twitterunderstatement of the week.
here's a twitter scrape with, I think, all of the photos and random gachashit screencaps removed (it has 343 images)
https://files.catbox.moe/8f5t46.zip
wouldn't use it as a dataset outright though. there's a lot of weird shit, a lot of weird/complicated poses, a lot of really lowres stuff and the artist bounces between like 3 or 4 different brushes.
anyway I'll do a lazy-tier run with a cut down dataset, see what it's doing and adjust accordingly to try and do something overnight or something.
>>8622884Use flips and crops with repeats
>>8622819lora triggers,
artists,
characters,
1girl, blue eyes, standing shit,
2boys, extra dark skin shit,
composition,
quality
>>8622819Technically Illustrious has a set order from the paper but honestly you can rearrange order ass backwards and it will still work.
>>8622981https://files.catbox.moe/knnkc9.png
https://files.catbox.moe/q24kuq.png
I probably need to do another 1-2 runs testing an adjusted dataset but I guess the artist should end up functional-ish. though it doesn't seem like it's escaping all the problems that usually pop up with these sorts of digital rough sketch styles, in that the anatomy is usually a bit more on the thicker side of things and hands can end up pretty nebulous. I'd say it probably works better generating in one style for anatomy/composition and then just upscaling at a higher denoise(that's what was done for the hoshino catbox), or using as a part of a mix or something than by itself.
I'll probably have something uploaded by tomorrow or so.
>>8623217Sour cream looks tasty, maybe I'll go put some on bread.
Anon, for fuck's sake, you should try launching training with this environment variable. It's basically free VRAM.
PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
>>8623265Does this affect the training speed?
>>8623268It's as quick or slightly faster.
>Hentai Games General has become Hentai Generation General
Are things that bad?
>>8619220>>8619235I see a lot of kyokucho there, nice gens btw.
>>8623365the artist style name is beitemian (yaoi)
>>8623368>beitemianoh, the way the girls were depicted reminded me a lot of kyokucho, oh well, nice to know, I'll test him anyway, looks interesting.
>>8620340>I just have it set to 1152*1152 here to somewhat combat the over sharpening that lanczos causes when downscaling very large imagesYou can fix that by changing buckets interpolation from INTER_CUBIC back to INTER_AREA in sd-scripts/library/train_util.py
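for reference, it's a one-word change, and the reason it matters is that INTER_AREA averages source pixels when shrinking instead of overshooting edges like cubic/lanczos do (standalone example, filenames made up):
import cv2

img = cv2.imread("raw_4000px.png")
# clean downscale with pixel averaging - no ringing or oversharpening
small = cv2.resize(img, (1024, 1024), interpolation=cv2.INTER_AREA)
cv2.imwrite("bucketed_1024.png", small)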
>>8618317>>8619066Could I get box please?
>>8623477https://litter.catbox.moe/sgsjw7rmvn5huwfs.png
Won't help you much. Used controlnet off an earlier gen for the pose (same prompt), then adjusted the style mix again when upscaling.
>>8620323no scalar? no dropout?
>>8623518Dropout kills style for me, even at the lower suggested end of 0.0005. I did some bakes with 0.0001 and below, but didn't really notice any changes (positive or negative) at that point.
Scalar gave me errors and I never got to test it.
>>8623532Is that caption dropout or neuron dropout? I run the latter at 0.1 and styles are not killed.
are inpainting models necessary?
>>8623549not really but It would be very nice to have, we would be able to put complete new elements into a gen without any extreme shit to make it blend nicely
>>8623543no, with regular lora
might be dora conflicts with that somehow
>>86235700.1 is fine with old "doras" and locons.
Too high for new, fixed doras.
i cant get good fingers for the life of me even with inpainting, any recs?
>>8623573you are clearly inpainting wrong if you can't get that pose right, post your current inpaint settings
>>8623574Just trying out a bunch of different values with the padding and denoise. Trying to follow this guide https://rentry.org/fluffscaler-inpaint-old
>>8623577>512 * 594No shit inpainting isn't helping, try 1024*1024
>>8623581Tried it and it barely seems to be helping out
>>8623577increase mask blur to 24
decrease masked padding to 64
enable soft inpainting
use more steps (32)
increase denoise to .5
use 1024*1024
>>8623543It's neuron dropout. I haven't tried it on higher rank loras or the full preset yet, but the 16 or 32dim loras I usually train definitely suffer some degradation, from which they don't recover during my usual training length
>>8623591Best iteration so far, thanks anon i'll save these settings for future use
>>8623577>euler a>20 stepsanonie pls
>>8623577What
>>8623591 said and don't use ancestral samplers for i2i stuff
>realized a good part of my shitty artist recognition was just scheduler/sampler not resolution
retest coming :-DDD
or maybe not
desu i do love the greater depth and detail you get at higher base reses but the occasional weird anatomy snakies do get annoying
>>8623628>>8623604What samplers and amount of steps do you guys rec
>>8623631>realized a good part of my shitty artist recognition was just scheduler/sampler not resolutionI posted about this some threads ago
>>8623635I'm using euler e 25
Everyone has their own schizo theory, but anyway it only takes a few seconds to try another pair on any image you gen.
>>8623635Euler + SGM Uniform or Simple works fine for inpainting
>>8623635Anything that isn't euler a desu; steps don't really matter since they get scaled by the denoise amount anyway, so it's like 3 vs 5 steps in the end
>>8623637i was posting about that too but i was getting off a highres high and tested the artists with the new sampler along with the higher res at the same time and thought it was related lmao
>>8623628>don't use ancestral samplers for i2i stuffHoly shit, I did follow the guide and kept everything the same as txt2img when doing img2img multidiffusion upscaling, is this why lineart gets thickened/smoothed out?
>>8623647I inpaint with euler a out of pure lazyness and don't see any issues
>It's NOT funny shinji... You're so done for... Let me in RIGHT NOW!
>>8620323Pretty sure half of those optim params don't work with regular ADOPT
https://github.com/67372a/LoRA_Easy_Training_scripts_Backend/blob/413a4d09db5265ade3fcd64b402f60180ec9024e/custom_scheduler/LoraEasyCustomOptimizer/adopt.py#L30
It should work with SF one.
>>8623647Euler a smooths out a lot even in txt2img but that could be true just because higher reses tend to also smooth things out a bit
uegh honestly though it's hard to go back to 1024x1024
you really do lose a lot of detail and clarity
idkk
>>8623676sdxl vae fucking sucks
>>8623594What kind of degradation? I'm training on a small style dataset with your config and neuron dropout of 0.001 and it looks fine 40% into the run.
>>8623682i mean it is 1024x1024
that's a resolution that fell off in like 2007 for hentai images lol
i'd say the nai images are proof that the vae doesn't really do THAT much at that resolution
>>8623686crazy talk honestly, 4 times the detail at the same resolution is definitely noticeable
ironically their upscale sucks so bad it's not even worth it though
>get back into SD
>try a bunch of stuff
>look back on my best based64 gens with the best artist and lora mix I had then
>they're lower res, and less coherent, BUT the style is better than what I can do now
Damn. And the same loras don't exactly exist in the same way for illu/noob. I guess I will just keep experimenting with mixing until I get back the glory.
How easy is it to train a lora btw guys? Can it be done on a 3090?
>>8623690i still haven't really seen anything that impressive
or actually showing "4x the detail"
a lot of artists posted looked shittier than v3
composition? maybe sure but the pitfalls for artists are still here with the same base res
>>8623691my setup uses like 7gb vram last time i checked
That reminds me. Does jeremy clarkson still improve gen quality on SDXL models?
>>8623698I think you're coping about the vae desu but whatever
>a lot of artists posted looked shittier than v3
yeah, their fault for baking shitty aom lighting into the model lmao. neta's lumina model is looking way more aesthetic
>>8623686do you realize that most monitors are 1080 pixels in height
>>8623666>It should work with SF one.I switched from SF to normal and just updated the parameters that caused errors. Don't really know if I can recommend adopt in the first place.
>>8623716>I switched from SF to normalWhy?
why do my gens have a random red hue to them
>>8623698>a lot of artists posted looked shittier than v3which one?
Anyone know an artist that consistently draws hips/thighs like Asanagi but doesn't draw wide shoulder like he sometimes does? Don't like the shading and linework he does a lot of the time either.
>>8623747maybe it's a comfy quirk because there was another anon with that problem
>>8622940https://www.mediafire.com/folder/7e2x1fheakgc7/inutokasuki
didn't test it super thoroughly but this seemed to be the best performing run.
>>8623751I feel like that's a very /aco/ thing, generally. Jadf takes it pretty far.
>>8623753Previous image had softer colors and lighter strokes I think? I liked that more, but I'm also not the one who requested the lora.
>>8623549Yes and no. No because you can "make do" without it, and yes because without it you can't properly "inpaint", since the generated area doesn't align with the rest of the image. This is less true for DDIM for whatever reason...
>>8623749this looks quite a bit like vpred without APG/rescale
sd-scripts is such a cancer
>>8623749If you are using noob v-pred 1.0 or 1.0+29 and not using any loras, it helps to have some kind of CFG adjustment. CFG Rescale, CFG++, AYS, etc. That or keep your CFG really low, like around 3.
>>8623784>1.0+29eh, that one can do fine without various snake oils
>>8623786Meant that as "if you're using one of these two AND getting red/blue hues everywhere". If not then obviously you can keep doing your thing.
>>8623747do you really want to know the reason?
>>8623772previous one also had a lot more issues with hands. Like, really bad consistency issues.
also pretty sure the reason the lines got more defined is because, well, I added something that was much more defined to the dataset. But I'm pretty sure this is also what corrected the hands issue.
here's a one-off grid comparing the two to show what I'm talking about
https://files.catbox.moe/2u3ip7.png
though if anyone wants that earlier one here's a pixeldrain
pixeldrain com/u/RbFNmthN
>>8623796Oh yeah good point, he was asking for the reason not how to avoid it.
>>8623628>don't use ancestral samplers for i2i stuffWhat is the reason why you shouldn't?
I wish someone finally tested this
>>8623265 besides myself, I don't get why nobody is using this when it frees over 4 gb VRAM, making it possible to train sdxl's unet at batch size 12 on a 24gb gpu.
>>8623799I don't know the technicalities but it always end up looking like shit for me.
>>8623797Why bake on epred and not vpred or illu 0.1?
>>8623803If you're on linux you just put
>PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
before the actual command, so it's like
>PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True python sdxl_train.py ...
If you're using windows, you do
>set PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
before running the train script.
If you're using easy scripts, idk. Try setting this variable for the entire system, globally.
>>8623807>vpredbecause I don't use vpred (and they didn't ask for a vpred train so I just kept to my usual standard). Not going to go into the bullshit, but regardless of what shitposters try and claim, EPS 0.5 is just the most consistent with my process and requires the least amount of cleaning in post.
>illu 0.1no point unless you're using an illustrious model or a shitmix with it as the predominant base model. I don't use shitmixes and if you're using a later illustrious model you'll get anatomy wonkiness due to expected resolution mismatch.
tl;dr: because I make things for personal use.
>>8623821seems like 16 hours weren't spent for naught, now gonna try a 1536x run with the same dataset, should take about 3x as long
>>8623265>>8623800Okay I just tried to test it but got similar speed and vram consumption results
3090 TI
Torchastic + fused-backpass + full_bf16 - bs1 - fft - 23444MB - 2.28s/it
Torchastic + fused-backpass + full_bf16 + command - bs1 - fft - 23346MB - 2.26s/it
What was the rest of the config?
>>8623837>Torchastic + fused-backpassHmm, I'm running AdamW4bit+bf16_sr https://github.com/pytorch/ao/tree/main/torchao/optim and naifu instead of sd-scripts, the rest should be +- the same. I'm not really sure why, but it does give me a huge memory advantage, maybe it's because AdamW4bit is implemented in triton? I'm at about 16gb usage using batch size 1.
>>8623837>>8623847Actually, it may be because using triton kernels requires compiling the model. The last time I was trying to use sd-scripts, I couldn't get the model to compile, so... Maybe you can force it through sdpa but I doubt it.
>>8623847> maybe it's because AdamW4bit is implemented in triton?
Probably; triton is not even part of sd-scripts. Tried with an old january installation of the 67372a fork which nominally has it, but it doesn't seem to actually utilize it; it's still the same for adamw8+full_fp16
> I'm at about 16gb usage using batch size 1
PagedAdamw8 is probably one of the best for vram saving. While still being adam, it can fit both encoders and the unet in full fp16 precision under 14gb on my machine; batch 12 fits fine under 24, but of course at the cost of some speed loss
> AdamW4bit+bf16_sr https://github.com/pytorch/ao/tree/main/torchao/optim and naifuLinux or windows? Can you show a full command of how you running it with naifu? I'm willing to try it
>>8623849> sdpa Nope, no luck either
>get a nice artist mix going
>it also results in banding
aaaaaaaaaaa
>>8623874get 4ch vae'd, nerd.
>>8623826Melty native 1280x1792 res gen with working cross-eye stereoscopic effect bonus
>>8623879damn people still use 1.5
>>8623879>cross-eye stereoscopic effectthe fuck?
>>8623866>it's still the same for adamw8+full_fp16
Ugh, sd-scripts uses the bnb implementation of (paged)adamw8bit, which is written in cuda and does not require triton.
>PagedAdamw8
You're technically offloading the optimizer state to RAM. I think it's possible to do with one of torchao's wrappers but I haven't tried it yet.
>Linux or windows? Can you show a full command of how you run it with naifu? I'm willing to try it
Linux, and as long as you have all dependencies installed you can just run it like this:
>python trainer.py config.yaml
This thing is much more modular than sd-scripts, there are 4 basic fft configs here https://github.com/Mikubill/naifu/blob/main/config/train_sdxl_v.yaml but either way I'm running a heavily modified version of naifu (most notably to add edm2 and some other things I tried playing around with) so it's not like my configs will be useful to you.
>>8623881can't be bothered to find good styles on a first test checkpoint
>>8623896tell me what krita is going to do for (You) that other programs won't
>>8623889> Ugh, sd-scripts uses bnb implementation of (paged)adamw8bit which is written in cuda and does not require tritonOkay, I'm just trying with torchao implementation and sd-scripts. It spits out assertion error of lr and doesn't start
>lr was changed to a non-Tensor object. If you want to update lr, please use "optim.param_groups[0]['lr'].fill_(new_lr)"
After commenting that out, there is just endless model compilation on every step of training, which leads to 98.77s/it. I'm pretty sure the kohya code is fucked somewhere, and it's probably easily fixable to get both edm2 and torchao working on the fork
>>8623923>It spits out assertion error of lr and doesn't startAh, I remember it being a quirk of that library, you literally have to follow what the assertion is talking about and convert lr to a tensor like this.
lr = torch.tensor(lr)
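in context, at optimizer creation it'd be something like this (a sketch; assumes the torchao import path from the repo linked earlier):
import torch
from torchao.optim import AdamW4bit

model = torch.nn.Linear(128, 128)  # stand-in for the actual unet
# lr as a tensor lets the lib update it in place; bf16_stochastic_round matches the "+bf16_sr" setup
opt = AdamW4bit(model.parameters(), lr=torch.tensor(1e-4), bf16_stochastic_round=True)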
>there is just endless model compilation every step of training which leads to 98.77s/iti don't think you need to compile the entire model which is probably what sd-scripts are doing. On naifu the training starts in a few seconds.
>every stepDid you run it for like 20 steps?
>>8623927> convert lr to a tensorYeah, I get it, just don't know where it should be done in the code
> i don't think you need to compile the entire model which is probably what sd-scripts are doingIt's not adapted to triton at all, so besides that it probably recompiles in a loop every step
> Did you run it for like 20 steps?Just for 5. When you look at the console you can see how it stops for a second after each step is completed
cook the thread bloody bitch
more like cucks on this very thread
why is girls kissing girls so fucking HOT
I keep running into the issue where the model knows an artist well but doesn't draw them as well as I'd like. Would it be a good idea to train a lora with the artist name as an activation tag in that case? I tried doing it without one and the model doesn't learn very much and things come out weird.
>>8624039Doesn't take much training at all if you're building on top of existing knowledge, easily 1/4 of what you'd normally need.
>gen 100 images
>the first one was the best
How does this keep happening.
>>8624074it's telling you to inpaint instead of rerolling
>>8623847
>adafactor
>fused backwards pass
>unet only
>batch size 3
>12gb vram
>>8624135
much constructive
Is there any point in using global gradient clipping if the optimizer can already do adaptive and SPAM-style clipping?
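For reference, global clipping is just the stock torch one-liner between backward and step (a sketch assuming the usual loss/model/optimizer names); whether it adds anything on top of an optimizer's own adaptive or spike-aware clipping is exactly the question:

import torch

loss.backward()
# rescale the whole gradient vector if its global L2 norm exceeds max_norm
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
optimizer.zero_grad()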
Is there any point in using WHAT if optimizer can do WHAT and WHAT??
Chill out bro, just type
>1girl, 1boy, touhou, dark-skinned male, suspended congress
and enjoy the show like a NORMAL person.
>>8624176
>suspended congress
uhm go back to pol
is controlnet stuff better on comfy or the webui
>>8624188
You can't use tiled CN with multidiffusion upscale in webui, iirc
[image attached, md5: d9e5d53ca80dc941d7a5d3996ccd8f28]
>>8623826
>1536x run
yeah it definitely pays off, same seed for a row, 1928x1152 base res on the bottom, 2048x1152 on top
>>8624192
you trained with 1536x1536 base res? it does look sharper
was our bwoposter right about training loras on a res higher than 1024
>>8624199
no idea who you're talking about, but if you're training on illustrious 1, 1.1 or 2 you should train at 1536x1536
you can probably get away with training it on that for other models, too, but I wouldn't use the lora for anything other than upscaling if you do.
I'd argue it's stupid to do in general because illustrious v1/2 are shit but if you're intent on using it you should probably at least do it correctly.
I got a better result just genning at 1536 than doing it at 1024 and upscaling it 2x
1536 training seemed pretty sharp when i tried it but the actual details of everything seemed less consistent and shittier
[image attached, md5: 07201dca8d29c8585f62bf710e05bb44]
>>8624199
>you trained with 1536x1536 base res?
Yeah, but it's not exactly that simple. I first trained noob vpred 1.0 (not a lora) on a dataset of 4776 images for 15 epochs at 1024x, and now I'm continuing from that checkpoint but at 1536x. I'm also using a cosine schedule, so early epochs look kinda fried and smudgy, but it looks like it's already starting to forget some things desu. And it looks like the color blowouts only got exaggerated. Pic is 5 epochs at 1536x.
>it does look sharper
are you an actual schizo?
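For anyone wondering why early epochs of the second stage look fried under a cosine schedule: the stock scheduler starts at the full learning rate and only decays toward the end. A sketch with placeholder values, not the actual training code:

import torch

optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)  # placeholder lr; `unet` resumed from the 1024x checkpoint
# lr follows a half cosine from 1e-5 down to eta_min over total_steps
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=total_steps, eta_min=1e-7)

for _ in range(total_steps):
    # ...forward pass and loss.backward() go here...
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()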
>>8624232
what kind of images are you finetuning noob on?
is the objective of your finetuning to improve backgrounds? (judging by your pic)
>color blowouts only got exaggerated
i've been wondering what causes these color blowouts on vpred models
>are you an actual schizo?
honestly i might be. i still found your image looking sharper than what i've been able to gen, albeit a bit smudged
[image attached, md5: d8918b295336db3a05ed33b1c688c39c]
>>8624237
>what kind of images are you finetuning noob on?
all kinds except furry, comic and 3dpd (i included some sliver of /aco/-like artists present on danbooru for regularization), but it's not a good dataset by any means.
>is the objective of your finetuning to improve backgrounds?
not really, it's just that i got a bit tired of doing an upscale just so that the details are crisp. although i did include some backgrounds.
>i've been wondering what causes these color blowouts on vpred models
if you want a short answer, it's due to undertraining on the very first sampling steps, when your picture consists almost entirely of noise. The model isn't really sure which "color" it should set for various parts of the image, and if for some reason you want to put a dark object on a white background (or vice versa), the model may be confused and set the "overall color" for this dark object as "bright".
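You can see the same thing in the v-prediction identities (standard math, nothing model-specific): at the highest-noise timesteps alpha is near 0, so the predicted clean image is almost entirely the model's output, and any bias in v there fixes the low-frequency "overall color" of the whole gen.

import torch

def pred_x0_from_v(x_t: torch.Tensor, v: torch.Tensor, alpha: float, sigma: float) -> torch.Tensor:
    # v-parameterization: v = alpha * eps - sigma * x0, hence x0 = alpha * x_t - sigma * v
    # (using alpha^2 + sigma^2 = 1)
    return alpha * x_t - sigma * v

# at t ~ T: alpha -> 0 and sigma -> 1, so x0 is roughly -v: the model alone decides the mean color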
[image attached, md5: 968abd7017c1a23420868c4f5defd836]
>>8624256
all images here are genned at 1152x2048 btw
>>8624290
then bake it, faggot
>>8624301
Why (You) haven't do it
>>8624303
Thank you for visiting 4chan dot org. This is an English-speaking board. Try "Why don't you do it?" or "Why haven't you done it?"
>>8624316
I just wake up ok?, my first posts of the day are always this bad
One of these days I'll just reply to everyone in Spanish to save myself this kind of hassle
>implying he isn't here 24/7
>>8623751
pottsness
simao (x x36131422)
Kind of off-topic, but does Nintendo DMCA lewd images on twitter? I scrape some accounts with gallery-dl and it said a post was DMCA'd. Googling the link brought me to Midna fanart.
>>8624376
Pokesluts were the original gacha girls so...
>>8624376
Nintendo DMCAs basically everything they don't like.
>Please wait a while before making a thread
what
nvm
>>8624386
>>8624386
>>8624386
I fucking hate you all btw
>>8624387
>I fucking hate you all btw
what's wrong?
>>8624595
>what's wrong?
everything