High contrast edition
Previous Thread: >>8647788

>LOCAL UI
reForge: https://github.com/Panchovix/stable-diffusion-webui-reForge
Comfy: https://github.com/comfyanonymous/ComfyUI

>RESOURCES
Wiki: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki | https://comfyanonymous.github.io/ComfyUI_examples
Training: https://rentry.org/59xed3 | https://github.com/derrian-distro/LoRA_Easy_Training_Scripts | https://github.com/bmaltais/kohya_ss | https://github.com/Nerogar/OneTrainer
Tags: https://danbooru.donmai.us/wiki_pages/tag_groups | https://danbooru.donmai.us/related_tag
ControlNet: https://rentry.org/dummycontrolnet | https://civitai.com/models/136070
IOPaint (LamaCleaner): https://www.iopaint.com/install
Upscalers: https://openmodeldb.info
Booru: https://aibooru.online
4chanX Catbox/NAI prompt userscript: https://rentry.org/hdgcb
Illustrious-related: https://rentry.org/illustrious_loras_n_stuff
Useful Nodes/Extensions: https://rentry.org/8csaevw5
OP Template/Logo: https://rentry.org/hgg-op/edit | https://files.catbox.moe/om5a99.png
>>8653048
>1c is trained on the main cluster
>uploaded 7 months ago
They didn't have compute then afaik, one of euge's v-pred experiments?
>>8650468
What do you use to merge multiple custom masks like that? I think the Attention Couple (PPM) node works, but I'm not entirely sure it's the best one.
Comfy Couple is just for 2 rectangle areas, right?
>>8653101
Conditioning (Set Mask), it's a built-in node. It takes the conditioning for one area and a mask telling it where the area is.
If you're also using controlnet then the whole mess together would look like this: https://litter.catbox.moe/jnsdso5o9352yq46.png
I use the mask editor built into Load Image node, you can just right-click it with an image loaded.
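For intuition, the mask-combining idea can be sketched outside ComfyUI. This is hypothetical illustration code, not the node's actual implementation: each region's output is weighted by its mask, with overlapping masks normalized so regions blend instead of clipping.

```python
import numpy as np

def combine_regions(outputs, masks):
    # Stack per-region masks and normalize so overlapping areas sum to 1.
    masks = np.stack(masks).astype(float)
    total = masks.sum(axis=0)
    total[total == 0] = 1.0          # avoid division by zero outside all masks
    weights = masks / total
    # Weighted sum of each region's output (H, W, C) by its (H, W) mask.
    return sum(w[..., None] * o for w, o in zip(weights, outputs))

# Two 4x4 "latents": region A contributes zeros, region B contributes ones.
a = np.zeros((4, 4, 1))
b = np.ones((4, 4, 1))
mask_a = np.zeros((4, 4)); mask_a[:, :2] = 1   # left half
mask_b = np.zeros((4, 4)); mask_b[:, 2:] = 1   # right half
out = combine_regions([a, b], [mask_a, mask_b])
print(out[0, 0, 0], out[0, 3, 0])  # 0.0 1.0
```

The real nodes do this blending on conditioning/attention tensors rather than latents, but the masking arithmetic is the same idea.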
What are the /h/ approved shitmixes again? it's been a while since I've been here and I'm getting into genning again.
>>8653125
how
how does this fucking question get asked EVERY SINGLE THREAD
HOW
>>8653124
I was interested in what you're using after combining the conditionings. Turns out you're using Attention Couple, but when I try to install missing nodes it installs Comfy Couple instead, and your node is still missing. They have different inputs; Comfy Couple overrides the areas I set back to rectangles, it looks like.
>>8653126
Based Wai-KING.
>>8653127
Because the OPs are useless?
>>8653124
Just want to make sure, since it doesn't install automatically for me. Are you using this node atm? https://github.com/laksjdjf/attention-couple-ComfyUI
Looks like that one got archived and is now part of some large node compilation.
>>8653130
>Because the OPs are useless?
The autistic obscurantist mind cannot grasp this simple concept
>>8653133
Yes. It's deprecated now in favor of https://github.com/laksjdjf/cgem156-ComfyUI but I couldn't figure out what the "base mask" is about, so I kept using the old one. Not like it needs new features.
>>8653129
The node is optional, btw; if you skip it you'll be using latent couple, which is a lot stricter. Attention mode prefers making a consistent image over sticking to the defined areas.
>>8653148
Thanks, I'll test this one against the PPM version I got working.
>>8653152
Yeah, I figured. It does bleed styles more with the attention thingie, but it probably works better this way for what I'm doing. This shit really blows up my workflow size, holy shit.
>>8653145
Nice. Got a box?
Please, does anyone know what model and LoRA this guy is using? I've wanted to copy this exact style for quite some time now, but I can't get it right.
https://x.com/OnlyCakez1
https://www.pixiv.net/en/users/113960180
>>8653271
Just make a lora
>>8653271
nyalia and afrobull
>>8653271
I wonder why these requests are always for the grossest grifter styles possible.
>>8653444
love this lil nigga like you wouldn't believe
How should I approach training a lora for a character that noob vpred already knows but not really well? Should I train the TE and use the original tag or make a separate one? What dims/alpha should I set?
>>8653452
no te, use the original tag, 8/4
>>8653452
I've only done this for styles; when I reused the existing tag I only needed about a quarter of my usual steps. Don't think you need to adjust your config in any way, and it's always better to train more and save the earlier epochs too in case they're enough.
>>8653342
Because they're the most popular?? Face it, your aesthetic "taste" is clearly in the minority.
>>8653556
Which is why these threads are so slow. I think a better question is why ask for those styles here? The people who would know are those who enjoy them.
>>8653556
I mean, that slopper isn't even popular
Post your fingers if they are so great.
>>8653642
but my fingers are /aco/
Move along friend, this train car's full
>>8653780
save some CFG for the rest of us
>kagami bday
>no sd on pc
grim
Been genning in my corner since Noobai released, what's new on the block?
>>8653845
>what's new on the block
For local, nothing.
>>8653852
Oh well, still happy that I can generate random shit that comes to mind. Has anyone here trained concepts in an eps model? Characters and styles seem to come out okay, but concepts refuse to take for me.
Not really quite the character, but eh, good enough. Anyone got some good configs for characters? I've only got 16 images for this one.
>>8653879
already posted in thread #10
>>8653879
Assuming you're talking about making a lora, you can make a character lora with 15 images. Something like 30 is my ideal number, but it's by no means a strict rule. Something that worked quite well in the past is to aim for 300 to 400 steps per epoch. So 16 images x 20 repeats = 320 steps per epoch. Then multiply your epochs to get to around 2000 steps: 20 repeats, 7 epochs, 2240 total steps. Batch size of your choosing; you might wanna start with 1 and go from there. Resize all your images so the longest side is 1024. If you're training on an eps model, make sure to check the box for it and also "scale v-pred loss". 8/4 dim/alpha should be enough for a character lora. Not sure what you use for learning rate; might want to try Prodigy or something similar at an lr of 1.0. Keep Token of 1 if you want a trigger word. Save each epoch and do an xyz grid with prompt s/r to see which one is the closest. Then retrain the lora accordingly: more/less repeats/epochs/batch size/etc. Who's the character btw?
>>8653906
2240 steps seems insane. I did a Prodigy LoHa (4/4 + 4 conv) and the lora fried hard after only 240ish steps (15 epochs). Training on Noob-Vpred1.0.
Character is Pure White Demon from Succubus Prison.
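The step arithmetic above can be restated as a tiny helper (hypothetical code, just the math as described: steps per epoch = images x repeats / batch size, total = steps per epoch x epochs):

```python
def training_steps(images, repeats, epochs, batch_size=1):
    """Steps per epoch and total steps, kohya-style repeat accounting."""
    steps_per_epoch = (images * repeats) // batch_size
    return steps_per_epoch, steps_per_epoch * epochs

# The example from the post: 16 images x 20 repeats = 320 steps per epoch,
# 7 epochs = 2240 total steps.
per_epoch, total = training_steps(images=16, repeats=20, epochs=7)
print(per_epoch, total)  # 320 2240
```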
Are well-defined noses /aco/?
I blame the anime style hater for all of this
>>8653972
After 240~ steps? That's what sounds insane to me. What's your learning rate like? Prodigy should start at 1 and then adjust itself. Are you using Gradient Checkpointing? With Gradient Accumulation 1? Is SNR Gamma on 8? Not sure what is frying your training, although I haven't touched LoHa/LoCon in a while. Can you upload your dataset? I'd try a quick 30~minute training to see what comes out.
>>8653981
usually, but there's more to it
compare:
https://danbooru.donmai.us/posts/3171471
https://danbooru.donmai.us/posts/8487474
>asura \(asurauser\)
timeless classic
>>8653988
Yeah, no clue. No checkpointing, no grad accum, SNR gamma 1, but I'll let you check both the dataset and config. Thanks for helping me out, anon.
https://files.catbox.moe/6o3x0d.zip
https://files.catbox.moe/lrfpnr.json
>>8654002
Seeing max_train_steps and max_train_epochs at 0, not sure if that's normal. SNR Gamma should be at 8 for anime (and 5 for realistic), or so I read. How many repeats do you have? The folder being named 1_whitedevil tells me one? Are you training on Kohya or? Dataset looks okay; I'm probably gonna add a couple pictures and remove the kimono ones just so it doesn't get confused on the horns. Tags look alright. Anyways, gonna launch a quick training over here, tell you what in 30~minutes.
>>8653057 (OP)
so generation = local
diffusion = NAIcucks?
why two generals
>>8654008
>SNR Gamma should be at 8 for anime (and 5 for realistic), or so I read
where did you read that?
>>8654029
I was going to tell you, but honestly I'd rather wait for another anon(s) to do it, since I don't even post that much anymore
>>8654031
Been a while, but years ago when I didn't have a good enough pc to train on, I used Hollowstrawberry's google colab trainer, and in the notes for SNR Gamma that's what it said; I believe that's where I got it from. Never tried with SNR Gamma 1, will do in the future. Training done, only did 1000-ish steps at batch size 3 for the sake of time, gonna try the lora now.
>>8654008
I don't use repeats, since it will fuck up random shuffle if I increase batch sizes. I just train for more epochs instead, and I like the epoch = 1 dataset pass. I'm training on Kohya, but every day I get more tempted to switch to easy scripts. Yeah, the autotagger gets confused with the horn ornament + tiara and the hime cut with the semi-twintails going on. No clue how to tag that shit, same with the energy/magic shit.
>>8653126
I'm still using PersonalMerge
file
md5: 5b5f47fc110cf586c8206ae4e5b6d339
๐
>>8654037
the "best" snr-based timestep weighting scheme you can do in sd-scripts for sdxl vpred is snr / (snr + 1)**2, and you can achieve that if you use 'debiased estimation' and 'scale vpred loss like noise pred' together (green line), without min snr
(well, if you don't count the bug in sd-scripts which doesn't let snr reach zero for weighting even with zero terminal snr enabled; with that bug, any snr-based weighting doesn't make sense for ztsnr)
>>8654047
>'debiased estimation' and 'scale vpred loss like noise pred'
incidentally, you can sort of approximate the effect of this with min snr 1 (purple line)
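A quick numerical sketch of the two curves being compared. Assumptions taken from the posts above, not from sd-scripts itself: the combined 'debiased estimation' + 'scale vpred loss like noise pred' weight works out to snr / (snr + 1)**2 (the green line), and min-SNR with gamma = 1 on v-pred loss gives min(snr, 1) / (snr + 1) (the purple line):

```python
import numpy as np

# SNR grid spanning several decades, as a diffusion noise schedule would.
snr = np.logspace(-3, 3, 201)

# Green line (assumption from the post): combined weight of
# 'debiased estimation' + 'scale vpred loss like noise pred'.
combined = snr / (snr + 1) ** 2

# Purple line (assumption): min-SNR gamma=1 applied to v-pred loss,
# i.e. min(snr, gamma) / (snr + 1).
min_snr_1 = np.minimum(snr, 1.0) / (snr + 1)

# Both curves peak at snr = 1 and fall toward zero at both extremes,
# which is why min snr 1 roughly approximates the combined scheme.
peak_combined = snr[np.argmax(combined)]
peak_min_snr = snr[np.argmax(min_snr_1)]
```

Plotting both on a log-x axis reproduces the peaked shapes described; the curves differ mostly in peak height, not in where the weight is concentrated.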
>>8654039
>I don't use repeats, since it will fuck up random shuffle if I increased batch sizes.
Do you mean caption shuffle? IIRC, batches can be imprecise because of bucketing (which never happens if you have only one bucket)
>every day I get more tempted to switch to easy scripts.
Would have stayed on Kohya but couldn't get it to work reliably on the new pc, so I switched to ez
>autotagger gets confused with the horn ornament + tiara and the hime cut with the semi-twintails going on? No clue how to tag that shit, same with the energy/magic shit.
Yeah, no idea either. How do you communicate that this is just a different hairstyle from the usual? 2 images isn't enough for a subfolder, I don't think. The energy and aura seem to have bled into the pics; hopefully there are more varied images for the dataset.
Anyways, this one was 324 steps per epoch (18 repeats of 18 pictures), 3 epochs for a total of 972 steps with Prodigy lr 1, batch size 3, cosine, 8*4 dim/alpha, snr 8. Quite a bit of work still needed: the markings are fucked, the horns are not as they should be, and the wings are a nightmare to render properly. As you can see, almost a thousand steps and it didn't fry.
>>8654053
Looks alright, thanks for giving it a try. But yeah, when I say "fried", I guess I mean more stuff like
>The energy and aura seem to have bled into the pics
>burned in wings even when not prompted for
etc.
>completely burned in style
style bleed could probably be mitigated by tagging shiki, or doing the fancy copier technique.
I'll try rebaking tomorrow and see what I get, thanks for the input and ideas.
>>8654061
Not him, but you have some pics in the dataset without the wings and aura, right?
>>8654061
I see what you mean, yeah. Haven't looked closely at the dataset, but if she doesn't have one, remove tattoo from the tags. The aura and stuff could be put into negatives while lowering the weight of the lora to 0.8 or so, but that's a bit bothersome and not ideal. Try getting more pics (unless these are all the official pics?) and if all else fails try adding close-up crops from the pics you already have; that should make the lora focus on the aura a bit less. What style did you use for your pics btw?
>>8654064
Yep, there's 2-3 images without the aura and wings in the dataset. It's all the same artist though, so style bleed is pretty bad.
>>8654065
Most pics I don't have in the dataset are part of a variant set, so there's no point including 10 of the same image with minor differences. These are all by the same artist (the official creator); I've found like 2-3 other fanarts but they are complete crayon tier and change the design, so they're a no-go for the dataset.
I'll likely retag everything again manually, train an almost-fried lora, and then start introducing artificial examples into the dataset with different artist styles. I have the metadata in my pics, it's in the stealth png format (works with the extension), but if you don't have it I used "hetza \(hellshock\)".
How do I darken hair color? Say I'm doing black hair but the artist/lora is making it gray. Do I just increase the weight of black hair? I vaguely remember this working with red hair.
>>8653999
>asura \(asurauser\)
I think we can all agree that hdg went into full decline when asura pillarino disappeared. 'fraid so.
https://files.catbox.moe/f5lwbp.png
>>8654005
use a fork, you savage
bros, how do you train a lora on an artist who only draws one character?
>>8654117
The same way you do for style loras based on game CG: tag everything.
How is it possible that Chroma learned ZERO artist knowledge after 40 versions? Did they include the artist tags in their training at all?
>>8654129
Not zero, but nearly; prompting slugbox does something consistent at least, but yeah, it sucks
>>8654129
it's both the fact that he's training the model really weirdly using some method he invented, and the fact that the booru tags are fully shuffled and also only a small portion of the dataset. not to mention that when training it'll randomly pick between NLP and a tag-based prompt.
Remember, his compute is over 10 times slower than NoobAI's. Sure, he's managed to optimize it with some hacks that nobody on /h/ can pull off, but it's still way slower than SDXL, and the speed for replicating styles is just abysmal.
>>8654135
Should have added "drawn by [artist]" in the NLP prompts.
>>8654138
did noob actually utilize a 256xH100 node lol
>>8654140
32xH100 from noob 0.25 to eps 1.0 iirc; they then started using most of the compute on the IP adapter and controlnets, and the v-pred model was trained on 4-16 A100s I believe
>>8654129
i think the great satan is t5, and that he's not training it and doesn't have the resources to brute force it without training it, like nai possibly did
>>8654145
>train T5
genuinely you are better off using a different text encoder than trying to train T5
>>8654005
thank you so much
>>8654082
hands are faster
What would you call this kind of background/effect on the corners?
>https://danbooru.donmai.us/posts/6105075?q=hxd
>https://danbooru.donmai.us/posts/6421672?q=hxd
>https://danbooru.donmai.us/posts/9391957?q=hxd
>https://danbooru.donmai.us/posts/6357183?q=hxd
>>8654191
A vignette that uses a crosshatching pattern. There doesn't seem to be a crosshatch vignette tag, but crosshatching and vignetting exist as separate tags.
absurdly detailed composition, complex exterior, green theme
>Train artist loras for vpred, they turn out fine
>Train chara loras for vpred, they completely brick
Using the same settings. It's just weird; like, shit full stop doesn't even work.
>>8654202
Take the base illu pill and come home.
>>8654202
can i trade luck with you
my artist loras turn out mediocre on vpred, yet chara loras are easy
>>8654206
Honestly trying to figure out why I left. Sure, I see shiny skin, but I never neg'd for it, and I like my old shit a lot. Got the loras still, so I might just go back and see why I swapped; the only thing I notice is the colors suck way worse, everything is kinda beige.
>>8654232
>only thing I notice is the colors suck way worse, everything is kinda beige
You're right, but I fix colors with CD Tuner, so I don't see why I even switched to baking on vpred.
>>8654234
That's an extension? Might take a gander. The other thing I noticed is that, if unprompted, you get the same weird living room setting, which I can probably neg out too.
>>8654236https://github.com/hako-mikan/sd-webui-cd-tuner
>>8654237
Just use base settings, or is there anything to tweak with it? And are vpred loras backwards compatible?
>>8654240
Gotta tweak it on a per-model basis. I tend to play with saturation2, and 1 is good enough for me, but YMMV.
>And are vpred loras backwards compatible
What do you mean? Bake on vpred and run on illu? I've only ever tried this on Wai and it worked out well. Improved Wai 12's color problems too.
>>8654240
You shouldn't gen with base illu; that's an even worse idea than training on it
>>8654242
Yeah, I got my stash of shit I baked on illustrious but remade most with vpred, so I didn't want to start another round of bakes on stuff past the originals.
>>8654244
The extension, you goof.
anyone have config tips regarding finetune extractions?
a lot of my attempts have been... so-so
>>8654232
Noob, and specifically vpred, has a ton more knowledge that loras just don't provide, for me personally. Although Illu has a lot more loras for it, given its age and the fact that they all work on noob anyway.
If anyone ever wanted to prompt cute small droopy dog ears, which look a bit like the "scottish fold" tagged ears on danbooru, you can do
>goldenglow \(arknights\) ears, black dog ears, (pink ears,:-1)
Goldenglow is a character the model knows pretty well, who has this type of folded ear. Negpip + prompting the desired color is able to remove the pink hair bias from the character tagging. Putting pink ears in the negative prompt further helps.
>>8654145
Not training the TE is the correct choice.
>>8654072
photoshop
or stick "grey hair" in your negatives
>>8654251
if the lora turns out to be weak, try baking the finetune for a smidge longer than you really need
Training the TE destroys the entire SDXL.
If you don't train the TE, it doesn't work properly.
What should I do for a full fine-tune of SDXL? Please answer.
>>8654383
destroy the SDXL
>>8654294
well, clearly chroma is not learning jack shit, and the te is an obvious suspect
just repeating "don't train the te" like a parrot is not gonna help when every successful local tune had to train it (though not t5)
>>8654472
yeah, let's just ignore the completely new "divide and conquer" training method lodestone invented that merges a ton of tiny tunes together. nope. it's t5.
Probably not the place to post this but I am looking for a discord invite to KirsiEngine's discord server. Without signing up for his patreon, obviously
>>8654486
https://discord.com/invite/5CpkfYzdnx
>finetune vpred 1.0 for a couple epochs in an attempt to train in some artstyles
>accomplishes nothing outside of stabilizing the model
not what i wanted, but neat
You'll never be the next 291h
Gens? Take it to /hdg/, lil blud. This is a lora training general.
you will never be the painter
at least post the link to the model so I can test it myself
>>8654535
Still blocked from seeing channels as a non-Patreon. Oh well...
>>8654595
the 291h is on the civitai sir
>>8654599
think lil blud means your experiment, anon
>>8654605
>outside of stabilizing the model
that's based anon, post it
>>8654607
Yeah, I wanna see if it's better than any style lora at 0.2 strength
>>8654607
I've tried that method and it fixed nothing for me lol
>>8654605
there's a decent chance it may be more slop than stable, still messing around
the best way to defeat a troll is to ignore him
>>8654624
Which one is the troll though?
suppose I could just ignore everyone
>>8654595
>>8654605
>>8654607
here, only done sparse testing myself. let me know if any of you see value in it lol
https://gofile.io/d/DGSNR9
How can I get rid of this artifact? It makes the output blurry and destroys style and details when multidiffusion upscaling (https://civitai.com/articles/4560/upscaling-images-using-multidiffusion). I did 2x then 1.25x and it's getting worse. Maybe this is just a bad method, so I need advice.
>>8654129
He is working with both Pony and drhead, two retards who are vehemently opposed to artist tags. In addition, the natural language VLM shit almost certainly washes out proper nouns, just like it did with base Flux.
>>8654681
how is mixture of diffusers better than simple image upscaling? it's great for absurdres upscales and looks pretty smooth, but it's still essentially a tile upscale, albeit a bit less shitty than simple tile upscale scripts. it doesn't have the whole context of the picture, which might cause hallucinations unless you're content with really low denoise.
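The tile-upscale mechanics being discussed can be sketched roughly like this (hypothetical toy code, not MultiDiffusion's actual implementation; `denoise_tile` is a stand-in for one tile's diffusion pass): tiles overlap, each is processed independently, and overlapping results are averaged back together so seams cancel out. Real implementations taper the weights toward tile edges rather than averaging uniformly.

```python
import numpy as np

def blend_tiles(image, tile=64, overlap=16, denoise_tile=lambda t: t):
    """Process an image in overlapping tiles and average the overlaps."""
    h, w = image.shape[:2]
    out = np.zeros_like(image, dtype=float)
    weight = np.zeros((h, w), dtype=float)
    step = tile - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            ys = slice(y, min(y + tile, h))
            xs = slice(x, min(x + tile, w))
            out[ys, xs] += denoise_tile(image[ys, xs].astype(float))
            weight[ys, xs] += 1.0   # count how many tiles covered each pixel
    if image.ndim == 3:
        weight = weight[..., None]
    return out / weight

# Sanity check: with an identity "denoiser", blending reconstructs the input.
img = np.random.rand(128, 128)
assert np.allclose(blend_tiles(img), img)
```

The "no whole-picture context" complaint above maps directly onto `denoise_tile` only ever seeing its own crop; at high denoise each tile can hallucinate content the others don't agree with.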
>>8654688
Here: 1x, 2x, 1.25x upscaled, in order
https://gofile.io/d/RFX4DT
warning: [spoiler]/aco/[/spoiler]
>>8654677
Did some basic tests and it pretty much seems like a more stable and cohesive vpred.
Didn't notice much slop in it at all, and it also got the details better than vpred in some gens.
But then again, I'm not a great genner, so gotta wait for someone else to comment on it.
https://blog.novelai.net/novelai-diffusion-v2-weights-release-b9d5fef5b9a4
>>8654726
Now that we have noob, would people be excited if they released the v3 weights?
>>8654729
Everything is relative. If they released it today, I'm sure people would be. If a new model better than Noob comes out and then they release v3, then of course people would not be.
>>8654729
noob is basically novelai v3 at home. v3 is still, unfortunately, better than what's available
>>8654729
bet someone could make a very good block merge with it and noob
>>8654729
v2/v3/v4 were shit. NAI didn't get good until v4.5. It's currently the best FLUX-based anime model.
>>8654735
Why don't you ever post pictures then?
>>8654737
Busy masturbating; sorry.
>>8654735
v1 was good for its time, otherwise local would happily have used WD
v3 is still good visually, but it prompts like ass
Alright idiots, here are the vpred models that I think are good: no snake oil required on any of them to get good-looking gens (debatable), no quality tags, and only a very few basic negs.
All of these were made using ER SDE with Beta at 50 steps, 5 CFG (may not be the ideal setup for some of them, but it's good enough in most cases).
>https://files.catbox.moe/p1afyv.png
>https://files.catbox.moe/nw7rue.png
To no one's surprise, each model is biased towards certain styles, so your favorite artist may be shit on one of them but great on another. WOW.
It's almost like YOU SHOULD USE THE MODEL THAT FITS YOUR FUCKING HORRID TASTE THE BEST
>>8654677
I like what I see at the moment, but I need to use it for a little longer to form an opinion on it
>>8654749
>102d is still king
Excellent.
>>8654749
thanks, i'll continue to shill r3mix
>>8654749
I think it'd be interesting to do this comparison but with the loras that are considered stability enhancers, on base vpred. If you can get the same results just by using a lora, then there's no reason to use a shitmix, as shitmixes always mess with the model's knowledge a bit and make it less flexible to work with, while swapping loras out is much faster.
>>8654735
>It's currently the best FLUX based anime model.
Without containing any FLUX too! Amazing!
>>8654758
>loras that are considered stability enhancers
If you guys ever agree on that one, sure
>implying v4.5 is anything but dogshit
ahahah that's a good one
>>8654759
Yeah bro, they totally trained it from scratch, all by themselves.
>>8654760
If people disagree on which ones are the best, then that's all the more reason the comparison should be made. I haven't seen anyone actually talking about existing/downloadable stabilizer loras, though.
Am I retarded, or is there a chance the lora I'm trying to use is just not compatible with ComfyUI for some reason?
I can't get it to work
>>8654762
Kek, this.
It's literally impossible for a paid proprietarded piece of trash to be good, by definition. SaaS garbage literally takes away your freedom and makes you a slave to the system that you should oppose by any means. Don't be fucking cattle, resist. If it isn't "Free" as in Freedom, I am not interested, as I am Free myself.
>>8654762
Yeah, they're just so good at optimizing shit that they can run 23 steps of FLUX with CFG in 2 seconds on an H100 lmao
Isn't just using a model like WAI good enough?
>>8654791
It's always a trade-off, it seems.
WAI is good, but it's trained on a lot of slop.
That makes it more consistent and gives it higher quality (like in anatomy and stuff), but it breaks prompt adherence and injects a lot of unwanted style into your gens by default.
>>8654775
And yet, none of the models people here use has a license that the FSF would approve of as a Free Software license.
>>8654766
I've seen a few but never used them myself.
I just want a model that's as good as 102d/291h, but that's easier to use.
I still can't solve img2img/inpainting/adetailer/hi-res fix being broken because these models have the crazy-ass noise at the start of the gens.
mfw I can't just take a nice composition and throw it at i2i with the standard settings and get a good quality gen, because it'll either change the image too much or make it look blurry instead of adding details, every single time.
>>8654801
mfw i2i sketch and inpaint sketch are no longer useful in my workflow because of this
I think this one is good enough, no more rebaking for now.
>>8654807
How did you do it?
>>8654808
I went through the dataset again, removed some variations (cutting it down to 14 images), and added an additional close-up crop of the face. Did a full manual tagging pass again, adding matching wing tags (demon wings, bat wings, multiple wings, etc.) since they were bleeding through.
Ran prodigy to get an initial good starting learning rate by looking at the tensorboard logs, then switched back to AdamW8bit.
Did a couple of test bakes, tweaking the learning rate for both Unet and TE.
Eventually ended up using this config:
https://files.catbox.moe/7id47n.json
>>8654191
It would be faster and simpler to just remove them.
Even style-bleed is not too bad, pretty surprising.
survey:
https://strawpoll.com/XOgOVDj1Gn3
>>8654832
I want to clarify that I have a 4070 Ti Super, not a regular 4070
>>8654832
>not .safetensors
good try
>>8654677
Re-ran a few old prompts on that. If you're using CFG++ like me, there's very little difference between this, base 1.0, and even 102d custom.
pic mostly unrelated
>>8654832
My NVIDIA GPU is not listed.
>>8654872H100?
Nice try, Jensen.
>>8654832
Where is NovelAI on this list?
Trying to set up chroma has finally made me take the comfy pill. It's... it's not so bad bros... comfy is the future.
>>8654829
It looks like I'm seeing some cutscene from Rance.
I was browsing tags today and came across this. It is now one of my favorite pixiv posts of all time.
https://www.pixiv.net/artworks/118263867
>>8654917
nice, do you have a twitter I can follow for more microblogs like these?
>>8654920
Yes, you can follow me @/hgg/.
>>8654832
what if i have multiple?
>>8654917
Goddamn
This is actually really good
>>8654749
greyscale sketch prompt is such a good test for detecting slopped models desu
what you want as a "stabilizer" is a good preference-optimized finetune; it can be merge crap, but usually merges work worse. you don't want to collapse the output distribution of a model with a lora, because it will mess a lot of things up, especially if you're trying to use multiple loras.
what you will get out of a "good" preference-optimized finetune is a certain, defined look: the "plastic" look of flux, piss tint and aco seeping through on pony, and the like.
>>8654934
It filtered out most of them lol
>>8654946
makes me wonder what would happen if you tried to train solely on greyscale sketch gens
>>8654949
I always wonder what would happen if you used an oil painting/classical artwork lora to make a merge rather than anime stuff
>>8654760
>If you guys ever agree on that one, sure
this isn't complicated.
A "stabilizer lora" is merely a lora of an artist you like and want incorporated into your mix. The only caveat is that it can't be watered-down shit.
The whole reason the lora works in the first place is that it introduces a much more stable and predictable u-net and imposes itself on the primary model to guide it.
It's really that fucking simple. Just use a style lora that isn't shit.
Man, do the models posted here get saved by the rentry?
Is there a site people use other than Civit since they did the purge?
>>8654749
got the prompts for these? i wanna throw em on some models
didn't know there were so many people with 4090s here
>>8654995
that survey seems to have been posted in every ai gen thread
>>8654677
idk what you did, but you need to do it for a little longer or a little differently
some samplers are completely broken
I want to like it, as it gets some concepts and artist tags better, but it's currently a little harder than I'm willing to endure to get something good out of it
>>8654963
yeah okay, give me 3 lora recommendations for that effect
>>8655020
>some samplers are completely broken
So, like vpred?
>>8655020
>yeah okay, give me 3 lora recommendations for that effect
the entire point is THAT YOU CHOOSE THEM YOURSELF, YOU FUCKING RETARD
It's not supposed to be recommended by anyone else! They don't fucking work well unless they actually suit what you want your shit to look like!
>>8655020
>idk what you did but you need to do it for a little more or a little different
all this was unet only, batch 1, on a roughly 200-image dataset for 3199 steps. pulled it early since i was saving every 100 steps lol. i'll continue to fuck around though, since the results, while unintentional, are promising.
>>8655023
>It's not supposed to be recommended by anyone else
What a retard; what's the point of screaming for the guy to make a comparison if you don't even have loras in mind?
>>8655021
Well yeah, but even more so. Could just be me, ngl.
>>8655023
I already have and use those; the point was to reach a general agreement, so there'd be something to recommend when people ask, since as you know "a good lora" is very ambiguous. But whatever, I did my part.
>>8655025
Godspeed, anon
>he uses nyalia over 748cmSDXL for stabilization
oh nyo nyo nyo nyooooooooooooo~
who you callin' bucko, chucko? this is sneed.
You two go back to lora training. This is not a discussion thread.
>>8653845
Really nice composition!
>>8654832
>40 minutes to generate a 720p video
Even with a 4090 I'm still a vramlet
>>8655052
No, I quit the gen because it's not worth it.
>>8655051
>40 minutes to generate a 720p video
someone isn't using lightx2v
>>8655064
The guide says that one's quality is far worse than wan.
>>8655052
https://files.catbox.moe/oiafrv.mp4
>>8655066
>far worse
visually it's about on par. it makes the model very biased towards slow motion, though less so with the 720p model. most of the big caveats are present in the 480p model. it's basically required for convenient 720p gens imo
>>8655066
the fuck's her problem?
>>8655072
jiggling her butt for (me)
Tech illiterate here trying to get Comfy working. My laptop's several years old. What exactly can I do about this? I don't know what I'm looking for on PyTorch.
>>8655077
How much VRAM do you have? You'll need around 6-8GB or so to run local gens, and if your laptop is old enough it might be too low.
For PyTorch, just follow the exact instructions in the message. Go to the Nvidia link first and then the Torch one.
If your GPU is too old to run locally, there are free online options like frosting.ai and perchance.org and more.
>>8655079The sticker on my laptop says 2GD Dedicated VRAM.
Shit.
>>8655080Based time traveler.
>>8655080Plenty of stuff you can do online for gens these days.
ChatGPT has SORA for image generation and Microsoft has a Bing Image generator too. Those are both the highest quality, but censored to hell so you can't do porn. They both let you gen for free with a free account setup
perchance.org is free no account gens, but is censored as well.
frosting.ai can do uncensored gens, and is free with no account. The quality isn't the best unless you pay though.
CivitAI, Tensor.art and SeaArt.ai all let you do a limited number of free gens if you make a free account. They all have onsite currency that you get a certain amount of for free and can get more by liking, commenting, the usual "engagement" stuff.
NovelAI has the most advanced new model with their v4.5 model, and is doing a free trial. However, it's mostly a pay for site. If you're willing to pay it might be the best option, but you should probably try out all the free options first before you pay for anything.
>>8655020btw, mind sharing the broken examples? training a v2, gonna let it go until it explodes
>>8655085Dang. Alrighty then, really have to get a new computer. My buddy's made some awesome stuff for me, but it looks like it'll be a while before I can do it on my own. Thanks for the list though, I'll take a look!
>>8655092Turns out that perchance.org can do porn too, you just have to let it fail once, click the "change settings" button that comes up, and then turn off the filter.
Since that and frosting.ai don't require any fee or even a free account they're probably the best to start with if you want /h/ content.
Is there a fastest way to switch model like extension?
Nigga, you click the drop down in the upper left and choose the model.
How drop click change down model?
>>8655088Sure
>https://files.catbox.moe/cwrije.png>https://files.catbox.moe/kh0ctq.png>https://files.catbox.moe/l11gh2.png>training a v2, gonna let it go until it explodesholy based
>>8655103lol wtf, i wonder if base vp1.0 has the same issue on the problematic samplers
>>8655030why would i not use both, retard-kun?
>certainly 2 stabilizers will unslop it
kekerino
xir please administer the appropriate Slop Shine to your model before use. it is imperative
Someone did a test a while ago that demonstrated how some characters like Nahida make the model more accurately model the character as small relative to the environment, while others feel oversized. Well, inspired by that, I did my own tests using the kitchen environment, and can confirm that Nahida is really one of the few characters that achieves this. There are a crazy ton of characters that are supposed to be short but noob still renders them like normal sized people.
I wonder what would solve this problem in terms of model architecture. Or is it merely a training/dataset issue?
>>8654811Well done, must admit, didn't think of using Prodigy to figure out the learning rate. Solid work.
>>8655040Thanks. Just wish i had taken the time to correct her small hands.
>>8655186I just looked at danbooru's tag wiki and found the toddler tag. Didn't know that was a thing. Testing it, it does seem to make pretty small characters in the kitchen environment. If the goal is to make a short normal hag, then perhaps adding [toddler, aged down:petite:0.2] to a prompt of X character might work.
>>8655226He is NOT a pedophile. Those are NOT toddlers he's posting. Look, they've got curves!
who is they thems talking to
>>8655235Idk kek. If you just wanted shortstacks you can prompt for those just fine, no need to go through all this.
>>8655233>Look, they've got curvesWhere? No one posted any images. The last non-catbox image post was 12 hours ago...
You're not getting my metadata, Rajeej.
>>8655249Who are you talking to?
>>8655186Kitchen anon here, I also noticed when doing group shots of named characters, ones from the same franchise would usually be fine because they appeared together in some dataset pics. But crossovers would mess up their relative sizes.
>>8655241Point is, small characters often end up huge compared to the environment. Sometimes even if you specifically prompt for loli/shortstack/etc. Picrel.
>>8655269How do we know that's not a custom built kitchen made to accommodate her height?
>>8655271it made her legs longer too
>>8655282Where's the rest of his forearm?
>>8655283idk camera angles
>>8655281>>8655282Reminds me of school days.
>she doesn't 748cm
A-anon..
>>8655362I don't know what that memes
>>8655224>[toddler, aged down:petite:0.2]I just tried this and it seems to be an inconsistent solution. Sometimes it does make the proportions right but most of the time it'll be messed up and closer to a shortstack/loli. Maybe if there was a tag for "normal proportions" then this might work.
Somehow I feel like putting "shortstack" in neg won't help either.
>>8655296i don't like the face but the rest is very cool
luv me some chun li
>>8654749seconding for the prompts
curious how the models I use hold up
I did not think generating pussy would be so lucrative.
NAIfags, what else do you use in your workflow? Besides the in-house enhancement features (which are all terrible and cost Anlas to use properly) I use Upscayl to make 4K+ images. Does anyone actually use Photoshop to retouch images nowadays?
Don't you love when sometimes the same exact gen and inpaint settings you have used many times before suddenly don't work anymore?
I wish I never tried NAI 4.5, impossible for me to go back to local now. Coherent multi character scenes off cooldown and it nails the style I use perfectly..
>>8655937Eventually they will all become SFW only.
/h/ is just a bunch of frauds, and SOTA only comes from NAI. This has never changed in history. First, NAI creates SOTA, and then /h/ just copies it. We've definitely seen this pattern with the latest Flux generation too.
Time to merge with /hdg/. We've gone full circle, sisters.
>>8655051It takes me 4 minutes and 30 seconds on a 5090.
24fps 720x480 in WAN2.1.
The 4090 can't be that much slower. You must have set up something wrong.
>>8654749why does base noob 1.0 look the best
>>8656019Probably because despite what all the armchair ML scientists say, the noob team actually knew what they were doing and everyone who tried to "fix" it only made it worse.
How did 291h get away with it?
Anon, if you were about to train a finetune of noob, which artists would you add to the dataset?
new t5 CSAM var
https://huggingface.co/collections/google/t5gemma-686ba262fe290b881d21ec86
>t5gemma
what's the point of this
>>8656101tamiya akito, CGs not danbooru crap
from danbooru I guess nanameda kei. he kinda works but only on base noob, too weak for merges
>>8656189Didn't you already ask this like a year (and a half maybe) ago?
>>8656243Even if it was the same person, how long should someone have to wait before asking again?
>>8656251Just use answers from that time, there were a lot of them, can't get that now that the whole thread is 3 samefags.
>>8655995It even says that on the guide my guy. The other anon was correct though, just switch to lightx2v.
>720x480No 1280x720.
>>8656019Because you have shit taste?
>>8656286are those 3 samefags on the room right now?
>>8656337We are all you, anon
Gah! Now you're making me angry!
SUFFER!!!
>>8656353Why aren't you using the sdxl vae?
>>8656349If you were you would be posting kino vanilla or 1girl standing gens
>>8656286>the whole thread is 3 samefagshow do we revive /h{d,g}g/
Anyone know of any artists that do thin and "crisp" line art? Not quite Oekaki, but in the same vein
>>8656453just let it merge naturally back into /hdg/
>>8656458I may have some in mind but you need to post an example
>>8656453>>the whole thread is 3 samefagssaar, you are deboonked
>>8654832the poll has 206 votes (one unique ip per vote)
>>8656466I don't really have a concrete example right now, I just remember seeing a picture some days ago and thinking "hey I like the way that looks, I should try to replicate that"
I don't remember when or where I saw it so I can't really go looking for it again, I just have this very faint image in my head, so it's more like a feeling
Not very helpful I know, but I kind of just want to experiment, so feel free to post whatever you have
Does anyone have a snakeoil loaded finetune config for the machina fork of sd-scripts? blud isn't exactly keen on documentation and I wanna see what's possible without sifting through the code.
>>8656477the poll was reposted in every AI thread on the site
>>8656551pp grabbing viroos
>>8656551I don't see it on /lmg/ or /hdg/ so what do you mean by "every"
>>8656562well, the boards that matter.
>>8656562Just a guess, based on seeing it in the non-futa /d/ thread and the photorealism /aco/ thread.
Welp, just upgraded to a 5070ti, and now shit is broken. The rest of the net has no idea apparently. Is there anyone here who has gotten reforge working with a 5070ti?
>>8656572What kind of broken we're talking about?
>>8656572Try deleting your venv and any launch commands you have in your webui-user such as xformers and start over.
>>8656573RuntimeError: CUDA error: no kernel image is available for execution on the device
>>8656583Will try this and report if it works. Thanks for the suggestion.
>>8656611Can't tell if she has too many tails or if it's just some retarded BA design
>>8656611No sauce on that penisdog?
>>8656616Disgusting. Thank you.
can novelai do JP text? I know it technically can but I'm curious if the text encoder(?) was setup correctly to read JP input, or if it just gets automatically translated or something
>>8656636nai thread is down the road, lil' bro
>>8656636it can't, which is really shitty and funny at the same time
>>8656195>tamiya akito, CGs not danbooru crapdo you perhaps have them sorted and willing to upload somewhere?
>>8656636>Note: since V4.5 uses the T5 tokenizer, be aware that most Unicode characters (e.g. colorful emoji or Japanese characters) are not supported by the model as part of prompts.
>>8656669sorry, I don't
just sadpanda galleries
>>8656589No kernel image means your drivers are broken. You need to get drivers that support Blackwell (5000s). I had to manually get an updated driver on my linux machine for my 5000 card. Windows I assume it's just installing the official Nvidia stuff.
>>8656568Also on the dead /u/ thread.
Is there a comparison of the best local models compared to nai 4.5?
>>8656990In terms of what? Because I could generate tons of styles and characters NAI could never do, and also generate text and segmented characters that local could never do
>>8657005Styles and characters yeah. Couldn't care less for text.
what is lil bud vibin' to? :skull: :thinking:
My setup broke but I had fun editing silly shit, enjoy
>>8657126not bad, very cool
a shame about the forced cum on their tits
>>8657131I had to do it to stay within the rules but
https://files.catbox.moe/dns7nr.png
Can we propose trades between generals? I'd love to get Sir Solangeanon and Doodleanon (no, not the pedo one) here in exchange for lilbludskullanon. Thing /hdg/ would go for it?
>>8657164>*Think /hdg/ would go for it?
>>8657164you can just fuck off to that shit hole
I'm trying to build the best general through our front office, anon.
>>8657164Why do you want to make the thread worse?
>>8657171/hdg/ is already peak by your standards
>>8657172How so?
>SirSolangeanonEnthusiastic poster, somehow still not jaded like the majority of us.
>doodleanonMiss that lil potato headed nigga like you wouldn't believe.
>>8657173Nah. Those 2 are keeping that general afloat still instead of letting it sink to enter a proper rebuild phase. Whereas we're the explosive franchise with all the new talent that needs guidance from a few veteran pieces to put it together.
>>8657176I'll give you the point on solangeanon since he do listen to feedback
watch out chuds, you dont want me to uncage right here right now, ive been keeping this thread chaste so far.
>asking for avatarfags
Old 4chan culture is never coming back is it? Rules only exist if someone reports you.
what big boomer yappin bout :skull:
This place is now wholly indistinguishable from /hdg/, except nobody even bothers to post gens.
>>8657263>except nobody even bothers to post gensso just like hdg? most of the gens there are shitposts now, either civitai slop reposts or cathag garbage. guess hgg isn't getting spammed (yet)
>>8657263Maybe you shouldn't have run off the trap (formerly otoko_no_ko) genners just to be left with endless threads of autistic slap fighting over toml files.
>>8657157nice. more noodlenood.
>>8657263we need a third general
>>8657331Bake when? I'm ready to move on, sister.
>>8657331What should we call it? I propose /hdgpg/ hentai diffusion gens posting general.
>>8657263Not even close, the amount of retarded botposting in hdg is unbearable.
on more important news i retrained this lora and its worse now
either my lora training settings are fucked or this dataset is cursed
thanks for listening to my important news
>>8657404me with every lora i bake ever (i cant train the TE)
>>8657404whats the artist/s?
our three funny greek letter friend posted some new models
>>8657409it's for the concept of a dildo reveal 2koma
like this https://danbooru.donmai.us/posts/6868424
i thought it would be an easy train but it breaks down every time
>>8657414was there any sort of discussion about it?
>>8657420tldr: 1.5 may be okay but rest are objectively worse than the og.
>>8657413i shee
but what about the artists used for that image unless its style bleeding from the lora?
>>8657413maybe review your tagging
How do I bake a lora having 0 knowledge about it
A style lora in particular
>>8657661step 0. download lora easy training scripts
step 1. collect images. discard ones that're cluttered or potentially confusing.
step 2. disregard danbooru tags, retag all images with wd tagger eva02 large v3. add a trigger word to the start of every .txt
step 3. beg for a toml. keep tokens set to one
step 4. train
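if you'd rather script the trigger word part of step 2 than edit every caption by hand, here's a minimal sketch (folder layout and trigger word are whatever you use; `prepend_trigger` is just a made-up helper name):

```python
from pathlib import Path


def prepend_trigger(caption_dir: str, trigger: str) -> None:
    """Prepend a trigger word to every .txt caption file in a folder."""
    for txt in Path(caption_dir).glob("*.txt"):
        tags = txt.read_text(encoding="utf-8").strip()
        if tags.startswith(trigger):
            continue  # already has the trigger, don't double it
        txt.write_text(f"{trigger}, {tags}" if tags else trigger, encoding="utf-8")
```

run it once over your dataset folder after the autotagger pass and before training.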
>>8657263why you retards always complain about no posting gens without posting anything at all
>>8657267this is good
>>8657404you were my last hope for making this concept work
>>8657662How do I use lora easy training scripts if I don't have display drivers? Is there a non-gui version?
>>8657728Are you this anon
>>8655077
>>8657731lol no, but I don't have a GPU that I can use for training on my normal PC, only on my headless linux server rig. Wanted to try easy training scripts but once I saw the GUI requirement I gave up.
>>8657735Just install backend and connect to your server from ex ui
>>8657662where download wd tagger eva02 large v3
>>8657795No clue what you mean with "ex ui" but I managed to tardwrangle the code, it's a bit buggy but I can run the UI on my normal machine and send the config to the backend on the server.
you need a lora for that?
>Skyla used gust!
>it's super effective!
>>8658068https://files.catbox.moe/50k7d8.png
>>8658103Just noticed that the literally me on the left had 2navels.
>>8657662Thank you, but it's still very vague
There's a jump from step 1 to step 2, I know that you're supposed to get images and then make a .txt file describing what's in them, but "retag" assumes they're already tagged. Am I missing something? And also, I don't use comfyUI, how do I make use of wd tagger?
>Beg for a tomlI actually have three I found here but don't even know what they do
>>8658162Also, forgot to mention but I have taggui v1.33, which I haven't opened past downloading it as recommended by some other anon
>>8658162>"retag" assumes they're already taggedYes, they are tagged on danbooru, but you should ignore those as they are usually extremely incomplete and redundant.
>And also, I don't use comfyUI, how do I make use of wd taggerGet https://github.com/67372a/stable-diffusion-webui-wd14-tagger and select WD14 EVA02 v3 Large. For manually refining the tags, you can do it with an image viewer and text editor of your choice or use a program like qapyq to handle it more smoothly
>>8658166Got it, I'm assuming I just copy and paste the directory folder with the images and then click interrogate and it'll give me all the .txts for manual editing
On the topic of manual work, what approach works best? Tagging everything in the image, using a trigger word, tagging only the main parts of the image, etc
And also, I've heard that base illustrious 1.0 is the best model for training, is it true?
Sorry for all the questions
>>8658199"base" illustrious is 0.1, not 1.0
most merges include some measure of noobAI, which branched off from 0.1. you'll want to be compatible with those, if not train directly on noob.
otaku, neet, jimiko, mojyo, messy, unkempt, slovenly,
>>8657662I see what you mean with beg for a toml
>>8658229Makes sense, thanks
>>8658331i was away all day after posting that lol apologies if you had questions that went unanswered
btw in the bottom right corner of easyscripts, you can set a URL, that'll allow you to type in a web address to an external server and it'll send it there instead of the localhost
>>8658398I look and hag like this
>>8654860underated gravel posting
please post more rat sex
The LoRA model isn't doing what I want perfectly so I have to draw on top of the generated pic.
Still pleasantly surprised by the result
>>8658199You should tag absolutely everything that you can, autotaggers can give you a solid base but you should always try to add anything they might have missed, especially since in general they seem pretty sloppy at detecting composition tags and sometimes backgrounds too.
You should train on illustrious 0.1, not 1.0, and only if you plan to make your lora compatible with every checkpoint from its family (including noob), otherwise train on noob vpred 1.0 (or eps if for some reason you hate vpred). Don't train on merges, shitmixes and the like, on top of making it way less compatible with other checkpoints, it's possible that the model shits itself during training.
>>8658564>You should train on illustrious 0.1, not 1.0, and only if you plan to make your lora compatible with every checkpoint from its familythis is such BS advice borrowed from the 1.5 era where every model was some weird shitmix of NAIV1. training on illu 0.1 is the same as training on ill 1.0 and using it on noob. noob has been trained significantly past the point of "compatibility"
>>8658564I disagree slightly. I think the philosophy of "bake first, fix tags later" from the OP is still king. It's better that you bake and see what mistakes the lora makes, *then* go back and try to fix those with tags (you're better off deleting those pics instead) than manual tagging.
>>8658344>that'll allow you to type in a web address to an external server and it'll send it there instead of the localhostThat seems pretty useful, what server do you recommend?
>>8658564I do plan on making the lora compatible, since I use primarily two models (291h and an illustrious shitmix)
>>8658566Hmmm, you mean just baking with the autotagger stuff and then going back to fix the model if it sucks? Idk I have 0 knowledge on this
I tried baking one yesterday but it was completely broken, so I'll try it again properly today
Btw, what's the difference between using a trigger word or not? I know there's some flexibility you gain by being able to edit the prompt when a model strictly depends on the trigger word to activate, but I already have a lora scheduler extension for that
>>8658570>what server do you recommend?im not too familiar with that part, but im pretty sure all it is is setting up the backend on another server. only know of it since i used the jupyter notebook and that's how it worked. https://github.com/derrian-distro/LoRA_Easy_Training_Scripts?tab=readme-ov-file#colab
>>8658570I mean eva02 is good enough for 99% of things on its own. If eva doesn't see something in the pic, it means that pic is bad/confusing and you're better off deleting it rather than trying to fix it with manual tags. I've wasted way too many hours on this bullshit and this technique works far better. I only use a trigger word when I'm overwriting an artist the model already knows. And for characters obviously.
>>8658576this
only thing to delete is whenever it spits out conflicting tags. i've had it tag an image both white background & grey background once.
>>8658573I run a 3050 with 8GB of vram, going from yesterday's test it took me 2 hours to bake a lora
Good to know there's an alternative, thanks
>>8658576>I've wasted way too many hours on this bullshit and this technique works far betterLol I believe you
Btw, is there any settings I should be aware of in the autotagger? I just ran it with the default settings yesterday lol
>>8658579Default is fine. 0.35 confidence. If you're using Tagger in reforge set the other setting from 0.05 to 0.
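if you're running the tagger outside the webui and just have raw scores, applying that 0.35 cutoff yourself is trivial. a sketch, assuming you've already got a tag-to-confidence dict out of whatever tagger you run (`filter_tags` and the example scores are made up):

```python
def filter_tags(scores: dict[str, float], threshold: float = 0.35) -> str:
    """Drop tags below the confidence cutoff and join the rest,
    highest-confidence first, into a caption string."""
    kept = sorted((t for t, s in scores.items() if s >= threshold),
                  key=lambda t: -scores[t])
    # booru tags use underscores; captions usually want spaces
    return ", ".join(t.replace("_", " ") for t in kept)
```

then just write the returned string to the matching .txt file.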
>>8658581Ty, all of you
As for easy training scripts, is there any tutorial or guide that explains each setting? There's like a billion of them
Or should I just beg for a toml? lol
I only have 8GB of vram so I gotta take that into consideration as well
>>8658591>I gotta take that into consideration as wellthat'll limit your ability to train styles btw since you can only realistically train the unet, not both unet and TE
>>8658599FP8, batch 1, unet only training will be feasible with 8gb of vram
if you train both you'll go above 8gb vram
>>8658601And what are the implications of that?
Since I can't train the text encoder, I won't be able to use new words and concepts is what I'm guessing
So no trigger word?
training te is unnecessary tho.
>>8658577The files on my side are always double-tagged with simple background and white background.
>>8658602nah the unet will still latch onto the trigger word, it just wont be as strong w/o the te learning it
lil bro is going to train his first te...
>>8658604simple background and white background can work together. white is not grey, however
Honestly, if you're training with 8GB of VRAM, you'd be better off using some random Lora service. Their hardware is pretty powerful within reasonable limits.
>>8658608Unless you also have "gradient background" lol
>>8658608I just remembered another aspect: they are often tagged with two types of hair colors, and moreover, multi-color or various other color tags are frequently attached.
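a quick way to catch that kind of double-tagging before baking is to scan the caption files for mutually exclusive tag groups. a sketch with made-up conflict groups (`find_conflicts` is a hypothetical helper, and note some tags like multicolored hair legitimately coexist with two color tags, so treat hits as stuff to review, not auto-delete):

```python
from pathlib import Path

# example conflict groups -- extend with pairs you keep seeing double-tagged
CONFLICT_GROUPS = [
    {"white background", "grey background"},
    {"blonde hair", "brown hair", "black hair", "blue hair", "pink hair"},
]


def find_conflicts(caption_dir: str) -> dict[str, list[set[str]]]:
    """Per caption file, list the conflict groups with 2+ tags present."""
    hits: dict[str, list[set[str]]] = {}
    for txt in sorted(Path(caption_dir).glob("*.txt")):
        tags = {t.strip() for t in txt.read_text(encoding="utf-8").split(",")}
        clashes = [g & tags for g in CONFLICT_GROUPS if len(g & tags) > 1]
        if clashes:
            hits[txt.name] = clashes
    return hits
```

run it on the dataset folder and eyeball whatever it flags against the actual images.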
>>8658612I got a permaban from collab and making new google accounts is a huge hassle these days
>>8658612Alright, I'll do it then
>>8658615>I got a permaban from collabfucking how? did you try to mine some shitcoin or something?
>>8658619nta but iirc they perma banned anyone using imagegen via colab
>>8658622yeah, figures
Is it worth or needed to pay for colab pro?
>>8658624You can use free compute; their pro prices aren't very good.
If you plan to pay for compute, something like Runpod and a bunch of other similar services will be cheaper.
Switched back to an updated reForge from forge today, and I'm already getting insane placebo that the Euler A Comfy is way better than the Euler A A1111 version. I don't want to xyz all the sampler and schedulers for the 50th time... but the possibility that the forge samplers were somehow broken is too big to ignore.
>>8658635>ancestral samplersnot once
My ancestrals are blurring the picture with every step, determinism-kun. can you say the same?
>>8658644my sampler is stochastic
my steps are high
>>8658639>>8658635We need incestral samplers
>>8658635Any c*mfy sampler is fucked for me, it does weird things with the negs like only following some parts of it or none at all
>>8658676no, regular negs
>browsing for cute characters to gen
>click on the tokitsukaze
>see a few images of her being constricted by arbok
>ok whatever
>go 10 pages down the line
>still seeing arbok constriction images
>"wtf?"
>search for "tokitsukaze_(kancolle) pokemon"
>literally 12 fucking pages
>turns out someone has been constantly commissioning this exact image for 4 years and has not stopped
What in the god damn.
>>8658684the power of autism
>>8658684Absurd! Humanity belongs in its rightful place.
>>8658734Finally an actual NAI gen. Not bad. The eyes aren't as shit as v3 ones were.
>>8658678>using negsOh nyo nyo nyo
>>8658502We just don't get many arknights in general.
How do I go about tagging an ai-generated dataset for a lora? The images themselves have this pvc-figure / doll like aesthetic, is there a tag for these types of images?
>>8658635My Reforge is from around.. January maybe? A few months before Panchito abandoned us anyway. When I still used Ancestral, I did placebo myself into using the comfy version. Maybe just better gacha but that's the name of the hobby after all.
tried nai for a month with the final full release and it's really fucking bad. it "can" make nice stuff but it's so fucking schizophrenic and inconsistent god damn. inpainting is a nice but not huge upgrade over v3 and of course still completely above anything local has but that alone is not worth the price.
>>8658934But local is consistent and has high resolutions so how is it better? Text?
>>8658909ideally you'd use the prompts, if you have them
>pvc-figure / dolltag is figure_(medium)
if it's a style lora, I'm still not sure about using the aesthetic tags. For example with "3D" it becomes a trigger tag and the lora does almost nothing without. But if I don't tag it, it won't quite get the style and stay more flat.
>>8658937I think the last sentence was all about inpainting, not overall better.
>>8658942Nah, don't got the prompts, basically a style lora from a pixiv user
>>8658909Run it through an auto tagger and then add stuff you want to associate, it might not stick since ultimately it's weights on the model but it can't hurt it.
Man, is 291h broken when it comes to detail refinement or am I goofing?
>>8659040>Do nice gen>i2i to get more details>Low denoise>Blurry and no details>Medium denoise>Still no details>High denoise>Good details, changes the entire imageInpainting works but it's a pain in the ass
>>8659042>Inpainting works but it's a pain in the assStop being lazy
>>8659044No
If I didn't want to be lazy I wouldn't be tinkering with AI
>>8659042Oh. Were you the same anon saying the same a few threads back? If not, I'll tell you what I told him, I tile upscale through CN and have no issues. Not sure about straight i2i and hiresfix.
>>8659047Nope, some anon recommended it on slop and I decided to test
>I tile upscale through CNWhat's that?
>>8659053Not at my PC to share my settings so maybe some other anon can help with that meanwhile, but it's this, used through i2i
>https://civitai.com/models/929685?modelVersionId=1239319
>>8659060How do I even use that?
I use reforge btw
>>8659062You put that in a folder called ControlNet in your models folder. Then in your i2i settings, there should be a section called ControlNet in which you select the model you downloaded and choose tile_upscale. As far as settings go, I don't know them off the top of my head.
>>8659062Controlnet is integrated in reforge so you go to the box, select tile, select the model, then put control strength to 1 and set your control start step to 0 and end step to 0.8. Increase end step if you get bad anatomy or hallucinations. Save the settings as a preset once you're happy with them.
>>8659089What preprocessor though? Also control mode and resolution?
>>8659093tile resample and balanced. The resolution is your choice. More is better but slower and increases chances of bad anatomy.
>>8659094Cool, seems to have worked
Do I use it for sketch, inpaint and inpaint sketch too?
>>8659098I just use those settings upscaling.
>Doesn't work with loras
wth man why can't I just upscale like normal in this fucking model
Why is latent upscale bad again? I never got extra nipples or whatever with the right amount of denoise.
Someone recommend me a non-sloppy model that isn't a goddamn nightmare to work with
I just want to do my regular workflow
merge greek letters 1.5 with 1.0 2d custom at a 60/40 ratio, receive kino
>>8659110>doesn't work well with loras*It does work.
>>8659126Exclusively gives me nonetype
>>8659129Does not work together with multidiffusion on reforge.
>>8659130Not using multidiffusion either
But since you said it works, I guess I should try and see what's causing the issue
>Adetailer just sometimes stops working or doesn't work at all til I restart the cmd prompt for reforge
It's such a weird / annoying problem and it's only really started happening when I swapped over to vpred models.
any tips for the "Torch is not able to use GPU" error? checked google and a lot of people have the issue with no clear answer, tried all the various suggested things with no results. I have a 3080. I am on a fresh install of windows 11.
>>8659141do you have the right driver for your graphics card?
r3mix is basically superior vpred
>>8659142yeah updating it was one of the first things I tried, rebooted pc after and same error
>>8659141What are you trying to use, A1111/Forge/Reforge or something else?
When you say you tried all the various suggestions, what did those entail? The most promising ones that come up for me are:
https://stackoverflow.com/questions/75910666/how-to-solve-torch-is-not-able-to-use-gpuerror
https://www.reddit.com/r/StableDiffusion/comments/z6nkh0/torch_is_not_able_to_use_gpu/
It seems like it usually is a Torch version / GPU driver version mismatch. If you already did the update like you said in
>>8659145 did you check that the versions match? You might want to try reinstalling everything from scratch if you've updated.
Is your 3080 GPU 0? If your motherboard has an integrated graphics card that might be GPU 0 instead which might mess things up.
>>8659149Turned out I needed a specific Visual C++ redistributable I think? It's still loading but it didn't give me that error anymore. I had tried Stability Matrix and it auto installed that redist among other things, and now reforge works. I had combed the A1111 and reforge pages to make sure I had all dependencies but I guess they don't mention that.
Why didn't anyone tell me epsilon + cyberfix is superior to vpred in every single aspect
>>8659162now post one of your gens lil bud
>cyberfix
P-panchito.. onegai..
bro disappeared into the sepia aether
I've never had issues because the Torch version and GPU driver version didn't match. On Linux, I ran into problems when the Torch install command was incorrect.
>>8659175Best gen in the thread.
>>865917510/10 lil bud, keep it up
Nevermind it doesn't get my favorite artist
>>8659143Link me. I'll test it's trap (formerly otoko_no_ko) capability.
>>8659186https://civitai.com/models/1347947
>>8659204I am always lurking
>>8659122which sampler/scheduler? cfg, quality tags/negs?
>>8659224use it exactly like you would 1.0 2d custom, I like euler a cfg++ 1.5, simple, and quality tags/negs depends on style but generally newest is safe to keep as a quality tag
am i the only one who feels like their gens get worse with every new model/new experimentation? like i look at gens i did many months ago and they look marginally more interesting/appealing
>>8659248It's the tinkertroon fallacy, where the process of genning becomes more important than the actual results. This is commonly seen in cumfy users, who build impossibly convoluted noodle behemoths to gen fried 1girl, butiful saarground noisy crap.
>he goofed when he should have gooned
>>8659248yes, thus I'm going back to sd 1.5 with nai v2
Could I request someone bake me a lora if I had an "ok" dataset ready?
I'd like to do it on my own but for now I'm sorting through the bullshit that is the process of making one to begin with
>>8659263Link dataset, the worst that can happen is that people call you a faggot
>>8659280Here, i have no clue what i'm doing: https://litter.catbox.moe/0uewkgjbmyh9yoey.rar
>>8653556No, I'm pretty sure it's because the people into that shit are disproportionately more desperate for content because of how dogshit the style is and therefore there's less non-AI content out there with it
>>8659367based horse fucker
hey, retard here, need some help with genning
last time I was here was during the pony days, now I see illustrious is the go-to model, should I use it with the sdxl vae?
Also now I see there's a sampling method and a schedule type. How can I turn off the schedule type? I used to make gens with Euler A, now I'm using that one and SGM uniform in schedule type since it seems to give the best results. I'm using forge because I'm too much of a brainlet to use reforge
thanks
Also, I'm using this illustrious, is this one alright? https://civitai.com/models/795765
>>8659455
There is no reason to use a VAE override on SDXL, just leave it blank.
You always used a schedule type, only A11/Forge hid it from you. It was "karras" for the dpmpp sampler line, and "normal" for everything else.
That is the correct illustrious, though most people have moved on to noobAI which is a further finetune.
Start here https://civitai.com/models/1301670/291h
or https://civitai.com/models/1201815?modelVersionId=1491533
then once you're comfortable consider moving onto the base model https://civitai.com/models/833294/noobai-xl-nai-xl
it's harder to use than merges, kinda like pony and autismmix/reweik
>>8659461
I guess he doesn't. We better tell him, before he makes a fool of himself in front of the whole thread.
>>8659464
yeah, about that...
how do you guys find 200 pics in a coherent style to bake an artist lora? I am lucky if I can find 40 after removing shit that would be incomprehensible to the model
>>8659495
I usually use hires patreon/fanbox rewards from the last 2-3 years
>>8659497
Yeah, but I'd rather have more pictures, even if just to do crops.
>>8659248
only my old 1.5 gens are worse than my current sdxl gens. almost all my sdxl gens have the same ""quality"", but some gens with some artist mixes in ""old"" vpred models have noticeably worse colours, which is to be expected
>>8659288
https://mega.nz/folder/gTtRXRhI#JlvWr2DBl1bQpRzMO4MyoQ
Different Anon than the one who said they were working on it. The one that ends with -TW uses the activation tag you had in your dataset while the one without -TW doesn't have an activation tag. I also threw a grid in the folder that has the Lora without an activation tag Vs the Lora with the activation tag. The first image in the grid is without an activation tag and the second image is with an activation tag and so forth.
You can also throw in white pupils to get a bit closer to the way the artist does their eyes.
tensor is fucking dead
https://tensor.art/event/NSFW&CelebrityAdjustments
>>8659563
oh nyo, anyway
I had this stupid idea the other day
>>8659559
lost the metadata in an edit but it's on 102d custom, a 0.6 denoise img2img of https://cdni.pornpics.com/1280/1/45/36999346/36999346_011_3465.jpg
>>8659585
Man, that pose doesn't work normally? That really sucks.
>>8659585
Maybe it does? I'm just going through my old porn collection and seeing what they look like in anime style.
>>8659557
Different Lora baking anon here, just wondering what the difference is between an activation tag and no activation tag?
As far as I understand, there are three (?) main ways of training a style lora.
1. No tags for any image + only train unet
2. Normal tagging for all images (exclude style tags) + train unet/TE or only unet
3. Normal tagging with activation style tag + train unet and TE
1 -> Rigid lora with heavy style "burned" into the unet, but very consistent since it isn't tied to any tags (always activates)
2 -> Slightly more flexible, but the style is spread out over the tags in the dataset. The more tags from the dataset you include in your gens, the heavier the style. It can potentially learn a "sub"-style tied to specific tags, i.e. 1girl activates the 1girl sub-style and leaves out the style for, say, 1boy.
3 -> Less flexible than 2, but more consistent since the activation tag will eat up most of the weight updates (trained style). If trained properly, style should be minimal if not using the activation tag.
Am I understanding different style loras correctly?
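The three tagging schemes above can be sketched as a caption writer. Everything here (function names, the `sks_style` placeholder tag, one .txt per image in kohya-style layout) is hypothetical, just to make the difference concrete:

```python
import os

def make_caption(tags, scheme, activation_tag="sks_style"):
    """Build one caption line for a style-LoRA dataset image.

    scheme 1: empty caption -> style burns into the unet, always activates
    scheme 2: normal booru tags, style tags excluded
    scheme 3: activation tag prepended to the normal tags
    """
    if scheme == 1:
        return ""
    if scheme == 2:
        return ", ".join(tags)
    return ", ".join([activation_tag] + tags)

def write_captions(image_dir, tags_per_image, scheme):
    # kohya-style layout: 001.png sits next to 001.txt
    for name, tags in tags_per_image.items():
        with open(os.path.join(image_dir, name + ".txt"), "w") as f:
            f.write(make_caption(tags, scheme))
```

So for scheme 3 an image tagged `1girl, solo` gets the caption `sks_style, 1girl, solo`, and for scheme 1 every caption file is empty.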
Is 102d custom still peak? The guy made a new one; I tried it and it's some hot ass.
>>8659585
>lying on side, sex from behind, spooning, one leg up
etc should be about right
>>8659587
>>8659634
Yeah I will test. Baking so I can't check right now.
>>8659630
Yes it's basically this.
>>8659630
>2 -> Slightly more flexible, but style is spread out over tags in the dataset. The more tags from the dataset you include in your gens, the heavier the style. Potentially learn "sub"-style tied to specific tags, I.E. 1girl activates the 1girl sub-style and leaves out style for say 1boy.
This shouldn't happen unless your config is shit
>>8659630
Here's a weird fact: TE doesn't do what you think it does. You can teach the model new words even when training Unet only.
>>8659646
yeah, if you train for 10x longer, and even then your model will be schizo like novelai's
I never saw a positive impact with TE training. the faster convergence never made up for the shitty hands
>>8659649
Nope. And you can go back all the way to SD1.5 character loras, most were trained Unet only on characters the model didn't recognize at all.
You can try that on noob as well, train a character lora Unet only with some made up name like f78fg3f and it'll work as her activation tag.
>>8659630
>1. No tags for any image + only train unet
for whatever reason, with unet only, i still got better results with a trigger word. no trigger word made it barely learn the style
>>8659646
Sure you can, but if your tag happens to be completely undertrained in the embedding space, i.e. the token "x4jd4" has its closest cosine-similar vectors on concepts completely unrelated to your intended target, then you're forced to realign a lot of U-Net weights to account for that outlier embedding. I think training the TE makes perfect sense if you want to slightly realign your vector embeddings, but you have to stop TE training before you stop U-Net training to avoid misalignment. A very low TE learning rate, stopped at 25%-50% of the total training steps, seems to work pretty well.
Now, the downside is that you are re-aligning all the well-trained tags/tokens as well (1girl, etc). This is a solved problem in theory, I just haven't seen anyone actually implement it for any of the popular trainers.
What you would do is create a new embedding for your style activation token (textual inversion), update only that embedding in the TE, then train the U-Net with the embedding as a tag in your dataset. That would be the "ideal" style/concept LoRA training setup in my mind.
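A toy numpy illustration of why a fresh token is a problem; random vectors stand in for real CLIP embeddings here, so the dimensions and numbers are made up. A well-trained tag points near the concept direction, while a made-up token like "x4jd4" starts out essentially orthogonal to it, and that gap is what either the TE or the U-Net has to absorb:

```python
import numpy as np

def cosine(a, b):
    # cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
dim = 768  # CLIP-L hidden width

# stand-in for the direction of the concept you want the tag to mean
concept = rng.normal(size=dim)

# a well-trained tag: close to the concept direction, small offset
trained_tag = concept + 0.1 * rng.normal(size=dim)

# an undertrained/made-up token: pure noise, unrelated to the concept
fresh_tag = rng.normal(size=dim)

sim_trained = cosine(trained_tag, concept)  # close to 1
sim_fresh = cosine(fresh_tag, concept)      # near 0 in high dimensions
```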
>>8659585
>>8659634
I've done spooning before a few times, works alright.
>>8659288
>>8659490
here's you LoRA saar
https://files.catbox.moe/q8xyxg.safetensors
Like other Anon, no trigger but "white pupils, white skin" can help.
>>8659721
See that shit with the hair? I was *this* close to scrubbing those out but I was already annoyed after having to crop each pic properly. I don't want every "very long hair" to do that shit but it's so common in those pics. Guess I have to start over.
>>8659631
all current shitmixes are more or less the same
>>8659742
Yeah, it just feels like at this point I'm chasing ghosts after all the ones I've tried. I can get kinda what I want on non-vpred but I still can't dodge the shiny skin, and vpred looks cleaner but then half the time it's vague or I still get bad hands more than non-vpred.
>>8659749
are all of you allergic to inpaint or something
>>8659750
if i inpaint, how am i supposed to feel smug about my model being superior!?
>>8659750
That's more time I'm not genning the next image though.
>>8659566
Cute Kula. Post more Kula. Even if it's /e/.
>>8659721
>>8659557
i kneel, thank you anons
>>8659495
>how do you guys find 200 pics in a coherent style
you don't, i got turbo lucky. Most of his stuff is not particularly hard to edit with photoshop and is mostly white backgrounds
also most artists can't into basic organization and/or consistent posting, you might be missing pictures buried somewhere
3 greek letters 1.5 is kinda alright. Genned with it a bit yesterday and wasn't immediately repulsed so I'll give it an extensive test later.
>>8659872
1.5 as in a 1.5 model?
>>8659876
It's better to avoid the schizos and stick to 102d.
Just... merge greek letters 1.5 and 1.0 2d custom at a 60/40 ratio
>>8659881
Except 291 w lora & 291h mog 102d, tourist.
>>8659872
Saw some anon mention it yesterday.
>>8657417 Missed the initial conversation.
>https://civitai.com/models/1217645?modelVersionId=1976509
>>8659885
that's almost what r3mix is
>>8659887
hmm, actually kind of close, yeah
The one thing I will say about 3 greeks is that it needs muted color in the neg. I liked everything about it in simple gen testing but it looked washed out. Added that and then it started to click. Again, I'll give it a big test later.
>>8659885
>>8659887
>r3mix
Tried it from anon's suggestion yesterday and it didn't clear the initial test phase for me. It's VERY good at anatomy, I don't recall a single issue with hands once. But it's one of those harsher models that can't do brush/sketch mixes too well, even lowering CFG to levels where shit just starts floating around. Looked at the bake and
>chromayume
Makes sense.
Does anyone ever use Pony still? Haven't touched it in a while, not sure which scheduler to use. I'm guessing either normal or SGM?
Was hoping to test some styles on it.
>>8659894
Isn't it Karras?
>>8659892
>But it's one of those harsher models that can't do brush/sketch mixes too well
On the other hand the meme merge I've been shilling is pretty good at them. This gen and
>>8658117 used it
>>8659903
Shill it to me, my good sir.
>>8659917
Yes. And it uses 1.0 2d custom's CLIP, thought I should mention that in case it matters
>>8659885
I don't know how to merge, teach me
Has anybody used https://civitai.com/models/99619/control-lora-collection ? It seems like one of those "guidance" things like FreeU, Perturbed-Attention Guidance, Smoothed Energy Guidance, etc., but in LoRA form.
The CivitAI page recommends using it at half strength. It seems like it might improve things sometimes? Like every other guidance thing I've tried, it's inconsistent on whether it's making things better or just making random changes.
>>8659926
Nah it basically is a Pony "slider" for Illu/Noob
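On the half-strength recommendation: applying a LoRA at strength s just scales its low-rank delta before adding it to the base weight, W + s·(alpha/rank)·(up·down). A toy numpy sketch with made-up shapes, not the actual implementation in any webui:

```python
import numpy as np

def apply_lora(base_w, lora_down, lora_up, alpha, strength=0.5):
    """Merge a LoRA delta into a base weight at reduced strength.

    Effective weight: W + strength * (alpha / rank) * (up @ down),
    which is what "use it at 0.5" amounts to.
    """
    rank = lora_down.shape[0]
    delta = (alpha / rank) * (lora_up @ lora_down)
    return base_w + strength * delta

rng = np.random.default_rng(1)
out_dim, in_dim, rank = 8, 8, 2
W = rng.normal(size=(out_dim, in_dim))
down = rng.normal(size=(rank, in_dim))   # rank x in
up = rng.normal(size=(out_dim, rank))    # out x rank

half = apply_lora(W, down, up, alpha=rank, strength=0.5)
full = apply_lora(W, down, up, alpha=rank, strength=1.0)
```

Halving the strength halves the delta, so the behaviour shift is proportional rather than all-or-nothing.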
>>8659923
thanks, will try it out. can you box
>>8659903 or
>>8658117 ?
mostly cause i want to know if you use any quality tags, negs, snakeoil, etc
>>8659909
Thanks for the idea, anon. While I didn't merge your models, I decided to try 291 + 3 greeks. We're SO back.
>>8659957
euler A CFG++ 1.5 simple, guidance limiter sigma start 25, sigma end 0.28
quality tags: newest, very awa
negs: sepia, old, early, skinny, watermark, @ @, bkub, shiny skin,
I haven't really done any testing on those for this model though, just treating it like 1.0 2d and it's working well so far
>>8659924
Unless you use comfy too you're going to have to find out how yourself I'm afraid
>>8659969
I use comfy. Teach me instead.
>>8659972
You'll need this https://github.com/Miyuutsu/comfyui-save-vpred
2 load checkpoint nodes, 1 modelmergesimple node, 1 save checkpoint v-pred node, set the ratio to 0.4, connect noodles, run
>>8659980
0.6 if you have greek as model1 right?
>>8659981
The merge may in fact be 60% 1.0 2d and 40% greek then, I am retarded
>>8659981
Nah. 0 ratio is basically 100% model A and 0% model B. If you set it to 0.6, you're getting 40% model A and 60% model B.
>>8659987
https://comfyui-wiki.com/en/comfyui-nodes/advanced/model-merging/model-merge-simple
this says it's 100% model 1 if you use ratio 1
>>8659987
yeah, i feel like comfy merge is the exact opposite of webui merge
just c*mfy thing being contrarian
>>8660006
Wait so it's backwards in reforge? The comfy version makes more sense though. Model A:Model B is Model A/Model B.
>>8660001
>>8660006
Yeah I should have said that's how it is on reforge.
>>8659984
did you use comfy or reforge? just wanted to double check
>merge can't do darks as good anymore
More snakeoil...
>>8660030
Comfy, with greek letters 1.5 as model 1, 1.0 2d custom as model 2, and the ratio set to 0.4. I guess that means it's 60% 1.0 2d custom and I misunderstood how the node works
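To pin down the math being argued about: going by the comfyui-wiki page linked in the thread, ModelMergeSimple with ratio r gives r·model1 + (1-r)·model2 per tensor, so ratio 1.0 keeps 100% of model1. A toy sketch of that convention (dict of numpy arrays standing in for a real state dict):

```python
import numpy as np

def merge_simple(sd1, sd2, ratio):
    """Weighted average of two state dicts, following the convention on
    the comfyui-wiki page: ratio = 1.0 keeps 100% of model1."""
    return {k: ratio * sd1[k] + (1.0 - ratio) * sd2[k] for k in sd1}

a = {"w": np.ones(4)}    # stand-in for model 1
b = {"w": np.zeros(4)}   # stand-in for model 2
full_a = merge_simple(a, b, 1.0)   # 100% model 1
mix = merge_simple(a, b, 0.4)      # 40% model 1, 60% model 2
```

So a ratio of 0.4 with greek as model 1 really does mean 40% greek / 60% of model 2, which matches what the wiki says.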
>>8659317
Ummm box please?
any noobai model that understands how to do triple anal properly?
>>8660134
why you do dat lil bro?
>>8660134
I didn't use any artist tag and quite literally copy-pasted all the tags from one existing image on the booru, but it seems like it does?
>https://files.catbox.moe/tvy9q6.png
It's not gay if only the tips touch.
>>8659969
newest, very awa, artist, tags? or do you use artist before
>>8660134
Shitmixes don't add knowledge, they dilute it.
Also, if you add more emphasis to "double anal", you get an increasing number of cocks in the hole.
>Train lora on sfw artist
>Works perfectly on sfw
>Falls completely off on any nsfw
Nice
Not exactly related, but I don't want to deal with /sdg/ faggots. What model is the best for doing backgrounds? No characters just some pretty landscapes and shit.
>>8660144
style, BREAK, 1girl prompts, BREAK, 1boy prompts, background, quality tags is basically how I prompt
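For reference on that prompt layout: in A1111-style frontends, BREAK pads out the current 75-token chunk and starts a new one, and each chunk is encoded separately, which is what keeps the 1girl block from bleeding into the 1boy block. A toy sketch of just the splitting step, not the real tokenizer logic:

```python
def split_prompt(prompt):
    """Split an A1111-style prompt on the BREAK keyword; each resulting
    chunk gets its own 75-token window in the text encoder."""
    return [chunk.strip() for chunk in prompt.split("BREAK")]

prompt = ("artist style BREAK 1girl, blue eyes BREAK "
          "1boy, faceless male BREAK classroom, masterpiece")
chunks = split_prompt(prompt)
```

Here `chunks` is `["artist style", "1girl, blue eyes", "1boy, faceless male", "classroom, masterpiece"]`, four separately encoded blocks.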
>>8660221
it doesn't do anime style landscapes that well unfortunately
>>8660262
What if I don't want to give the roach any money?
>always thought seitoedaha had a very nice style
>he never draws the girls I like
AI is a blessing to scratch that itch
>>8659566
seconding catbox, I like this a lot
Any loras to help with see-through? A lot of the time what's supposed to be underneath just ends up going on top, especially bra/bikini straps, and the bikini under clothes tag doesn't seem to do much to help
>>8660345
It's very obviously inpainted.
>>8660365
The tag is "bikini_visible_through_clothes". "under clothes" just means they're worn under non-transparent regular clothes and peek out.
I have not tried the bikini version and it has few images, but "bra visible through clothes" is very reliable as long as you don't prompt other bra tags alongside it.
>>8660169
Unironic skill issue.
>>8660260
You're not wrong, but take a look at these. Flux leans heavily toward 3d, but I think these came out good.
https://files.catbox.moe/fib8r5.png
https://files.catbox.moe/m3knnv.png
https://files.catbox.moe/gvlk7o.png
https://files.catbox.moe/avirbj.png
https://files.catbox.moe/9tpeb7.png
These are just the ones I've done lately. I've been using flux for textgen story backgrounds for a year now and it works well. Also fuck /sdg/ and fuck /dalle/
>>8660428
Based chastiser.
>>8660435
Those are cool wtf
>>8660435
Damn.
I wish Noob had that level of coherency and sharpness, not just for backgrounds.
>>8660437
>>8660442
Yeah, adding studio ghibli to the prompt really helped push it closer to 2d. Chroma will save us though, I can feel it. Flux models can really follow your prompts.
https://files.catbox.moe/korwws.png
https://files.catbox.moe/59nsan.png
https://files.catbox.moe/sswzye.png
https://files.catbox.moe/v2d7yx.png
https://files.catbox.moe/w74jda.png
>>8660445
I really hope so, those backgrounds are way better than anything I have managed to pull off
>>8660484
chroma won't save shit. it's incredibly poorly trained and does not hold up compared to flux dev. flux as a model had potential, but not with whatever the fuck chroma is doing
>>8660388
Ah, no wonder I wasn't getting what I wanted, thanks
>>8660435
yeah, i played with flux quite a bit, but it doesn't really look quite how i want
>>8660484
a finetune of chroma could work, chroma itself won't