
Thread 8653057

679 posts 178 images /h/
Anonymous No.8653057 >>8654029
/hgg/ Hentai Generation General #012
High contrast edition

Previous Thread: >>8647788

>LOCAL UI
reForge: https://github.com/Panchovix/stable-diffusion-webui-reForge
Comfy: https://github.com/comfyanonymous/ComfyUI

>RESOURCES
Wiki: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki | https://comfyanonymous.github.io/ComfyUI_examples
Training: https://rentry.org/59xed3 | https://github.com/derrian-distro/LoRA_Easy_Training_Scripts | https://github.com/bmaltais/kohya_ss | https://github.com/Nerogar/OneTrainer
Tags: https://danbooru.donmai.us/wiki_pages/tag_groups | https://danbooru.donmai.us/related_tag
ControlNet: https://rentry.org/dummycontrolnet | https://civitai.com/models/136070
IOPaint (LamaCleaner): https://www.iopaint.com/install
Upscalers: https://openmodeldb.info
Booru: https://aibooru.online
4chanX Catbox/NAI prompt userscript: https://rentry.org/hdgcb
Illustrious-related: https://rentry.org/illustrious_loras_n_stuff
Useful Nodes/Extensions: https://rentry.org/8csaevw5

OP Template/Logo: https://rentry.org/hgg-op/edit | https://files.catbox.moe/om5a99.png
Anonymous No.8653062
>>8653048
>1c is trained on the main cluster
>uploaded 7 months ago
They didn't have compute then afaik, one of euge's v-pred experiments?
Anonymous No.8653063
>dirty dozen
Anonymous No.8653101 >>8653124
>>8650468
What do you use to merge multiple custom masks like that? I think Attention Couple (PPM) node works but I'm not entirely sure if it's the best one.
Comfy Couple is just for 2 rectangle areas, right?
Anonymous No.8653124 >>8653129 >>8653133
>>8653101
Conditioning (Set Mask), it's a built-in node. It takes the conditioning for one area, and a mask telling it where the area is.

If you're also using controlnet then the whole mess together would look like this: https://litter.catbox.moe/jnsdso5o9352yq46.png
I use the mask editor built into Load Image node, you can just right-click it with an image loaded.
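For intuition, the masked-couple idea mostly boils down to compositing per-region denoiser outputs by their masks. A minimal numpy sketch of that compositing step (just the concept, not ComfyUI's actual node code; `composite_regional` is a made-up name):

```python
import numpy as np

def composite_regional(preds, masks):
    """Blend per-region model outputs by their masks (latent-couple idea).

    preds: list of (H, W) arrays, one denoiser output per regional prompt.
    masks: list of (H, W) float arrays marking where each prompt applies.
    Overlaps are averaged by normalizing the mask stack to sum to 1.
    """
    masks = np.stack(masks).astype(float)
    total = masks.sum(axis=0)
    total[total == 0] = 1.0          # avoid div-by-zero in uncovered areas
    weights = masks / total
    return sum(p * w for p, w in zip(preds, weights))
```

Attention couple differs in that the masks steer cross-attention instead of hard-compositing the latents, which is why it blends regions more softly.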
Anonymous No.8653125 >>8653127 >>8653444
What are the /h/ approved shitmixes again? it's been a while since I've been here and I'm getting into genning again.
Anonymous No.8653126 >>8653130 >>8654043 >>8654187
Anonymous No.8653127 >>8653130
>>8653125
how
how this fucking question gets asked EVERY SINGLE THREAD
HOW
Anonymous No.8653129 >>8653152
>>8653124
I was interested in what you are using after combining the conditionings. Turns out you are using Attention Couple, but when I try to install missing nodes it installs Comfy Couple instead and your node is still missing. They have different inputs, and it looks like Comfy Couple overrides the areas I set back to rectangles.
Anonymous No.8653130 >>8653135
>>8653126
Based Wai-KING.
>>8653127
Because the OPs are useless?
Anonymous No.8653133 >>8653148
>>8653124
Just want to make sure since it doesn't install automatically for me. Are you using this node atm? https://github.com/laksjdjf/attention-couple-ComfyUI
Looks like that one got archived and is now part of some large node compilation.
Anonymous No.8653134
Anonymous No.8653135
>>8653130
>Because the OPs are useless?
The autistic obscurantist mind cannot grasp this simple concept
Anonymous No.8653142
Anonymous No.8653145 >>8653256
Anonymous No.8653148 >>8653156
>>8653133
Yes. It's deprecated now in favor of https://github.com/laksjdjf/cgem156-ComfyUI but I couldn't figure out what the "base mask" is about, so I kept using the old one. Not like it needs new features.
Anonymous No.8653152 >>8653156
>>8653129
The node is optional btw; if you skip it you'll be using latent couple, which is a lot stricter. Attention mode prefers making a consistent image over sticking to the defined areas.
Anonymous No.8653156
>>8653148
Thanks, I'll test this one against PPM version I got working.
>>8653152
Yeah I figured. It does bleed styles more with attention thingie, but it probably works better this way for what I'm doing. This shit really blows up my workflow size, holy shit.
Anonymous No.8653256
>>8653145
Nice. Got a box?
Anonymous No.8653271 >>8653275 >>8653325 >>8653342
Please, does anyone know what model and LoRA this guy is using? I've wanted to copy this exact style for quite some time now but I can't get it right.

https://x.com/OnlyCakez1
https://www.pixiv.net/en/users/113960180
Anonymous No.8653275 >>8653360
>>8653271
Just make a lora
Anonymous No.8653325
>>8653271
nyalia and afrobull
Anonymous No.8653342 >>8653556
>>8653271
I wonder why these requests are always for the grossest grifter styles possible.
Anonymous No.8653360
>>8653275
give toml
Anonymous No.8653444 >>8653445
>>8653125
291H
Anonymous No.8653445
>>8653444
love this lil nigga like you wouldn't believe
Anonymous No.8653452 >>8653477 >>8653521
How should I approach training a lora for a character that noob vpred already knows but not really well? Should I train the TE and use the original tag or make a separate one? What dims/alpha should I set?
Anonymous No.8653477
>>8653452
no te, use the original tag, 8/4
Anonymous No.8653521
>>8653452
I've only done this for styles, when I reused the existing tag I only needed like a quarter of my usual steps. Don't think you need to adjust your config in any way, and it's always better to train more and save the earlier epochs too in case they're enough.
Anonymous No.8653556 >>8653561 >>8653613 >>8659314
>>8653342
Because they're the most popular?? Face it, your aesthetic "taste" is clearly in the minority.
Anonymous No.8653561
>>8653556
Which is why these threads are so slow. I think a better question is why ask for those styles here? The people who would know are those who enjoy them.
Anonymous No.8653613
>>8653556
I mean that slopper isn't even popular
Anonymous No.8653633
Anonymous No.8653635 >>8653637
>6 fingers
Anonymous No.8653637
>>8653635
based
Anonymous No.8653638
>nyogen
Anonymous No.8653642 >>8653644
Post your fingers if they are so great.
Anonymous No.8653644
>>8653642
but my fingers are /aco/
Anonymous No.8653691 >>8653748 >>8654019
Anonymous No.8653748
>>8653691
cute pink drool
Anonymous No.8653770
Move along friend, this train car's full
Anonymous No.8653780 >>8653796
Anonymous No.8653796 >>8653801
>>8653780
save some CFG for the rest of us
Anonymous No.8653801
>>8653796
No
Anonymous No.8653839 >>8654005
>kagami bday
>no sd on pc
grim
Anonymous No.8653845 >>8653852 >>8655040
Been genning in my corner since Noobai released, what's new on the block?
Anonymous No.8653852 >>8653871
>>8653845
>what's new on the block
For local, nothing.
Anonymous No.8653871
>>8653852
Oh well, still happy that I can generate random shit that comes to mind. Has anyone here trained concepts on an eps model? Characters and styles seem to come out okay but concepts refuse to take for me.
Anonymous No.8653873
Anonymous No.8653879 >>8653884 >>8653906
Not really quite the character, but eh, good enough. Anyone got some good configs for characters? I've only got 16 images for this one.
Anonymous No.8653884
>>8653879
already posted in thread#10
Anonymous No.8653906 >>8653972
>>8653879
Assuming you're talking about making a lora, you can make a character lora with 15 images. Something like 30 is my ideal number but it's by no means a strict rule.

Something that worked quite well in the past is to aim for 300 to 400 steps per epoch. So 16 images x 20 repeats = 320 steps per epoch. Then multiply your epochs to land around 2000 total steps: 20 repeats, 7 epochs, 2240 total steps. Batch size of your choosing, you might wanna start with 1 and go from there. Resize all your images so the longest side is 1024.

If you're training on a v-pred model, make sure to check the box for it and also "scale v-pred loss". 8/4 dim/alpha should be enough for a character lora. Not sure what you use for learning rate, might want to try Prodigy or something similar at a lr of 1.0. Keep Tokens of 1 if you want a trigger word. Save every epoch and do an xyz grid with prompt s/r to see which one is the closest, then retrain the lora accordingly (more/less repeats/epochs/batch size/etc). Who's the character btw?
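The step arithmetic in that post is easy to fumble, so here's a tiny sketch of the kohya-style counting it uses (steps = images x repeats per epoch, divided by batch size; `lora_schedule` is just an illustrative helper, not trainer code):

```python
def lora_schedule(n_images, repeats, epochs, batch_size=1):
    """Kohya-style step counting: steps per epoch = images * repeats
    (divided by batch size), total = steps per epoch * epochs."""
    steps_per_epoch = n_images * repeats // batch_size
    return steps_per_epoch, steps_per_epoch * epochs

# the post's example: 16 images, 20 repeats, 7 epochs at batch size 1
# -> 320 steps per epoch, 2240 total
```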
Anonymous No.8653942
Anonymous No.8653963 >>8654038
Anonymous No.8653972 >>8653988
>>8653906
2240 steps seems insane, I did prodigy LoHa (4/4 + 4 Conv) and the Lora fried hard after only 240ish steps (15 epochs). Training on Noob-Vpred1.0
Character is Pure White Demon from succubus prison.
Anonymous No.8653981 >>8653994
Are well-defined noses /aco/?
Anonymous No.8653984
I blame the anime style hater for all of this
Anonymous No.8653987
Pony v7 will save local.
Anonymous No.8653988 >>8654002
>>8653972
After 240~steps? That's what sounds insane to me. What's your learning rate like? Prodigy should start at 1 and then adjust itself. Are you using Gradient Checkpointing? With Gradient Accumulation 1. Is SNR Gamma on 8? Not sure what is frying your training, although I haven't touched Loha/LoCon in a while. Can you upload your dataset? Would try a quick 30~minutes training to see what comes out.
Anonymous No.8653994
>>8653981
usually, but there's more to it
compare:
https://danbooru.donmai.us/posts/3171471
https://danbooru.donmai.us/posts/8487474
Anonymous No.8653999 >>8654080
>asura \(asurauser\)
timeless classic
Anonymous No.8654002 >>8654008
>>8653988
Yeah no clue. No checkpointing, no grad accum, SNR gamma 1, but I'll let you check both the dataset and config. Thanks for helping me out anon.
https://files.catbox.moe/6o3x0d.zip
https://files.catbox.moe/lrfpnr.json
Anonymous No.8654005 >>8654082 >>8654190
>>8653839
Anonymous No.8654008 >>8654031 >>8654039
>>8654002
Seeing max_train_steps and max_train_epochs at 0, not sure if that's normal. SNR Gamma should be at 8 for anime (and 5 for realistic), or so I read. How many repeats do you have? The folder being named 1_whitedevil tells me one? Are you training on Kohya or? Dataset looks okay, probably gonna add a couple pictures and remove the kimono ones just so it doesn't get confused on the horns. Tags look alright. Anyways, gonna launch a quick training over here, tell you what in 30~minutes.
Anonymous No.8654019
>>8653691
umm box?
Anonymous No.8654029 >>8654034
>>8653057 (OP)
so generation = local
diffusion = NAIcucks?
why two generals
Anonymous No.8654031 >>8654037
>>8654008
>SNR Gamma should be at 8 for anime (and 5 for realistic), or so I read
where did you read that?
Anonymous No.8654034
>>8654029
I was going to tell you but honestly I rather wait for another anon(s) to do it since I don't even post that much anymore
Anonymous No.8654037 >>8654047
>>8654031
Been a while, but years ago when I didn't have a good enough pc to train on, I used HollowStrawberry's google colab trainer, and in the notes for SNR Gamma that's what it said, I believe that's where I got it from. Never tried with SNR Gamma 1, will do in the future. Training done, only did 1000-ish steps at batch size 3 for the sake of time, gonna try the lora now.
Anonymous No.8654038
>>8653963
unfies...
Anonymous No.8654039 >>8654053
>>8654008
I don't use repeats, since it fucks up random shuffle if I increase batch size. I just train for more epochs instead, and I like the epoch = 1 dataset pass. I'm training on Kohya, but every day I get more tempted to switch to easy scripts. Yeah, autotagger gets confused with the horn ornament + tiara, and the hime cut with the semi-twintails going on? No clue how to tag that shit, same with the energy/magic shit.
Anonymous No.8654043
>>8653126
I'm still using PersonalMerge
Anonymous No.8654047 >>8654049
>>8654037
the "best" snr-based timestep weighting scheme you can do in sd-scripts for sdxl vpred is snr / (snr + 1)**2 and you can achieve that if you use 'debiased estimation' and 'scale vpred loss like noise pred' together (green line), without min snr
(well if you don't count the bug in sd-scripts which doesn't let snr reach zero for weighting even with zero terminal snr enabled, any snr-based weighting doesn't make sense for ztsnr)
Anonymous No.8654049
>>8654047
>'debiased estimation' and 'scale vpred loss like noise pred'
incidentally, you can sort of approximate the effect of this with min snr 1 (purple line)
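For anyone wanting to eyeball those curves, a quick numpy sketch. The SDXL-style scaled-linear schedule (1000 steps, betas 0.00085 to 0.012) is an assumption, and the min-snr-for-v-pred form is my reading of sd-scripts, not gospel:

```python
import numpy as np

# assumed SDXL-style scaled-linear schedule (1000 steps, betas 0.00085..0.012)
betas = np.linspace(0.00085 ** 0.5, 0.012 ** 0.5, 1000) ** 2
alphas_bar = np.cumprod(1.0 - betas)
snr = alphas_bar / (1.0 - alphas_bar)

# the "best" weighting named above (green line)
w_target = snr / (snr + 1.0) ** 2

# min snr with gamma = 1 as sd-scripts applies it to v-pred (purple line),
# i.e. min(snr, gamma) / (snr + 1)
w_min_snr1 = np.minimum(snr, 1.0) / (snr + 1.0)
```

Both curves peak around snr = 1 and fall off toward both ends; and as the post notes, without a ztsnr rescale the snr in this schedule never actually reaches zero.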
Anonymous No.8654053 >>8654061
>>8654039
>I don't use repeats, since it will fuck up random shuffle if I increased batch sizes.
Do you mean caption shuffle? IIRC, batches can be imprecise because of bucketing (which never happens if you have only one bucket)

>every day I get more tempted to switch to easy scripts.
Would have stayed on Kohya but couldn't get it to work reliably on the new pc, so switched to ez

>autotagger gets confused with the horn ornament + tiara and the hime cut with the semi-twintails going on? No clue how to tag that shit, same with the energy/magic shit.
Yeah no idea either, how do you communicate this is just a different hairstyle from the usual, 2 images isn't enough to have a subfolder, I don't think. The energy and aura seem to have bled into the pics, hopefully there are more varied images for the dataset.

Anyways, this one was 324 steps per epoch (18 repeats of 18 pictures), 3 epochs for a total of 972 steps with Prodigy lr 1, batch size 3, cosine, 8*4 dim/alpha, snr 8. Quite a bit of work still needed, the markings are fucked, The horns are not as they should be and the wings are a nightmare to render properly. As you can see, almost a thousand steps and it didn't fry.
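On the bucketing caveat: roughly, aspect-ratio bucketing downscales each image to fit under a max pixel area and snaps its dimensions to a multiple of 64, so differently-shaped images land in different buckets and a batch can only draw from one bucket at a time. A sketch of the idea (not kohya's exact code; `assign_bucket` is a made-up helper):

```python
def assign_bucket(w, h, max_area=1024 * 1024, step=64):
    """Scale (w, h) to fit under max_area, keeping aspect ratio,
    then snap both dimensions down to multiples of `step`."""
    scale = (max_area / (w * h)) ** 0.5
    bw = int(w * scale) // step * step
    bh = int(h * scale) // step * step
    return bw, bh
```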
Anonymous No.8654054
Anonymous No.8654055
Anonymous No.8654058
we achieved agi
Anonymous No.8654060
Anonymous No.8654061 >>8654064 >>8654065
>>8654053
Looks alright, thanks for giving it a try. But yeah when I say "fried", I guess I mean more stuff like
>The energy and aura seem to have bled into the pics
>burned in wings even when not prompted for
etc.
>completely burned in style
stylebleed could probably be mitigated by tagging shiki, or doing the fancy copier technique.
I'll try rebaking tomorrow and see what I get, thanks for the input and ideas.
Anonymous No.8654064 >>8654067
>>8654061
Not him but you have some pics in the dataset without the wings and aura right?
Anonymous No.8654065 >>8654067
>>8654061
I see what you mean, yeah. Haven't looked closely at the dataset but if she doesn't have one, remove tattoo from the tags. The aura and stuff could go into the negatives, combined with lowering the weight of the lora to 0.8 or so, but that's a bit bothersome and not ideal. Try getting more pics (unless these are all the official pics?) and if all else fails try adding close-up crops from the pics you already have, should make the lora focus on the aura a bit less. What style did you use for your pics btw?
Anonymous No.8654067
>>8654064
Yep, there's 2-3 images without the aura and wings in the dataset. It's all the same artist though, so style bleed is pretty bad.

>>8654065
Most pics I don't have in the dataset are part of a variant set, so no point including 10 of the same image with minor differences. These are all by the same artist (the official creator); I've found like 2-3 other fanarts but they're complete crayon tier and change the design, so they're a no-go for the dataset.
I'll likely retag everything again manually, train an almost fried lora and then start introducing artificial examples into the dataset with different artist styles. I have the metadata in my pics, it's in the stealth png format (works with the extension), but if you don't have it I used "hetza \(hellshock\)".
Anonymous No.8654072 >>8654305
How do I darken hair color? Say I'm doing black hair but the artist/lora is making it gray. Do I just increase the weight of black hair? I vaguely remember this working with red hair.
Anonymous No.8654080
>>8653999
>asura \(asurauser\)
I think we can all agree that hdg went into full decline when asura pillarino disappeared. 'fraid so.
https://files.catbox.moe/f5lwbp.png
Anonymous No.8654082 >>8654190
>>8654005
use a fork you savage
Anonymous No.8654117 >>8654122
bros how do you train a lora on an artist who only draws one character
Anonymous No.8654122
>>8654117
The same way you do for style loras based on game CG: tag everything.
Anonymous No.8654129 >>8654133 >>8654135 >>8654145 >>8654684
How is it possible that Chroma learned ZERO artist knowledge after 40 versions? Did they include the artist tags in their training at all?
Anonymous No.8654133
>>8654129
Not zero, but nearly, prompting slugbox does something consistent at least but yeah it sucks
Anonymous No.8654135 >>8654137
>>8654129
it's both the fact that he's training the model really weirdly using some method he invented, and the fact that the booru tags are fully shuffled and only a small portion of the dataset. not to mention when training it'll randomly pick between NLP and a tag-based prompt.
Anonymous No.8654136
Remember, he has over 10 times less compute than NoobAI did. Sure, he's managed to optimize it with some hacks that nobody on /h/ can pull off, but it's still way slower than SDXL, and the speed for replicating styles is just abysmal.
Anonymous No.8654137
>>8654135
Should have added "drawn by [artist]" in the NLP prompts.
Anonymous No.8654138 >>8654140
h100/256 vs h100/4-8
Anonymous No.8654140 >>8654141
>>8654138
did noob actually utilize a 256xH100 node lol
Anonymous No.8654141
>>8654140
32xH100 from noob 0.25 to eps 1.0 iirc, they then started using most of the compute on the IP adapter and controlnets and the v-pred model was trained on 4-16 A100s I believe
Anonymous No.8654145 >>8654146 >>8654294
>>8654129
i think the great satan is t5 and that hes not training it and that he does not have the resources to brute force it without training it like nai possibly did
Anonymous No.8654146
>>8654145
>train T5
genuinely you are better off using a different text encoder than trying to train T5
Anonymous No.8654187
>>8653126
Box please
Anonymous No.8654190
>>8654005
thank you so much
>>8654082
hands are faster
Anonymous No.8654191 >>8654192 >>8654823
How would you call this kind of background/effect on the corners?
>https://danbooru.donmai.us/posts/6105075?q=hxd
>https://danbooru.donmai.us/posts/6421672?q=hxd
>https://danbooru.donmai.us/posts/9391957?q=hxd
>https://danbooru.donmai.us/posts/6357183?q=hxd
Anonymous No.8654192
>>8654191
A vignette that uses a crosshatching pattern. There is no crosshatch vignette tag it seems, but there is crosshatching and vignetting as separate tags.
Anonymous No.8654195
absurdly detailed composition, complex exterior, green theme
Anonymous No.8654202 >>8654206 >>8654215
>Train artist loras for vpred they turn out fine
>Train chara loras for vpred they completely brick
Using the same settings it's just weird, like shit full stop doesn't even work.
Anonymous No.8654206 >>8654232
>>8654202
Take the base illu pill and come home.
Anonymous No.8654215
>>8654202
can i trade luck with you
my artist loras turn out mediocre on vpred yet chara loras are easy
Anonymous No.8654232 >>8654234 >>8654252
>>8654206
Honestly trying to figure out why I left, sure I see shiny skin but I never neg'd for it and I like my old shit a lot. Got the loras still so might just go back and see why I swapped, only thing I notice is the colors suck way worse everything is kinda beige.
Anonymous No.8654234 >>8654236
>>8654232
>only thing I notice is the colors suck way worse everything is kinda beige
You're right, but I fix colors with CD tuner so I don't see why I even switched to baking on vpred.
Anonymous No.8654236 >>8654237
>>8654234
That an extension? Might take a gander. Other thing I noticed is that when unprompted you get the same weird living room setting, which I can probably neg out too.
Anonymous No.8654237 >>8654240
>>8654236
https://github.com/hako-mikan/sd-webui-cd-tuner
Anonymous No.8654240 >>8654242 >>8654244
>>8654237
Just use base settings or anything to tweak with it? And are vpred loras backwards compatible?
Anonymous No.8654242 >>8654245
>>8654240
Gotta tweak it on a per model basis. I tend to play with saturation2 and 1 is good enough for me but YMMV.
>And are vpred loras backwards compatible
What do you mean? Bake on vpred and run on illu? I've only ever tried this on Wai and it worked out well. Improved Wai 12's color problems too.
Anonymous No.8654244 >>8654245
>>8654240
You shouldn't gen with base illu, that's an even worse idea than training on it
Anonymous No.8654245
>>8654242
Yeah I got my stash of shit I baked on illustrious but remade most with vpred so didn't want to start another round on bakes on stuff post originals.

>>8654244
The extension you goof.
Anonymous No.8654251 >>8654374
anyone have config tips regarding finetune extractions?
lot of my attempts have been... so-so
Anonymous No.8654252
>>8654232
Noob and specifically vpred has a ton more knowledge that loras just don't provide for me, personally. Although Illu has a lot more loras for it given its age and the fact that they all work for noob anyway.
Anonymous No.8654283
If anyone ever wanted to prompt cute small droopy dog ears, which look a bit like the "scottish fold" tagged ears on danbooru, you can do
>goldenglow \(arknights\) ears, black dog ears, (pink ears,:-1)
Goldenglow is a character the model knows pretty well, who has this type of folded ear. Negpip + prompting the desired color is able to remove the pink hair bias from the character tagging. Putting pink ears in the negative prompt further helps.
Anonymous No.8654294 >>8654308 >>8654472
>>8654145
Not training the TE is the correct choice.
Anonymous No.8654305
>>8654072
photoshop
or stick "grey hair" in your negatives
Anonymous No.8654308
>>8654294
Thanks kurumuz
Anonymous No.8654335
How did 291h do it?
Anonymous No.8654372 >>8654378
do what?
Anonymous No.8654373 >>8654378
do me
Anonymous No.8654374
>>8654251
if the lora turns out to be weak, try baking the finetune for a smidge longer than you really need
Anonymous No.8654378
>>8654372
>>8654373
all me and 291h
Anonymous No.8654383 >>8654384
Training the TE destroys the entire SDXL.
If you don't train the TE, it doesn't work properly.

What should I do for a full fine-tune of SDXL? Please answer.
Anonymous No.8654384
>>8654383
destroy the SDXL
Anonymous No.8654472 >>8654477
>>8654294
well clearly chroma is not learning jack shit and the te is an obvious suspect
just repeating to not train te like a parrot is not gonna help it when every successful local tune had to train it (though not t5)
Anonymous No.8654477
>>8654472
yeah lets just ignore the completely new "divide and conquer" training method that merges a ton of tiny tunes together that lodestone invented. nope. it's t5.
Anonymous No.8654485
For me? t5.
Anonymous No.8654486 >>8654516 >>8654535
Probably not the place to post this but I am looking for a discord invite to KirsiEngine's discord server. Without signing up for his patreon, obviously
Anonymous No.8654516 >>8654530
>>8654486
Sent.
Anonymous No.8654530
>>8654516
based thanks
Anonymous No.8654535 >>8654597
>>8654486
https://discord.com/invite/5CpkfYzdnx
Anonymous No.8654578 >>8654605
>finetune vpred 1.0 for a couple epochs in attempt to train in some artstyles
>accomplishes nothing outside of stabilizing the model
not what i wanted but neat
Anonymous No.8654589
You'll never be the next 291h
Anonymous No.8654590
291h gens when
Anonymous No.8654592
Gens? Take it to /hdg/, lil blud. This is a lora training general.
Anonymous No.8654594
you will never be the painter
Anonymous No.8654595 >>8654599 >>8654677
at least post the link to the model so I can test it myself
Anonymous No.8654597
>>8654535
Still blocked from seeing channels as a non-Patreon. Oh well..
Anonymous No.8654599 >>8654600
>>8654595
the 291h is on the civitai sir
Anonymous No.8654600
>>8654599
think lil blud means your experiment, anon.
Anonymous No.8654605 >>8654607 >>8654610 >>8654677
>>8654578
>outside of stabilizing the model
that's based anon post it
Anonymous No.8654607 >>8654609 >>8654677
>>8654605
Yeah I wanna see if it's better than any style lora at 0.2 strength
Anonymous No.8654609
>>8654607
I've tried that method and it fixed nothing for me lol
Anonymous No.8654610
>>8654605
there's a decent chance it may be more slop than stable, still messing around
Anonymous No.8654624 >>8654646
the best way to defeat a troll is to ignore him
Anonymous No.8654646
>>8654624
Which one is the troll though?
suppose I could just ignore everyone
Anonymous No.8654677 >>8654701 >>8654749 >>8654860 >>8655020
>>8654595
>>8654605
>>8654607
here, only done sparse testing myself. let me know if any of you see value in it lol
https://gofile.io/d/DGSNR9
Anonymous No.8654681 >>8654688
How can I get rid of this artifact? It makes the output blurry and destroys style and details when multidiffusion upscaling (https://civitai.com/articles/4560/upscaling-images-using-multidiffusion). I did 2x then 1.25x and it's getting worse. Maybe this is a bad method, so I need advice.
Anonymous No.8654684
>>8654129
He is working with both Pony and drhead, two retards that are vehemently opposed to artist tags. In addition, the natural language VLM shit almost certainly washes out proper nouns just like it did with base Flux
Anonymous No.8654688 >>8654696 >>8654697
>>8654681
how is mixture of diffusers better than simple image upscaling? it's great to do absurdres upscales and looks pretty smooth, but it's still essentially a tile upscale, albeit a bit less shitty than just simple tile upscale scripts. it doesn't have the whole context of the picture which might cause hallucinations unless you are content with really low denoise.
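Seams and blur at tile borders mostly come down to how the processed tiles get blended back together: overlapping tiles are feathered and normalized, and too little overlap (or too much per-tile denoise) shows up as exactly that kind of artifact. A toy numpy version of the recombination step (a sketch of the general technique, not the extension's actual code):

```python
import numpy as np

def blend_tiles(tiles, positions, out_shape, tile_size, overlap):
    """Recombine processed tiles with a linear feather over the overlap,
    then normalize by the accumulated weight so overlaps average out."""
    out = np.zeros(out_shape)
    weight = np.zeros(out_shape)
    # 1D ramp: rises over `overlap` px at each edge, flat at 1 in the middle
    ramp = np.minimum(1.0, np.minimum(
        (np.arange(tile_size) + 1) / overlap,
        (tile_size - np.arange(tile_size)) / overlap))
    w2d = np.outer(ramp, ramp)
    for tile, (y, x) in zip(tiles, positions):
        out[y:y + tile_size, x:x + tile_size] += tile * w2d
        weight[y:y + tile_size, x:x + tile_size] += w2d
    return out / np.maximum(weight, 1e-8)
```

With a big enough overlap the feather hides the seams, but each tile is still denoised without the full picture's context, which is the hallucination problem the post describes.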
Anonymous No.8654696
>>8654688
Here, 1x, 2x, 1.25x upscaled in order
https://gofile.io/d/RFX4DT
warning:[spoiler] /aco/ [/spoiler]
Anonymous No.8654701
>>8654677
Did some basic tests and it pretty much seems like a more stable and cohesive vpred
Didn't notice much slop in it at all, and also it got the details better than vpred in some gens
But then again, I'm not a great genner so gotta wait for someone else to comment on it
Anonymous No.8654723 >>8654726
https://blog.novelai.net/novelai-diffusion-v2-weights-release-b9d5fef5b9a4
Anonymous No.8654726 >>8654729
>>8654723
lmao who cares
Anonymous No.8654729 >>8654730 >>8654732 >>8654734 >>8654735
>>8654726
Now that we have noob, if they released v3 weights would people be excited?
Anonymous No.8654730
>>8654729
Everything is relative. If they released it today, I'm sure people would be. If a new model better than Noob comes out and then they release v3, then of course people would not be.
Anonymous No.8654732
>>8654729
noob is basically novelai v3 at home. v3 is still unfortunately better than whats available
Anonymous No.8654734
>>8654729
bet someone could make a very good block merge with it and noob
Anonymous No.8654735 >>8654737 >>8654742 >>8654759
>>8654729
v2/v3/v4 were shit. NAI didn't get good until v4.5. It's currently the best FLUX based anime model.
Anonymous No.8654737 >>8654740
>>8654735
Why don't you ever post pictures then?
Anonymous No.8654740
>>8654737
Busy masturbating; sorry.
Anonymous No.8654742
>>8654735
v1 was good for its time, otherwise local would happily use WD
v3 is still good visually but it prompts like ass
Anonymous No.8654749 >>8654751 >>8654754 >>8654758 >>8654934 >>8654971 >>8655567 >>8656019
Alright idiots, vpred models that I think are good, no snake oil required on any of those to get good looking gens (debatable), no quality tags and only very few basic negs

All of those were made using ER SDE with Beta at 50 steps 5 CFG (may not be the ideal setup for some of them but it's good enough for most of the cases)
>https://files.catbox.moe/p1afyv.png
>https://files.catbox.moe/nw7rue.png

To no one surprise, each model is biased towards certain styles so your favorite artist may be shit on one of them but great on another one WOW
It's almost like YOU SHOULD USE THE MODEL THAT FITS YOUR FUCKING HORRID TASTE THE BEST

>>8654677
I like what I see at the moment but I need to use it for a little longer to draw an opinion on it
Anonymous No.8654751
>>8654749
>102d is still king
Excellent.
Anonymous No.8654754
>>8654749
thanks i'll continue to shill r3mix
Anonymous No.8654758 >>8654760
>>8654749
I think it'd be interesting to do this comparison but with loras that are considered stability enhancers, on base vpred. If you can get the same results just by using a lora, then there's no reason to use a shitmix, since shitmixes always mess with the model's knowledge a bit and make it less flexible to work with, while swapping loras out is much faster.
Anonymous No.8654759 >>8654762
>>8654735
>It's currently the best FLUX based anime model.
Without containing any FLUX too! Amazing!
Anonymous No.8654760 >>8654766 >>8654963
>>8654758
>loras that are considered stability enhances
If you guys ever agree on that one, sure
Anonymous No.8654761 >>8654775
>implying v4.5 is anything but dogshit
ahahah that's a good one
Anonymous No.8654762 >>8654789
>>8654759
Yeah bro, they totally trained it from scratch, all by themselves.
Anonymous No.8654766 >>8654798
>>8654760
If people disagree on which ones are the best then that's the reason the comparison should be made. I haven't seen anyone actually talking about existing/downloadable stabilizer loras though.
Anonymous No.8654769
Am I retarded or is there a chance the lora I'm trying to use just not compatible with comfyUI for some reason?
I can't get it to work
Anonymous No.8654775 >>8654797
>>8654761
Kek, this.
It's literally impossible for paid proprietarded piece of trash to be good, by definition. SaaS garbage literally takes away your freedom and makes you a slave to the system that you should oppose by any means. Don't be a fucking cattle, resist. If it isn't "Free" as in Freedom, I am not interested, as I am Free myself.
Anonymous No.8654789
>>8654762
Yeah, they're just so good at optimizing shit that they can run 23 steps of FLUX with CFG in 2 seconds on an H100 lmao
Anonymous No.8654791 >>8654795
Isn't just using a model like WAI good enough?
Anonymous No.8654795
>>8654791
It's always a trade it seems
WAI is good, but it's trained on a lot of slop
That makes it more consistent and gives it higher quality (like in anatomy and stuff) but breaks the prompt adherence and injects a lot of unwanted style into your gens by default
Anonymous No.8654797
>>8654775
And yet, none of the models people here use has a license that the FSF would approve of as a Free Software license.
Anonymous No.8654798
>>8654766
I've seen a few but never used it myself
Anonymous No.8654801 >>8654803
I just want a model that's as good as 102d/291h, but that's easier to use
I still can't solve the shitty img2img/inpainting/adetailer/hi-res fix being broken because these models have the crazy ass noise at the start of the gens
mfw I can't just get a nice composition and throw it at i2i with the standard settings and it'll give me a good quality gen because it'll either change the image too much or it'll make it look blurry instead of adding details every single time
Anonymous No.8654803
>>8654801
mfw i2i sketch and inpaint sketch are no longer useful in my workflow now because of this
Anonymous No.8654807 >>8654808
I think this one is good enough, no more rebaking for now.
Anonymous No.8654808 >>8654811
>>8654807
How did you do it?
Anonymous No.8654811 >>8654816 >>8655198
>>8654808
I went through the dataset again, removed some variations cutting it down to 14 images, and added an additional close-up crop of the face as well. Did a full manual tagging pass again, adding matching wing tags (demon wings, bat wings, multiple wings, etc) since they were bleeding through.
Ran prodigy to get an initial good starting learning rate by looking at the tensorboard logs, then switched back to AdamW8bit.
Did a couple of test bakes, tweaking the learning rate for both Unet and TE.
Eventually ended up using this config:
https://files.catbox.moe/7id47n.json
Anonymous No.8654816
>>8654811
Thank you!
Anonymous No.8654823
>>8654191
It would be faster and simpler to just remove them.
Anonymous No.8654829 >>8654899
Even style-bleed is not too bad, pretty surprising.
Anonymous No.8654832 >>8654842 >>8654851 >>8654872 >>8654882 >>8654924 >>8655022 >>8655051 >>8656477
survey:
https://strawpoll.com/XOgOVDj1Gn3
Anonymous No.8654837 >>8654840
>20 unique IPs
Anonymous No.8654840
>>8654837
where
Anonymous No.8654842
>>8654832
I want to clarify that I have a 4070ti super not a regular 4070
Anonymous No.8654851
>>8654832
>not .safetensors
good try
Anonymous No.8654860 >>8658502
>>8654677
Re-ran a few old prompts on that. If you are using CFG++ like me there's very little difference between this, base 1.0, and even 102d custom.

pic mostly unrelated
Anonymous No.8654872 >>8654876 >>8654877
>>8654832
My NVIDIA GPU is not listed.
Anonymous No.8654876
>>8654872
1660s-san...
Anonymous No.8654877
>>8654872
H100?
Nice try, Jensen.
Anonymous No.8654882
>>8654832
Where is NovelAI on this list?
Anonymous No.8654887
A6000
Anonymous No.8654889
RTX Pro 6000
Anonymous No.8654892
8800GT bros...
Anonymous No.8654894
Trying to set up chroma has finally made me take the comfy pill. It's... it's not so bad bros... comfy is the future.
Anonymous No.8654899
>>8654829
It looks like I'm seeing some cutscene from Rance.
Anonymous No.8654917 >>8654920 >>8654930
I was browsing tags today and came across this. It is now one of my favorite pixiv posts of all time.
https://www.pixiv.net/artworks/118263867
Anonymous No.8654920 >>8654922
>>8654917
nice, do you have a twitter I can follow for more microblogs like these?
Anonymous No.8654922
>>8654920
Yes you can follow me @/hgg/.
Anonymous No.8654924
>>8654832
what if i have multiple?
Anonymous No.8654930
>>8654917
Goddamn
This is actually really good
Anonymous No.8654934 >>8654946
>>8654749
greyscale sketch prompt is such a good test for detecting slopped models desu
Anonymous No.8654944
what you want as a "stabilizer" is a good preference-optimized finetune. it can be a crap merge but usually merges work worse. you don't want to collapse the output distribution of a model with a lora because it will mess a lot of things up, especially if you are trying to use multiple loras.
what you will get out of a "good" preference-optimized finetune is a certain, defined "plastic" look of flux, piss tint and aco seeping through on pony, and the like.
Anonymous No.8654946 >>8654949
>>8654934
It filtered out most of them lol
Anonymous No.8654949 >>8654951
>>8654946
makes me wonder what would happen if you try to train solely on greyscale sketch gens
Anonymous No.8654951
>>8654949
I always wonder what would happen if you used an oil painting/classical artwork lora to make a merge rather than anime stuff
Anonymous No.8654955
How did 291h do it
Anonymous No.8654963 >>8655020
>>8654760
>If you guys ever agree on that one, sure
this isn't complicated.
A "stabilizer lora" is merely a lora of an artist you like and want incorporated into your mix. The only caveat being that it can't be watered down shit.
The whole point of the lora in the first place is that it introduces a much more stable and predictable u-net and imposes itself on the primary model to guide it.
It's really that fucking simple. Just use a style lora that isn't shit.
Anonymous No.8654966
Man, do the models posted here get saved by the rentry?
Anonymous No.8654970
Is there a site people use other than Civit since they did the purge?
Anonymous No.8654971
>>8654749
got the prompts for these? i wanna throw em on some models
Anonymous No.8654995 >>8654997
didn't know there were so many people with 4090s here
Anonymous No.8654997
>>8654995
that survey seems to have been posted in every ai gen thread
Anonymous No.8655018
>100 unique posters
Anonymous No.8655020 >>8655021 >>8655023 >>8655025 >>8655088
>>8654677
idk what you did but you need to do it for a little more or a little different
some samplers are completely broken

I want to like it as it gets some concepts and artists tags better but it's currently a little harder than I am willing to endure to get something good out of it

>>8654963
yeah okay, give me 3 lora recommendations for that effect
Anonymous No.8655021 >>8655027
>>8655020
>some samplers are completely broken
So, like vpred?
Anonymous No.8655022
>>8654832
ayyymd
Anonymous No.8655023 >>8655026 >>8655027
>>8655020
>yeah okay, give me 3 lora recommendations for that effect
the entire point is THAT YOU CHOOSE THEM YOURSELF YOU FUCKING RETARD
It's not supposed to be recommended by anyone else! They don't fucking work well unless they actually suit what you want your shit to look like!
Anonymous No.8655025 >>8655027
>>8655020
>idk what you did but you need to do it for a little more or a little different
all this was, was unet only, batch 1, on a roughly 200 image dataset for 3199 steps. pulled it early since i was saving every 100 steps lol. ill continue to fuck around though since the results, while unintentional, are promising.
Anonymous No.8655026
>>8655023
>It's not supposed to be recommended by anyone else
What a retard, what's the point of screaming for the guy to make a comparison if you don't even have loras in mind
Anonymous No.8655027
>>8655021
Well yeah but even more so, could just be me ngl

>>8655023
I already have and use those, the point was to make a general agreement to have something to recommend when people ask for that as you know, "a good lora" is very ambiguous but whatever, I did my part

>>8655025
Godspeed anon
Anonymous No.8655030 >>8655031 >>8655143 >>8655145
>he uses nyalia over 748cmSDXL for stabilization
oh nyo nyo nyo nyooooooooooooo~
Anonymous No.8655031
>>8655030
your gen?
Anonymous No.8655032
Wrong tab, bucko.
Anonymous No.8655033
who you callin' bucko, chucko? this is sneed.
Anonymous No.8655035
You two go back to lora training. This is not a discussion thread.
Anonymous No.8655040 >>8655198
>>8653845
Really nice composition!
Anonymous No.8655051 >>8655052 >>8655064 >>8655995
>>8654832
>40 minutes to generate a 720p video
Even with a 4090 I'm still a vramlet
Anonymous No.8655052 >>8655053 >>8655066
>>8655051
can i see?
Anonymous No.8655053 >>8655055
>>8655052
No I quit the gen because it's not worth it.
Anonymous No.8655055
>>8655053
Based quitter.
Anonymous No.8655064 >>8655066
>>8655051
>40 minutes to generate a 720p video
someone isnt using lightx2v
Anonymous No.8655066 >>8655067 >>8655072
>>8655064
The guide says that one's quality is far worse than wan.
>>8655052
https://files.catbox.moe/oiafrv.mp4
Anonymous No.8655067
>>8655066
>far worse
visually it's about on par. makes the model very biased towards slow motion, though less-so on the 720p model. most of the big caveats are present in the 480p model. it's basically required for convenient 720p gens imo
Anonymous No.8655072 >>8655074
>>8655066
the fuck's her problem?
Anonymous No.8655074
>>8655072
jiggling her butt for (me)
Anonymous No.8655077 >>8655079 >>8657731
Tech illiterate here trying to get Comfy working. My laptop's several years old. What exactly can I do about this? I don't know what I'm looking for on PyTorch.
Anonymous No.8655079 >>8655080
>>8655077
How much VRAM do you have? You'll need around 6-8GB or so to run local gens, and if your laptop is old enough it might be too low.

For PyTorch just follow the exact instructions in the message. Go to the Nvidia link first and then the Torch one.

If your GPU is too old to run locally, there are free online options like frosting.ai and perchance.org and more.
Anonymous No.8655080 >>8655082 >>8655083 >>8655085
>>8655079
The sticker on my laptop says 2GD Dedicated VRAM.
Shit.
Anonymous No.8655082
>>8655080
*GB
Anonymous No.8655083
>>8655080
Based time traveler.
Anonymous No.8655085 >>8655092
>>8655080
Plenty of stuff you can do online for gens these days.

ChatGPT has SORA for image generation and Microsoft has a Bing Image generator too. Those are both the highest quality, but censored to hell so you can't do porn. They both let you gen for free with a free account setup

perchance.org is free no account gens, but is censored as well.

frosting.ai can do uncensored gens, and is free with no account. The quality isn't the best unless you pay though.

CivitAI, Tensor.art and SeaArt.ai all let you do a limited number of free gens if you make a free account. They all have onsite currency that you get a certain amount of for free and can get more by liking, commenting, the usual "engagement" stuff.

NovelAI has the most advanced new model with their v4.5 model, and is doing a free trial. However, it's mostly a paid site. If you're willing to pay it might be the best option, but you should probably try out all the free options first before you pay for anything.
Anonymous No.8655088 >>8655103
>>8655020
btw, mind sharing the broken examples? training a v2, gonna let it go until it explodes
Anonymous No.8655092 >>8655096
>>8655085
Dang. Alrighty then, really have to get a new computer. My buddy's made some awesome stuff for me, but it looks like it'll be a while before I can do it on my own. Thanks for the list though, I'll take a look!
Anonymous No.8655096
>>8655092
Turns out that perchance.org can do porn too, you just have to let it fail once, click the "change settings" button that comes up, and then turn off the filter.

Since that and frosting.ai don't require any fee or even a free account they're probably the best to start with if you want /h/ content.
Anonymous No.8655097
Is there a fastest way to switch model like extension?
Anonymous No.8655098
Nigga, you click the drop down in the upper left and choose the model.
Anonymous No.8655099
How drop click change down model?
Anonymous No.8655101
sarsbros..
Anonymous No.8655103 >>8655104
>>8655088
Sure
>https://files.catbox.moe/cwrije.png
>https://files.catbox.moe/kh0ctq.png
>https://files.catbox.moe/l11gh2.png

>training a v2, gonna let it go until it explodes
holy based
Anonymous No.8655104
>>8655103
lol wtf, i wonder if base vp1.0 has the same issue on the problematic samplers
Anonymous No.8655143
>>8655030
why would i not use both, retard-kun?
Anonymous No.8655145
>>8655030
ywnbac
Anonymous No.8655146
>certainly 2 stabilizers will unslop it
kekerino
Anonymous No.8655148
xir please administer the appropriate Slop Shine to your model before use. it is imperative
Anonymous No.8655186 >>8655224 >>8655269
Someone did a test a while ago demonstrating how some characters like Nahida make the model render the character as genuinely small relative to the environment, while others feel oversized. Well, inspired by that, I did my own tests using the kitchen environment, and can confirm that Nahida is really one of the few characters that achieves this. There are a crazy ton of characters that are supposed to be short but noob still renders them like normal-sized people.

I wonder what would solve this problem in terms of model architecture. Or is it merely a training/dataset issue?
Anonymous No.8655198
>>8654811
Well done, must admit, didn't think of using Prodigy to figure out the learning rate. Solid work.

>>8655040
Thanks. Just wish i had taken the time to correct her small hands.
Anonymous No.8655224 >>8655439
>>8655186
I just looked at danbooru's tag wiki and found the toddler tag. Didn't know that was a thing. Testing it, it does seem to make pretty small characters in the kitchen environment. If the goal is to make a short normal hag, then perhaps adding [toddler, aged down:petite:0.2] to a prompt of X character might work.
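The `[from:to:when]` syntax there is A1111-style prompt editing. As a hedged sketch (not webui's actual code), the schedule resolves roughly like this per sampling step:

```python
def edit_prompt(from_part: str, to_part: str, when: float,
                step: int, total_steps: int) -> str:
    """Roughly how [from:to:when] prompt editing resolves per step:
    'from' is active until the switch step, then 'to' takes over.
    when < 1 is a fraction of total steps; when >= 1 is an absolute step."""
    switch_step = int(when * total_steps) if when < 1 else int(when)
    return from_part if step < switch_step else to_part

# [toddler, aged down:petite:0.2] over 28 steps switches around step 5:
# the early steps lock in the small proportions, then the tamer tag takes over.
```

The early steps set the composition, so they get the strong size tags while most of the denoise runs on "petite"; raising or lowering the 0.2 is what trades consistency against how hard the effect hits.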
Anonymous No.8655226 >>8655233
Wrong tab, oekaki anon.
Anonymous No.8655233 >>8655239 >>8655248
>>8655226
He is NOT a pedophile. Those are NOT toddlers he's posting. Look, they've got curves!
Anonymous No.8655235 >>8655241
who is they thems talking to
Anonymous No.8655239
>>8655233
Based.
Anonymous No.8655241 >>8655269
>>8655235
Idk kek. If you just wanted shortstacks you can prompt for those just fine, no need to go through all this.
Anonymous No.8655248 >>8655268
>>8655233
>Look, they've got curves
Where? No one posted any images. The last non-catbox image post was 12 hours ago...
Anonymous No.8655249 >>8655250
You're not getting my metadata, Rajeej.
Anonymous No.8655250
>>8655249
Who are you talking to?
Anonymous No.8655251
all me
Anonymous No.8655255
me too
Anonymous No.8655264 >>8655268 >>8655485
Anonymous No.8655266 >>8655268
Anonymous No.8655268
>>8655248
>>8655264
>>8655266
;)
Anonymous No.8655269 >>8655271
>>8655186
Kitchen anon here, I also noticed when doing groups shots of named characters, ones from the same franchise would usually be fine because they appeared together in some dataset pics. But crossovers would mess up their relative sizes.

>>8655241
Point is, small characters often end up huge compared to the environment. Sometimes even if you specifically prompt for loli/shortstack/etc. Picrel.
Anonymous No.8655271 >>8655277
>>8655269
How do we know that's not a custom built kitchen made to accommodate her height?
Anonymous No.8655273
it's nai
Anonymous No.8655274 >>8655772
Anonymous No.8655277
>>8655271
it made her legs longer too
Anonymous No.8655281 >>8655288
Anonymous No.8655282 >>8655283 >>8655288
Anonymous No.8655283 >>8655286
>>8655282
Where's the rest of his forearm?
Anonymous No.8655286
>>8655283
idk camera angles
Anonymous No.8655288
>>8655281
>>8655282
Reminds me of school days.
Anonymous No.8655296 >>8655354 >>8655470
Anonymous No.8655354
>>8655296
catbox?
Anonymous No.8655362 >>8655364
>she doesn't 748cm
A-anon..
Anonymous No.8655364
>>8655362
I don't know what that memes
Anonymous No.8655439
>>8655224
>[toddler, aged down:petite:0.2]
I just tried this and it seems to be an inconsistent solution. Sometimes it does make the proportions right but most of the time it'll be messed up and closer to a shortstack/loli. Maybe if there was a tag for "normal proportions" then this might work.
Somehow I feel like putting "shortstack" in neg won't help either.
Anonymous No.8655470
>>8655296
i don't like the face but the rest is very cool
Anonymous No.8655485
>>8655264
box please?
Anonymous No.8655499
Anonymous No.8655522
Anonymous No.8655546
luv me some chun li
Anonymous No.8655567
>>8654749
seconding for the prompts
curious how the models I use hold up
Anonymous No.8655759
I did not think generating pussy would be so lucrative.

NAIfags, what else do you use in your workflow? Besides the in-house enhancement features (which are all terrible and cost Anlas to use properly) I use Upscayl to make 4K+ images. Does anyone actually use Photoshop to retouch images nowadays?
Anonymous No.8655768
Don't you love when sometimes the same exact gen and inpaint settings you have used many times before suddenly don't work anymore?
Anonymous No.8655772
>>8655274
box?
Anonymous No.8655937 >>8655944
I wish I never tried NAI 4.5, impossible for me to go back to local now. Coherent multi character scenes off cooldown and it nails the style I use perfectly..
Anonymous No.8655942
kurumuz...
Anonymous No.8655944
>>8655937
Eventually they will all become SFW only.
Anonymous No.8655946
/h/ is just a bunch of frauds, and SOTA only comes from NAI. This has never changed in history. First, NAI creates SOTA, and then /h/ just copies it. We've definitely seen this pattern with the latest Flux generation too.
Anonymous No.8655976
Time to merge with /hdg/. We've gone full circle, sisters.
Anonymous No.8655978
Petition denied, again
Anonymous No.8655995 >>8656311
>>8655051
It takes me 4 minutes and 30 seconds on a 5090.
24fps 720x480 in WAN2.1.
The 4090 can't be that much slower. You must have setup something wrong.
Anonymous No.8656019 >>8656024 >>8656313
>>8654749
why does base noob 1.0 look the best
Anonymous No.8656024
>>8656019
Probably because despite what all the armchair ML scientists say, the noob team actually knew what they were doing and everyone who tried to "fix" it only made it worse.
Anonymous No.8656033
How did 291h get away with it?
Anonymous No.8656101 >>8656110 >>8656195
Anon, if you were about to train a finetune of noob, which artists would you add to the dataset?
Anonymous No.8656110 >>8656189
>>8656101
The ones I like
Anonymous No.8656111 >>8656149
new t5 CSAM var
https://huggingface.co/collections/google/t5gemma-686ba262fe290b881d21ec86
Anonymous No.8656149
>>8656111
???
Anonymous No.8656152
>t5gemma
what's the point of this
Anonymous No.8656189 >>8656190 >>8656194 >>8656243
>>8656110
name them
Anonymous No.8656190
>>8656189
asura asurauser
Anonymous No.8656194
>>8656189
cromachina
Anonymous No.8656195 >>8656669
>>8656101
tamiya akito, CGs not danbooru crap
from danbooru I guess nanameda kei. he kinda works but only on base noob, too weak for merges
Anonymous No.8656215
Anonymous No.8656243 >>8656251
>>8656189
Didn't you already ask this like a year (and a half maybe) ago?
Anonymous No.8656251 >>8656286
>>8656243
Even if it was the same person, how long should someone have to wait before asking again?
Anonymous No.8656286 >>8656337 >>8656453
>>8656251
Just use answers from that time, there were a lot of them, can't get that now that the whole thread is 3 samefags.
Anonymous No.8656311
>>8655995
It even says that on the guide my guy. The other anon was correct though, just switch to lightx2v.
>720x480
No, 1280x720.
Anonymous No.8656313
>>8656019
Because you have shit taste?
Anonymous No.8656337 >>8656349
>>8656286
are those 3 samefags in the room right now?
Anonymous No.8656349 >>8656396
>>8656337
We are all you, anon
Anonymous No.8656351
Gah! Now you're making me angry!
SUFFER!!!
Anonymous No.8656353 >>8656355
Anonymous No.8656355
>>8656353
Why aren't you using the sdxl vae?
Anonymous No.8656396
>>8656349
If you were you would be posting kino vanilla or 1girl standing gens
Anonymous No.8656453 >>8656459 >>8656477
>>8656286
>the whole thread is 3 samefags
how do we revive /h{d,g}g/
Anonymous No.8656458 >>8656466
Anyone know of any artists that do thin and "crisp" line art? Not quite Oekaki, but in the same vein
Anonymous No.8656459
>>8656453
just let it merge naturally back into /hdg/
Anonymous No.8656466 >>8656485
>>8656458
I may have some in mind but you need to post an example
Anonymous No.8656477 >>8656551
>>8656453
>>the whole thread is 3 samefags
saar, you are deboonked
>>8654832
the poll has 206 votes (one unique ip per vote)
Anonymous No.8656485
>>8656466
I don't really have a concrete example right now, I just remember seeing a picture some days ago and thinking "hey I like the way that looks, I should try to replicate that"
I don't remember when or where I saw it so I can't really go looking for it again, I just have this very faint image in my head, so it's more like a feeling
Not very helpful I know, but I kind of just want to experiment, so feel free to post whatever you have
Anonymous No.8656523
Does anyone have a snakeoil loaded finetune config for the machina fork of sd-scripts? blud isn't exactly keen on documentation and I wanna see what's possible without sifting through the code.
Anonymous No.8656551 >>8656556 >>8656558 >>8656562
>>8656477
the poll was reposted in every AI thread on the site
Anonymous No.8656556
>>8656551
grim
Anonymous No.8656558
>>8656551
pp grabbing viroos
Anonymous No.8656562 >>8656564 >>8656568
>>8656551
I don't see it on /lmg/ or /hdg/ so what do you mean by "every"
Anonymous No.8656564
>>8656562
well, the boards that matter.
Anonymous No.8656568 >>8656768
>>8656562
Just a guess, based on seeing it in the non-futa /d/ thread and the photorealism /aco/ thread.
Anonymous No.8656572 >>8656573 >>8656583
Welp, just upgraded to a 5070ti, and now shit is broken. The rest of the net has no idea apparently, Is there anyone here who has gotten reforge working with a 5070ti?
Anonymous No.8656573 >>8656589
>>8656572
What kind of broken we're talking about?
Anonymous No.8656583 >>8656589
>>8656572
Try deleting your venv and any launch commands you have in your webui-user such as xformers and start over.
Anonymous No.8656589 >>8656729
>>8656573
RuntimeError: CUDA error: no kernel image is available for execution on the device

>>8656583
Will try this and report if it works. Thanks for the suggestion.
Anonymous No.8656611 >>8656612 >>8656613
Anonymous No.8656612
>>8656611
Can't tell if she has too many tails or if it's just some retarded BA design
Anonymous No.8656613 >>8656616
>>8656611
No sauce on that penisdog?
Anonymous No.8656616 >>8656619
>>8656613
here
Anonymous No.8656619
>>8656616
Disgusting. Thank you.
Anonymous No.8656636 >>8656639 >>8656649 >>8656684
can novelai do JP text? I know it technically can but I'm curious if the text encoder(?) was setup correctly to read JP input, or if it just gets automatically translated or something
Anonymous No.8656639
>>8656636
nai thread is down the road, lil' bro
Anonymous No.8656640
Anonymous No.8656649
>>8656636
it can't, which is really shitty and funny at the same time
Anonymous No.8656669 >>8656690
>>8656195
>tamiya akito, CGs not danbooru crap
do you perhaps have them sorted and willing to upload somewhere?
Anonymous No.8656684
>>8656636
>Note: since V4.5 uses the T5 tokenizer, be aware that most Unicode characters (e.g. colorful emoji or Japanese characters) are not supported by the model as part of prompts.
Anonymous No.8656690
>>8656669
sorry, I don't
just sadpanda galleries
Anonymous No.8656729
>>8656589
No kernel image means your drivers are broken. You need to get drivers that support Blackwell (5000s). I had to manually get an updated driver on my linux machine for my 5000 card. Windows I assume it's just installing the official Nvidia stuff.
Anonymous No.8656768
>>8656568
Also on the dead /u/ thread.
Anonymous No.8656990 >>8657005
Is there a comparison of the best local models compared to nai 4.5?
Anonymous No.8657005 >>8657017
>>8656990
In terms of what? Because I could generate tons of styles and characters NAI could never do, and also generate text and segmented characters that local could never do
Anonymous No.8657017
>>8657005
Styles and characters yeah. Couldn't care less for text.
Anonymous No.8657020
vibin'
Anonymous No.8657045
what is lil bud vibin' to? :skull: :thinking:
Anonymous No.8657126 >>8657131
My setup broke but I had fun editing silly shit, enjoy
Anonymous No.8657131 >>8657133
>>8657126
not bad, very cool
a shame about the forced cum on their tits
Anonymous No.8657133
>>8657131
I had to do it to stay with the rules but
https://files.catbox.moe/dns7nr.png
Anonymous No.8657157 >>8657298
Anonymous No.8657164 >>8657167 >>8657168 >>8657172
Can we propose trades between generals? I'd love to get Sir Solangeanon and Doodleanon (no, not the pedo one) here in exchange for lilbludskullanon. Thing /hdg/ would go for it?
Anonymous No.8657167
>>8657164
>*Think /hdg/ would go for it?
Anonymous No.8657168
>>8657164
you can just fuck off to that shit hole
Anonymous No.8657171 >>8657173
I'm trying to build the best general through our front office, anon.
Anonymous No.8657172 >>8657176
>>8657164
Why do you want to make the thread worse?
Anonymous No.8657173 >>8657176
>>8657171
/hdg/ is already peak by your standards
Anonymous No.8657176 >>8657191
>>8657172
How so?
>SirSolangeanon
Enthusiastic poster, somehow still not jaded like the majority of us.
>doodleanon
Miss that lil potato headed nigga like you wouldn't believe.
>>8657173
Nah. Those 2 are keeping that general afloat still instead of letting it sink to enter a proper rebuild phase. Whereas we're the explosive franchise with all the new talent that needs guidance from a few veteran pieces to put it together.
Anonymous No.8657191
>>8657176
I'll give you the point on solangeanon since he do listen to feedback
Anonymous No.8657219
watch out chuds, you dont want me to uncage right here right now, ive been keeping this thread chaste so far.
Anonymous No.8657246
>asking for avatarfags
Old 4chan culture is never coming back is it? Rules only exist if someone reports you.
Anonymous No.8657249
what big boomer yappin bout :skull:
Anonymous No.8657263 >>8657265 >>8657267 >>8657331 >>8657361 >>8657665
This place is now wholly indistinguishable from /hdg/, except nobody even bothers to post gens.
Anonymous No.8657265
>>8657263
>except nobody even bothers to post gens
so just like hdg? most of the gens there are shitposts now, either civitai slop reposts or cathag garbage. guess hgg isn't getting spammed (yet)
Anonymous No.8657267 >>8657665
>>8657263
Maybe you shouldn't have run off the trap (formerly otoko_no_ko) genners just to be left with endless threads of autistic slap fighting over toml files.
Anonymous No.8657277
Anonymous No.8657298
>>8657157
nice. more noodlenood.
Anonymous No.8657308
Orc bros?
Anonymous No.8657312
Anonymous No.8657328
Anonymous No.8657331 >>8657340 >>8657358
>>8657263
we need a third general
Anonymous No.8657340
>>8657331
Bake when? I'm ready to move on, sister.
Anonymous No.8657358
>>8657331
What should we call it? I propose /hdgpg/ hentai diffusion gens posting general.
Anonymous No.8657361
>>8657263
Not even close, the amount of retarded botposting in hdg is unbearable.
Anonymous No.8657404 >>8657407 >>8657409 >>8657665
on more important news i retrained this lora and its worse now
either my lora training settings are fucked or this dataset is cursed
thanks for listening to my important news
Anonymous No.8657407
>>8657404
me with every lora i bake ever (i cant train the TE)
Anonymous No.8657409 >>8657413
>>8657404
whats the artist/s?
Anonymous No.8657412 >>8657414
our three funny greek letter friend posted some new models
Anonymous No.8657413 >>8657429 >>8657484
>>8657409
it's for the concept of a dildo reveal 2koma
like this https://danbooru.donmai.us/posts/6868424
i thought it would be an easy train but it breaks down every time
Anonymous No.8657414 >>8657416
>>8657412
old news
Anonymous No.8657416 >>8657417
>>8657414
was there any sort of discussion about it?
Anonymous No.8657417 >>8657420 >>8659886
>>8657416
>>8656335
Anonymous No.8657420 >>8657421
>>8657417
oh...
Anonymous No.8657421
>>8657420
tldr: 1.5 may be okay but the rest are objectively worse than the og.
Anonymous No.8657427
Nyo. Still 291h.
Anonymous No.8657429 >>8657434
>>8657413
i shee
but what about the artists used for that image unless its style bleeding from the lora?
Anonymous No.8657434
>>8657429
style bleeding
Anonymous No.8657484
>>8657413
maybe review your tagging
Anonymous No.8657661 >>8657662
How do I bake a lora having 0 knowledge about it
A style lora in particular
Anonymous No.8657662 >>8657728 >>8657816 >>8658162 >>8658331
>>8657661
step 0. download lora easy training scripts
step 1. collect images. discard ones that're cluttered or potentially confusing.
step 2. disregard danbooru tags, retag all images with wd tagger eva02 large v3. add a trigger word to the start of every .txt
step 3. beg for a toml. keep tokens set to one
step 4. train
Anonymous No.8657665
>>8657263
why you retards always complain about no posting gens without posting anything at all

>>8657267
this is good

>>8657404
you were my last hope for making this concept work
Anonymous No.8657728 >>8657731
>>8657662
How do I use lora easy training scripts if I don't have display drivers? Is there a non-gui version?
Anonymous No.8657731 >>8657735
>>8657728
Are you this anon >>8655077
Anonymous No.8657735 >>8657795
>>8657731
lol no, but I don't have a GPU that I can use for training on my normal PC, only on my headless linux server rig. Wanted to try easy training scripts but once I saw the GUI requirement I gave up.
Anonymous No.8657795 >>8657909
>>8657735
Just install backend and connect to your server from ex ui
Anonymous No.8657816 >>8657824
>>8657662
where download wd tagger eva02 large v3
Anonymous No.8657824
>>8657816
internet
Anonymous No.8657893 >>8658068
Anonymous No.8657909
>>8657795
No clue what you mean with "ex ui" but I managed to tardwrangle the code, it's a bit buggy but I can run the UI on my normal machine and send the config to the backend on the server.
Anonymous No.8657947
fart lora when?
Anonymous No.8657951
you need a lora for that?
Anonymous No.8657954
nyes
Anonymous No.8657994
>Skyla used gust!
>it's super effective!
Anonymous No.8658068 >>8658103
>>8657893
box please
Anonymous No.8658103 >>8658115 >>8658334
>>8658068
https://files.catbox.moe/50k7d8.png
Anonymous No.8658115
>>8658103
Just noticed that the literally me on the left had 2navels.
Anonymous No.8658117 >>8659903 >>8659957
Anonymous No.8658162 >>8658163 >>8658166
>>8657662
Thank you, but it's still very vague
There's a jump from step 1 to step 2, I know that you're supposed to get images and then make a .txt file describing what's in them, but "retag" assumes they're already tagged. Am I missing something? And also, I don't use comfyUI, how do I make use of wd tagger?
>Beg for a toml
I actually have three I found here but don't even know what it does
Anonymous No.8658163
>>8658162
Also, forgot to mention but I have taggui v1.33, which I haven't opened past downloading it as recommended by some other anon
Anonymous No.8658166 >>8658199
>>8658162
>"retag" assumes they're already tagged
Yes, they are tagged on danbooru, but you should ignore those as they are usually extremely incomplete and redundant.
>And also, I don't use comfyUI, how do I make use of wd tagger
Get https://github.com/67372a/stable-diffusion-webui-wd14-tagger and select WD14 EVA02 v3 Large. For manually refining the tags, you can do it with an image viewer and text editor of your choice or use a program like qapyq to handle it more smoothly
Anonymous No.8658199 >>8658229 >>8658564
>>8658166
Got it, I'm assuming I just paste in the path to the directory with the images and then click interrogate and it'll give me all the .txts for manual editing
On the topic of manual work, what approach works best? Tagging everything in the image, using a trigger word, tagging only the main parts of the image, etc
And also, I've heard that base illustrious 1.0 is the best model for training, is it true?
Sorry for all the questions
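Mechanically, the retag step is just a loop over the dataset folder. A minimal sketch below; `tag_image` is a hypothetical stand-in for whatever tagger you actually call (the extension above handles that part for you). The part that matters is the caption format: trigger word first, then comma-separated tags, in a `.txt` next to each image:

```python
from pathlib import Path

def write_caption(image_path: Path, tags: list[str], trigger: str) -> Path:
    """Write a <image>.txt caption: trigger word first, then the tags."""
    caption = ", ".join([trigger] + tags)
    txt_path = image_path.with_suffix(".txt")
    txt_path.write_text(caption, encoding="utf-8")
    return txt_path

def tag_dataset(folder: Path, trigger: str, tag_image) -> int:
    """Run a tagger callback over every image and drop a .txt next to each.
    tag_image is a stand-in: any callable returning a list of tag strings."""
    count = 0
    for img in sorted(folder.iterdir()):
        if img.suffix.lower() in {".png", ".jpg", ".jpeg", ".webp"}:
            write_caption(img, tag_image(img), trigger)
            count += 1
    return count
```

The manual refinement pass then just means opening those `.txt` files in qapyq or a plain text editor.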
Anonymous No.8658229 >>8658331
>>8658199
"base" illustrious is 0.1, not 1.0
most merges include some measure of noobAI, which branched off from 0.1. you'll want to be compatible with those, if not train directly on noob.
Anonymous No.8658236 >>8658398
otaku, neet, jimiko, mojyo, messy, unkempt, slovenly,
Anonymous No.8658331 >>8658344
>>8657662
I see what you mean with beg for a toml
>>8658229
Makes sense, thanks
Anonymous No.8658334
>>8658103
ty
Anonymous No.8658344 >>8658570
>>8658331
i was away all day after posting that lol apologies if you had questions that went unanswered
btw in the bottom right corner of easyscripts, you can set a URL, that'll allow you to type in a web address to an external server and it'll send it there instead of the localhost
Anonymous No.8658398 >>8658416
>>8658236
Anonymous No.8658416 >>8658554
>>8658398
I look and hag like this
Anonymous No.8658502 >>8658834
>>8654860
underrated gravel posting
please post more rat sex
Anonymous No.8658552
The LoRA model isn't doing what I want perfectly so I have to draw on top of the generated pic.

Still pleasantly surprised by the result
Anonymous No.8658554
>>8658416
poofs?
Anonymous No.8658564 >>8658565 >>8658566 >>8658570
>>8658199
You should tag absolutely everything that you can. Autotaggers can give you a solid base but you should always try to add anything they might have missed, especially since in general they seem pretty sloppy at detecting composition tags and sometimes backgrounds too.
You should train on illustrious 0.1, not 1.0, but only if you plan to make your lora compatible with every checkpoint from its family (including noob); otherwise train on noob vpred 1.0 (or eps if for some reason you hate vpred). Don't train on merges, shitmixes and the like; on top of making it way less compatible with other checkpoints, it's possible that the model shits itself during training.
Anonymous No.8658565
>>8658564
>You should train on illustrious 0.1, not 1.0, and only if you plan to make your lora compatible with every checkpoint from its family
this is such BS advice borrowed from the 1.5 era where every model was some weird shitmix of NAIV1. training on illu 0.1 is the same as training on ill 1.0 and using it on noob. noob has been trained significantly past the point of "compatibility"
Anonymous No.8658566 >>8658570
>>8658564
I disagree slightly. I think the philosophy of "bake first, fix tags later" from the OP is still king. It's better to bake and see what mistakes the lora makes, *then* go back and try to fix those pics (you're often better off just deleting them) than to manually tag everything up front.
Anonymous No.8658570 >>8658573 >>8658576
>>8658344
>that'll allow you to type in a web address to an external server and it'll send it there instead of the localhost
That seems pretty useful, what server do you recommend?
>>8658564
I do plan on making the lora compatible, since I use primarily two models (291h and an illustrious shitmix)
>>8658566
Hmmm, you mean just baking with the autotagger stuff and then going back to fix the model if it sucks? Idk I have 0 knowledge on this
I tried baking one yesterday but it was completely broken, so I'll try it again properly today
Btw, what's the difference between using a trigger word or not? I know there's some flexibility you gain by being able to edit the prompt when a model strictly depends on the trigger word to activate, but I already have a lora scheduler extension for that
Anonymous No.8658573 >>8658579
>>8658570
>what server do you recommend?
im not too familiar with that part, but im pretty sure all it is is setting up the backend on another server. only know of it since i used the jupyter notebook and that's how it worked. https://github.com/derrian-distro/LoRA_Easy_Training_Scripts?tab=readme-ov-file#colab
Anonymous No.8658576 >>8658577 >>8658579
>>8658570
I mean eva02 is good enough for 99% of things on its own. If eva doesn't see something in the pic, it means that pic is bad/confusing and you're better off deleting it rather than trying to fix it with manual tags. I've wasted way too many hours on this bullshit and this technique works far better. I only use a trigger word when I'm overwriting an artist the model already knows. And for characters obviously.
Anonymous No.8658577 >>8658604
>>8658576
this
only thing to delete is whenever it spits out conflicting tags. i've had it tag an image both white background & grey background once.
Anonymous No.8658579 >>8658581
>>8658573
I run a 3050 with 8GB of vram, going from yesterday's test it took me 2 hours to bake a lora
Good to know there's an alternative, thanks
>>8658576
>I've wasted way too many hours on this bullshit and this technique works far better
Lol I believe you
Btw, are there any settings I should be aware of in the autotagger? I just ran it with the default settings yesterday lol
Anonymous No.8658581 >>8658591
>>8658579
Default is fine. 0.35 confidence. If you're using Tagger in reforge set the other setting from 0.05 to 0.
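For reference, a minimal sketch of what that confidence cutoff does, plus the "conflicting background tags" check anons bring up. The tag names and scores are invented for illustration; real autotaggers (wd-eva02 etc.) just hand you a tag-to-score mapping:

```python
# Toy autotagger post-processing: keep tags above a confidence
# threshold and flag mutually exclusive background colors.
# Scores below are made up, not real tagger output.

THRESHOLD = 0.35  # the default the tagger UIs ship with

def filter_tags(scores: dict[str, float], threshold: float = THRESHOLD) -> list[str]:
    """Keep tags whose confidence clears the threshold."""
    return sorted(t for t, s in scores.items() if s >= threshold)

def conflicting_backgrounds(tags: list[str]) -> list[str]:
    """Return background-color tags if more than one is present
    (e.g. 'white background' + 'grey background' on one pic).
    'simple background' and 'gradient background' are compatible
    with a color, so they're excluded from the conflict check."""
    colors = [t for t in tags if t.endswith(" background")
              and t not in ("simple background", "gradient background")]
    return colors if len(colors) > 1 else []

scores = {"1girl": 0.99, "white background": 0.91,
          "grey background": 0.38, "solo": 0.97, "smile": 0.12}
kept = filter_tags(scores)        # 'smile' falls below 0.35
print(kept)
print(conflicting_backgrounds(kept))  # both backgrounds survived -> flag
```

Per the advice in the thread, images that trip the conflict check are candidates for deletion rather than manual re-tagging.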
Anonymous No.8658591 >>8658593
>>8658581
Ty, all of you

As for easy training scripts, is there any tutorial or guide that explains each setting? There's like a billion of them
Or should I just beg for a toml? lol
I only have 8GB of vram so I gotta take that into consideration as well
Anonymous No.8658593 >>8658599
>>8658591
>I gotta take that into consideration as well
that'll limit your ability to train styles btw since you can only realistically train the unet, not both unet and TE
Anonymous No.8658599 >>8658601
>>8658593
Wdym?
Anonymous No.8658601 >>8658602
>>8658599
FP8, batch 1, unet only training will be feasible with 8gb of vram
if you train both you'll go above 8gb vram
Anonymous No.8658602 >>8658605
>>8658601
And what are the implications of that?
Since I can't train the text encoder, I won't be able to use new words and concepts is what I'm guessing
So no trigger word?
Anonymous No.8658603
training te is unnecessary tho.
Anonymous No.8658604 >>8658608
>>8658577
The files on my side are always double-tagged with simple background and white background.
Anonymous No.8658605
>>8658602
nah the unet will still latch onto the trigger word, it just wont be as strong w/o the te learning it
Anonymous No.8658607
lil bro is going to train his first te...
Anonymous No.8658608 >>8658610 >>8658613
>>8658604
simple background and white background can work together. white is not grey, however
Anonymous No.8658609 >>8658611
Honestly, if you're training with 8GB of VRAM, you'd be better off using some random Lora service. Their hardware is pretty powerful within reasonable limits.
Anonymous No.8658610
>>8658608
Unless you also have "gradient background" lol
Anonymous No.8658611
>>8658609
Such as?
Anonymous No.8658612 >>8658615 >>8658617
just use collab
Anonymous No.8658613
>>8658608
I just remembered another aspect: they are often tagged with two types of hair colors, and moreover, multi-color or various other color tags are frequently attached.
Anonymous No.8658615 >>8658619
>>8658612
I got a permaban from collab and making new google accounts is a huge hassle these days
Anonymous No.8658617
>>8658612
Alright, I'll do it then
Anonymous No.8658619 >>8658620 >>8658622
>>8658615
>I got a permaban from collab
fucking how? did you try to mine some shitcoin or something?
Anonymous No.8658620
>>8658619
nta but iirc they perma banned anyone using imagegen via colab
Anonymous No.8658622 >>8658623 >>8658660
>>8658619
Hashcat
Anonymous No.8658623
>>8658622
yeah, figures
Anonymous No.8658624 >>8658627
Is it worth or needed to pay for colab pro?
Anonymous No.8658627
>>8658624
You can use free compute; their pro prices aren't very good.
If you plan to pay for compute, something like Runpod and a bunch of other similar services will be cheaper.
Anonymous No.8658635 >>8658639 >>8658649 >>8658661 >>8658911
Switched back to an updated reForge from forge today, and I'm already getting insane placebo that the Euler A Comfy is way better than the Euler A A1111 version. I don't want to xyz all the sampler and schedulers for the 50th time... but the possibility that the forge samplers were somehow broken is too big to ignore.
Anonymous No.8658639 >>8658649
>>8658635
>ancestral samplers
not once
Anonymous No.8658644 >>8658645 >>8658650
My ancestrals are blurring the picture with every step, determinism-kun. can you say the same?
Anonymous No.8658645
>>8658644
my sampler is stochastic
my steps are high
Anonymous No.8658649
>>8658639
>>8658635
We need incestral samplers
Anonymous No.8658650
>>8658644
skill issue
Anonymous No.8658660
>>8658622
What's that?
Anonymous No.8658661 >>8658676
>>8658635
Any c*mfy sampler is fucked for me, it does weird things with the negs like only following some parts of it or none at all
Anonymous No.8658676 >>8658678
>>8658661
>negs
neg pip?
Anonymous No.8658678 >>8658752
>>8658676
no, regular negs
Anonymous No.8658684 >>8658729 >>8658732
>browsing for cute characters to gen
>click on the tokitsukaze
>see a few images of her being constricted by arbok
>ok whatever
>go 10 pages down the line
>still seeing arbok constriction images
>"wtf?"
>search for "tokitsukaze_(kancolle) pokemon"
>literally 12 fucking pages
>turns out someone has been constantly commissioning this exact image since 4 years ago and has not stopped
What in the god damn.
Anonymous No.8658729
>>8658684
the power of autism
Anonymous No.8658732
>>8658684
Absurd! Humanity belongs in its rightful place.
Anonymous No.8658734 >>8658746
Anonymous No.8658746
>>8658734
Finally an actual NAI gen. Not bad. The eyes aren't as shit as v3 ones were.
Anonymous No.8658752
>>8658678
>using negs
Oh nyo nyo nyo
Anonymous No.8658807
ring highlights
Anonymous No.8658834
>>8658502
We just don't get many arknights in general.
Anonymous No.8658909 >>8658942 >>8658980
How do I go about tagging an ai-generated dataset for a lora? The images themselves have this pvc-figure / doll like aesthetic, is there a tag for these types of images?
Anonymous No.8658911
>>8658635
My Reforge is from around.. January maybe? A few months before Panchito abandoned us anyway. When I still used Ancestral, I did placebo myself into using the comfy version. Maybe just better gacha but that's the name of the hobby after all.
Anonymous No.8658934 >>8658937
tried nai for a month with the final full release and it's really fucking bad. it "can" make nice stuff but it's so fucking schizophrenic and inconsistent god damn. inpainting is a nice but not huge upgrade over v3 and of course still completely above anything local has but that alone is not worth the price.
Anonymous No.8658937 >>8658944
>>8658934
But local is consistent and has high resolutions so how is it better? Text?
Anonymous No.8658942 >>8658975
>>8658909
ideally you'd use the prompts, if you have them

>pvc-figure / doll
tag is figure_(medium)

if it's a style lora, I'm still not sure about using the aesthetic tags. For example with "3D" it becomes a trigger tag and the lora does almost nothing without. But if I don't tag it, it won't quite get the style and stay more flat.
Anonymous No.8658944
>>8658937
I think the last sentence was all about inpainting, not overall better.
Anonymous No.8658975
>>8658942
Nah, don't got the prompts, basically a style lora from a pixiv user
Anonymous No.8658980
>>8658909
Run it through an auto tagger and then add stuff you want to associate, it might not stick since ultimately it's weights on the model but it can't hurt it.
Anonymous No.8659033 >>8659040
Man, is 291h broken when it comes to detail refinement or am I goofing?
Anonymous No.8659040 >>8659042
>>8659033
Explain.
Anonymous No.8659042 >>8659044 >>8659047
>>8659040
>Do nice gen
>i2i to get more details
>Low denoise
>Blurry and no details
>Medium denoise
>Still no details
>High denoise
>Good details, changes the entire image
Inpainting works but it's a pain in the ass
Anonymous No.8659044 >>8659046
>>8659042
>Inpainting works but it's a pain in the ass
Stop being lazy
Anonymous No.8659046
>>8659044
No
If I didn't want to be lazy I wouldn't be tinkering with AI
Anonymous No.8659047 >>8659053
>>8659042
Oh. Were you the same anon saying the same a few threads back? If not, I'll tell you what I told him, I tile upscale through CN and have no issues. Not sure about straight i2i and hiresfix.
Anonymous No.8659053 >>8659060
>>8659047
Nope, some anon recommended it on slop and I decided to test
>I tile upscale through CN
What's that?
Anonymous No.8659060 >>8659062
>>8659053
Not at my PC to share my settings, so maybe some other anon can help with that meanwhile, but it's this, used through i2i
>https://civitai.com/models/929685?modelVersionId=1239319
Anonymous No.8659062 >>8659067 >>8659089
>>8659060
How do I even use that?
I use reforge btw
Anonymous No.8659067
>>8659062
You put that in a folder called ControlNet in your models folders. Then in your i2i settings, there should be a section called ControlNet which you then select that model you downloaded it and choose tile_upscale. As far as settings go, don't know them off the top of my head.
Anonymous No.8659089 >>8659093
>>8659062
Controlnet is integrated in reforge, so you go to the box, select tile, select the model, then put control strength to 1 and set your control start step to 0 and end step to 0.8. Increase end step if you get bad anatomy or hallucinations. Save the settings as a preset once you're happy with them.
Anonymous No.8659093 >>8659094
>>8659089
What preprocessor though? Also control mode and resolution?
Anonymous No.8659094 >>8659098
>>8659093
tile resample and balanced. The resolution is your choice. More is better but slower and increases chances of bad anatomy.
Anonymous No.8659098 >>8659099
>>8659094
Cool, seems to have worked
Do I use it for sketch, inpaint and inpaint sketch too?
Anonymous No.8659099
>>8659098
I just use those settings upscaling.
Anonymous No.8659110 >>8659126
>Doesn't work with loras
wth man why can't I just upscale like normal in this fucking model
Anonymous No.8659116
Why is latent upscale bad again? I never got extra nipples or whatever with the right amount of denoise.
Anonymous No.8659119
Someone recommend me a non-sloppy model that isn't a goddamn nightmare to work with
I just want to do my regular workflow
Anonymous No.8659122 >>8659224
merge greek letters 1.5 with 1.0 2d custom at a 60/40 ratio, receive kino
Anonymous No.8659126 >>8659129
>>8659110
>doesn't work well with loras*
It does work.
Anonymous No.8659129 >>8659130
>>8659126
Exclusively gives me nonetype
Anonymous No.8659130 >>8659131
>>8659129
Does not work together with multidiffusion on reforge.
Anonymous No.8659131
>>8659130
Not using multidiffusion either
But since you said it works, I guess I should try and see what's causing the issue
Anonymous No.8659134
>Adetailer just sometimes stops working or doesn't work at all til I restart the cmd prompt for reforge
It's such a weird / annoying problem and it's only really started happening when I swapped over to vpred models.
Anonymous No.8659141 >>8659142 >>8659149
any tips for the Torch is not able to use GPU error? checked google and a lot of people have the issue with no clear answer, tried all the various suggested things with no results. I have a 3080. I am on a fresh install of windows 11.
Anonymous No.8659142 >>8659145
>>8659141
do you have the right driver for your graphics card?
Anonymous No.8659143 >>8659186
r3mix is basically superior vpred
Anonymous No.8659145 >>8659149
>>8659142
yeah updating it was one of the first things I tried, rebooted pc after and same error
Anonymous No.8659149 >>8659152
>>8659141
What are you trying to use, A1111/Forge/Reforge or something else?

When you say you tried all the various suggestions, what did those entail? The most promising ones that come up for me are:

https://stackoverflow.com/questions/75910666/how-to-solve-torch-is-not-able-to-use-gpuerror

https://www.reddit.com/r/StableDiffusion/comments/z6nkh0/torch_is_not_able_to_use_gpu/

It seems like it usually is a Torch version / GPU driver version mismatch. If you already did the update like you said in >>8659145, did you check that the versions match? You might want to try reinstalling everything from scratch if you've updated.

Is your 3080 GPU 0? If your motherboard has integrated graphics, that might be GPU 0 instead, which might mess things up.
Anonymous No.8659152
>>8659149
Turned out I needed a specific Visual C++ redistributable I think? it's still loading but it didn't give me that error anymore. I had tried Stability Matrix and it auto-installed that redist among other things, and now reforge works. I had combed the a11 and reforge pages to make sure I had all dependencies but I guess they don't mention that.
Anonymous No.8659162 >>8659163
Why didn't anyone tell me epsilon + cyberfix is superior to vpred in every single aspect
Anonymous No.8659163 >>8659175
>>8659162
now post one of your gens lil bud
Anonymous No.8659164
>cyberfix
P-panchito.. onegai..
Anonymous No.8659171
bro disappeared into the sepia aether
Anonymous No.8659173
I've never had issues because the Torch version and GPU driver version didn't match. On Linux, I ran into problems when the Torch install command was incorrect.
Anonymous No.8659175 >>8659177 >>8659179
>>8659163
Anonymous No.8659177
>>8659175
Best gen in the thread.
Anonymous No.8659179 >>8659204
>>8659175
10/10 lil bud, keep it up
Anonymous No.8659185
Nevermind it doesn't get my favorite artist
Anonymous No.8659186 >>8659202
>>8659143
Link me. I'll test its trap (formerly otoko_no_ko) capability.
Anonymous No.8659202
>>8659186
https://civitai.com/models/1347947
Anonymous No.8659204 >>8659217
>>8659179
you still live?
Anonymous No.8659217
>>8659204
I am always lurking
Anonymous No.8659224 >>8659233
>>8659122
which sampler/scheduler? cfg, quality tags/negs?
Anonymous No.8659233
>>8659224
use it exactly like you would 1.0 2d custom, I like euler a cfg++ 1.5, simple, and quality tags/negs depends on style but generally newest is safe to keep as a quality tag
Anonymous No.8659248 >>8659249 >>8659252 >>8659519
am i the only one who feels like their gens get worse with every new model/new experimentation? like i look at gens i did many months ago and they look marginally more interesting/appealing
Anonymous No.8659249
>>8659248
It's the tinkertroon fallacy, where the process of genning becomes more important than the actual results. This is commonly seen in cumfy users, who build impossibly convoluted noodle behemoths to gen fried 1girl, butiful saarground noisy crap.
Anonymous No.8659250
>he goofed when he should have gooned
Anonymous No.8659252
>>8659248
yes, thus I'm going back to sd 1.5 with nai v2
Anonymous No.8659261
lol
lmao
Anonymous No.8659263 >>8659280
Could i request someone bake me lora if had an "ok" dataset ready?,
id like to do it on my own but for now sorting the bullshit that is the process to make one to begin with
Anonymous No.8659280 >>8659288
>>8659263
Link dataset, the worst that can happen is that people call you a faggot
Anonymous No.8659288 >>8659490 >>8659557 >>8659721
>>8659280
Here, i have no clue what i'm doing: https://litter.catbox.moe/0uewkgjbmyh9yoey.rar
Anonymous No.8659314
>>8653556
No, I'm pretty sure it's because the people into that shit are disproportionately more desperate for content because of how dogshit the style is, and therefore there's less non-AI content out there with it
Anonymous No.8659317 >>8660081
Anonymous No.8659367 >>8659416
Anonymous No.8659384
Can make xim darker?
Anonymous No.8659396
Darker than black?
Anonymous No.8659398
exactly
Anonymous No.8659399
bible black even
Anonymous No.8659416
>>8659367
based horse fucker
Anonymous No.8659455 >>8659463
hey, retard here, need some help with genning
last time I was here was during the pony days, now I see illustrious is the go-to model, should I use it with the sdxl vae?
Also now I see there's a sampling method and schedule type. How can I turn off the schedule type? I used to make gens with Euler A, now I'm using that one and SGM uniform in schedule type since it seems it gives the best results. I'm using forge because I'm too much of a brainlet to use reforge
thanks
Also, I'm using this illustrious, is this one alright? https://civitai.com/models/795765
Anonymous No.8659461 >>8659464
Does he know?
Anonymous No.8659463
>>8659455
There is no reason to use a VAE override on SDXL, just leave it blank.

You always used a schedule type, only A11/Forge hid it from you. It was "karras" for the dpmpp sampler line, and "normal" for everything else.

That is the correct illustrious, though most people have moved on to noobAI which is a further finetune.
Start here https://civitai.com/models/1301670/291h
or https://civitai.com/models/1201815?modelVersionId=1491533

then once you're comfortable consider moving onto the base model https://civitai.com/models/833294/noobai-xl-nai-xl
it's harder to use than merges, kinda like pony and autismmix/reweik
Anonymous No.8659464 >>8659465
>>8659461
I guess he doesn't. We better tell him, before he makes a fool of himself in front of the whole thread.
Anonymous No.8659465
>>8659464
yeah, about that...
Anonymous No.8659490 >>8659721
>>8659288
I'm on it.
Anonymous No.8659495 >>8659497 >>8659499 >>8659771
how do you guys find 200 pics in a coherent style to bake an artist lora, I am lucky if I can find 40 after removing shit that would be incomprehensible to the model
Anonymous No.8659497 >>8659502
>>8659495
thats enough
Anonymous No.8659499
>>8659495
I usually use hires patreon/fanbox rewards for the last 2-3 years
Anonymous No.8659502
>>8659497
Yeah, but I'd rather have more pictures, even if just to do crops.
Anonymous No.8659519
>>8659248
only my old 1.5 gens are worse than my current sdxl gens, almost all my sdxl gens have the same ""quality"" but some gens with some artist mixes in ""old"" vpred models have noticeable worse colours which is something expected
Anonymous No.8659524 >>8659559
Anonymous No.8659557 >>8659630 >>8659771
>>8659288
https://mega.nz/folder/gTtRXRhI#JlvWr2DBl1bQpRzMO4MyoQ
Different Anon than the one who said they were working on it. The one that ends with -TW uses the activation tag you had in your dataset while the one without -TW doesn't have an activation tag. I also threw a grid in the folder that has the Lora without an activation tag Vs the Lora with the activation tag. The first image in the grid is without an activation tag and the second image is with an activation tag and so forth.
You can also throw in white pupils to get a bit closer to the way the artist does their eyes.
Anonymous No.8659559 >>8659574
>>8659524
b-box...?
Anonymous No.8659563 >>8659566
tensor is fucking dead

https://tensor.art/event/NSFW&CelebrityAdjustments
Anonymous No.8659566 >>8659586 >>8659769 >>8660345
>>8659563
oh nyo, anyway
I had this stupid idea the other day
Anonymous No.8659574 >>8659585
>>8659559
lost the metadata in an edit but it's on 102d custom, a 0.6 denoise img2img of https://cdni.pornpics.com/1280/1/45/36999346/36999346_011_3465.jpg
Anonymous No.8659585 >>8659587 >>8659634 >>8659710
>>8659574
Man that pose doesn't work normally? That really sucks.
Anonymous No.8659586
>>8659566
catbox?
Anonymous No.8659587 >>8659638
>>8659585
Maybe it does? I'm just going through my old porn collection and seeing what they look like in anime style.
Anonymous No.8659599
Anonymous No.8659630 >>8659643 >>8659644 >>8659646 >>8659656
>>8659557
Different Lora baking anon here, just wondering what the difference is between an activation tag and no activation tag?
As far as I understand, there are three (?) main ways of training a style lora.
1. No tags for any image + only train unet
2. Normal tagging for all images (exclude style tags) + train unet/TE or only unet
3. Normal tagging with activation style tag + train unet and TE
1 -> Rigid lora with heavy style "burned" into the unet, but very consistent since it isn't tied to any tags (always activates)
2 -> Slightly more flexible, but style is spread out over tags in the dataset. The more tags from the dataset you include in your gens, the heavier the style. Potentially learns a "sub"-style tied to specific tags, i.e. 1girl activates the 1girl sub-style and leaves out style for, say, 1boy.
3 -> Less flexible than 2, but more consistent since the activation tag will eat up most of the weight updates (trained style). If trained properly, style should be minimal if not using the activation tag.
Am I understanding different style loras correctly?
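The three setups above differ only in what ends up in each image's caption file. A toy caption writer, purely to make the difference concrete (the strategy numbering is this thread's, not any trainer's API; real trainers just read whatever .txt sits next to each image):

```python
# Produce a caption string for one image under the three
# style-lora tagging strategies discussed above. Illustrative
# only; the trigger word "mystyle" is a made-up placeholder.

def make_caption(tags: list[str], strategy: int, trigger: str = "mystyle") -> str:
    if strategy == 1:   # no tags at all, unet-only training
        return ""
    if strategy == 2:   # normal tagging, style tags excluded
        return ", ".join(tags)
    if strategy == 3:   # normal tagging + activation tag first
        return ", ".join([trigger] + tags)
    raise ValueError(f"unknown strategy: {strategy}")

tags = ["1girl", "solo", "white background"]
print(make_caption(tags, 1))  # ''
print(make_caption(tags, 2))  # '1girl, solo, white background'
print(make_caption(tags, 3))  # 'mystyle, 1girl, solo, white background'
```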
Anonymous No.8659631 >>8659742
Is 102d custom still peak? The guy made a new one and I tried it and it's some hot ass.
Anonymous No.8659634 >>8659638 >>8659710
>>8659585
>lying on side, sex from behind, spooning, one leg up
etc should be about right
Anonymous No.8659638
>>8659587
>>8659634
Yeah I will test. Baking so I can't check right now.
Anonymous No.8659643
>>8659630
Yes it's basically this.
Anonymous No.8659644
>>8659630
>2 -> Slightly more flexible, but style is spread out over tags in the dataset. The more tags from the dataset you include in your gens, the heavier the style. Potentially learn "sub"-style tied to specific tags, I.E. 1girl activates the 1girl sub-style and leaves out style for say 1boy.
This shouldn't happen unless your config is shit
Anonymous No.8659646 >>8659649 >>8659668
>>8659630
Here's a weird fact, TE doesn't do what you think it does. You can teach the model new words even when training Unet only.
Anonymous No.8659649 >>8659652
>>8659646
yeah if you train for 10x longer and even then your models will be schizo like novelais
Anonymous No.8659651
I never saw a positive impact with TE training. the faster convergence never made up for the shitty hands
Anonymous No.8659652
>>8659649
Nope. And you can go back all the way to SD1.5 character loras, most were trained Unet only on characters the model didn't recognize at all.

You can try that on noob as well, train a character lora Unet only with some made up name like f78fg3f and it'll work as her activation tag.
Anonymous No.8659656
>>8659630
>1. No tags for any image + only train unet
for whatever reason, with unet only, i still got better results with a trigger word. no trigger word made it barely learn the style
Anonymous No.8659668
>>8659646
Sure you can, but if your tag happens to be completely undertrained in the embedding space, i.e. the token "x4jd4" has closest cosine-similar vectors of completely unrelated concepts from your intended target, then you're forced to realign a lot of U-net weights to account for this outlier vector embedding. I think training the TE makes perfect sense if you want to slightly realign your vector embeddings, but you have to stop your TE training before you stop your U-Net training to avoid misalignment. Using a very low learning rate for TE and stopping it at 25%-50% of the total training steps seems to work pretty well.
Now, the downside is that you are re-aligning all the well-trained tags/tokens as well (1girl, etc). This is a solved problem in theory, just haven't seen anyone actually implement it for any of the popular trainers.
What you would do is create a new embedding for your style activation token (textual inversion), then only update that embedding in the TE, then train the U-net with the embedding as a tag in your dataset. That would be the "ideal" style/concept Lora training setup in my mind.
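The cosine-similarity point can be illustrated without any SD code at all. The vectors here are random toys, not real CLIP embeddings: a freshly randomized token starts roughly orthogonal to everything, while one initialized near a related concept starts almost aligned with it:

```python
# Toy illustration of the embedding-alignment argument above:
# a randomly initialized token vector is ~orthogonal to any
# concept vector, while one initialized from a related token
# starts close to it. Made-up vectors, not CLIP weights.
import math
import random

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

random.seed(0)
dim = 768  # CLIP-L text embedding width
concept = [random.gauss(0, 1) for _ in range(dim)]   # stand-in for a known tag
fresh = [random.gauss(0, 1) for _ in range(dim)]     # "x4jd4", random init
warm = [c + random.gauss(0, 0.1) for c in concept]   # init from a related token

print(f"random init vs concept: {cosine(fresh, concept):+.3f}")  # near 0
print(f"warm init   vs concept: {cosine(warm, concept):+.3f}")   # near 1
```

Which is the intuition behind warm-starting a textual-inversion embedding from an existing token instead of noise.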
Anonymous No.8659710
>>8659585
>>8659634
I've done spooning before a few times, works alright.
Anonymous No.8659721 >>8659726 >>8659771
>>8659288
>>8659490
here's you LoRA saar
https://files.catbox.moe/q8xyxg.safetensors
Like other Anon, no trigger but "white pupils, white skin" can help.
Anonymous No.8659726
>>8659721
See that shit with the hair? I was *this* close to scrubbing those out but I was already annoyed after having to crop each pic properly. I don't want every "very long hair" to do that shit but it's so common in those pics. Guess I have to start over.
Anonymous No.8659742 >>8659749
>>8659631
all current shitmixes are more or less the same
Anonymous No.8659749 >>8659750
>>8659742
Yeah, it just feels at this point like I'm chasing ghosts after all the ones I've tried. I can get kinda what I want on non-vpred but I still can't dodge the shiny skin, and vpred looks cleaner but then half the time it's vague or I still get bad hands more than non-vpred.
Anonymous No.8659750 >>8659752 >>8659758
>>8659749
are all of you allergic to inpaint or something
Anonymous No.8659752
>>8659750
if i inpaint, how am i supposed to feel smug about my model being superior!?
Anonymous No.8659758
>>8659750
That's more time I'm not genning the next image though.
Anonymous No.8659769
>>8659566
Cute Kula. Post more Kula. Even if it's /e/.
Anonymous No.8659771
>>8659721
>>8659557
i kneel, thank you anons

>>8659495
>how do you guys find 200 pics in a coherent style
you don't, i got turbo lucky. Most of his stuff is not particularly hard to edit with photoshop and is mostly white backgrounds
also most artists can't into basic organization and/or consistent posting, you might be missing pictures buried somewhere
Anonymous No.8659872 >>8659876 >>8659886
3 greek letters 1.5 is kinda alright. Genned with it a bit yesterday and wasn't immediately repulsed so I'll give it an extensive test later.
Anonymous No.8659876 >>8659881
>>8659872
1.5 as in a 1.5 model?
Anonymous No.8659881 >>8659886
>>8659876
It's better to avoid the schizos and stick to 102d.
Anonymous No.8659885 >>8659887 >>8659892 >>8659909 >>8659917 >>8659924
Just... merge greek letters 1.5 and 1.0 2d custom at a 60/40 ratio
Anonymous No.8659886
>>8659881
Except 291 w lora & 291h mog 102d, tourist.
>>8659872
Saw some anon mention it yesterday. >>8657417 Missed the initial conversation.
>https://civitai.com/models/1217645?modelVersionId=1976509
Anonymous No.8659887 >>8659888 >>8659892
>>8659885
that's almost what r3mix is
Anonymous No.8659888
>>8659887
hmm, actually kind of close, yeah
Anonymous No.8659892 >>8659903
The one thing I will say about 3 greeks is that it needs muted color in neg. I liked everything about it on simple gen testing but it looked washed out. Added that and then it started to click. Again, I'll give it a big test later.
>>8659885
>>8659887
>r3mix
Tried it from anon's suggestion yesterday and didn't clear the initial test phase for me. It's VERY good at anatomy. Don't recall a single issue with hands once. But it's one of those harsher models that can't do brush/sketch mixes too well. Even lowering CFG to levels where shit just starts floating around. Looked at the bake and
>chromayume
Makes sense.
Anonymous No.8659894 >>8659895
Does anyone ever use Pony still? Haven't touched it in a while, not sure which scheduler to use. I'm guessing either normal or SGM?

Was hoping to test some styles on it.
Anonymous No.8659895
>>8659894
Isn't it Karras?
Anonymous No.8659903 >>8659907 >>8659917 >>8659957
>>8659892
>But it's one of those harsher models that can't do brush/sketch mixes too well
On the other hand the meme merge I've been shilling is pretty good at them. This gen and >>8658117 used it
Anonymous No.8659907 >>8659909
>>8659903
Shill it to me, my good sir.
Anonymous No.8659909 >>8659966
>>8659907
see >>8659885
Anonymous No.8659917 >>8659923
>>8659903
is it this one >>8659885
Anonymous No.8659923 >>8659957
>>8659917
Yes. And it uses 1.0 2d custom's CLIP, thought I should mention that in case it matters
Anonymous No.8659924 >>8659969
>>8659885
I don't know how to merge teach me
Anonymous No.8659926 >>8659929 >>8659933
Has anybody used https://civitai.com/models/99619/control-lora-collection ? Seems to me like it's one of those "Guidance" things like FreeU, Perturbed-Attention Guidance, Smoothed Energy Guidance, etc but in LORA form.

The CivitAI page recommends using it at half-strength. It seems like it might improve things sometimes? Like every other guidance thing I've tried it seems inconsistent on whether it's making things better or just making random changes.
Anonymous No.8659929
>>8659926
holy snakeoil
Anonymous No.8659933
>>8659926
Nah it basically is a Pony "slider" for Illu/Noob
Anonymous No.8659957 >>8659969
>>8659923
thanks, will try it out. can you box >>8659903 or
>>8658117 ?

mostly cause i want to know if you use any quality tags, negs, snakeoil, etc
Anonymous No.8659966
>>8659909
Thanks for the idea, anon. Whereas I didn't merge your models, I decided to try 291 + 3 greeks. We're SO back.
Anonymous No.8659969 >>8659972 >>8660144
>>8659957
euler A CFG++ 1.5 simple, guidance limiter sigma start 25, sigma end 0.28
quality tags: newest, very awa
negs: sepia, old, early, skinny, watermark, @ @, bkub, shiny skin,
I haven't really done any testing on those for this model though, just treating it like 1.0 2d and it's working well so far
>>8659924
Unless you use comfy too you're going to have to find out how yourself I'm afraid
Anonymous No.8659972 >>8659980
>>8659969
I use comfy. Teach me instead.
Anonymous No.8659980 >>8659981
>>8659972
You'll need this https://github.com/Miyuutsu/comfyui-save-vpred
2 load checkpoint nodes, 1 modelmergesimple node, 1 save checkpoint v-pred node, set the ratio to 0.4, connect noodles, run
Anonymous No.8659981 >>8659984 >>8659987 >>8659987
>>8659980
0.6 if you have greek as model1 right?
Anonymous No.8659984 >>8660030
>>8659981
The merge may in fact be 60% 1.0 2d and 40% greek then I am retarded
Anonymous No.8659987 >>8660001 >>8660006
>>8659981
>>8659981
Nah. 0 ratio is basically 100% model A and 0% model B. If you set it to 0.6, you're getting 40% model A and 60% model B.
Anonymous No.8660001 >>8660011
>>8659987
https://comfyui-wiki.com/en/comfyui-nodes/advanced/model-merging/model-merge-simple

this says its 100% model 1 if you use ratio 1
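Going by that wiki page, ModelMergeSimple's ratio is the weight of model 1. A plain-Python sketch of the interpolation, with dicts standing in for state_dicts; note the direction is exactly what's being disputed here, so treat it as an assumption and verify in your own UI:

```python
# Linear merge of two "checkpoints" (toy dicts standing in for
# state_dicts). Per the ComfyUI wiki linked above, ratio is the
# weight of model 1 -- reportedly the opposite convention to the
# webui merge tab, so double-check the direction in your setup.

def merge(model1: dict, model2: dict, ratio: float) -> dict:
    return {k: ratio * model1[k] + (1.0 - ratio) * model2[k] for k in model1}

greek = {"block.weight": 0.0}    # stand-in for greek letters 1.5
custom = {"block.weight": 10.0}  # stand-in for 1.0 2d custom

# ratio 0.4 with greek as model 1 -> 40% greek, 60% 2d custom
print(merge(greek, custom, 0.4))  # {'block.weight': 6.0}
print(merge(greek, custom, 1.0))  # {'block.weight': 0.0}, i.e. 100% model 1
```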
Anonymous No.8660006 >>8660009 >>8660011
>>8659987
yeah i feel like comfy merge is exact opposite of webui merge
just c*mfy thing being contrarian
Anonymous No.8660009
>>8660006
Wait so it's backwards in reforge? The comfy version makes more sense though. Model A:Model B is Model A/Model B.
Anonymous No.8660011 >>8660029
>>8660001
>>8660006
Yeah I should have said that's how it is on reforge.
Anonymous No.8660029
>>8660011
ah ok all good
Anonymous No.8660030 >>8660036
>>8659984
did you use comfy or reforge? just wanted to double check
Anonymous No.8660033
>merge can't do darks as good anymore
More snakeoil...
Anonymous No.8660036
>>8660030
Comfy with greek letters 1.5 as model 1, 1.0 2d custom as model 2, and the ratio set to 0.4. I guess that means it's 60% 1.0 2d custom and I misunderstood how the node works
Anonymous No.8660081
>>8659317
Ummm box please?
Anonymous No.8660134 >>8660136 >>8660140 >>8660147
any noobai model that understands how to do triple anal properly?
Anonymous No.8660136 >>8660137
>>8660134
why you do dat lil bro?
Anonymous No.8660137
>>8660136
why not?
Anonymous No.8660138
gottem
Anonymous No.8660140
>>8660134
I didn't use any artist tag and quite literally copy-pasted all the tags from one existing image on the booru, but it seems like it does?
>https://files.catbox.moe/tvy9q6.png
Anonymous No.8660141
It's not gay if only the tips touch.
Anonymous No.8660144 >>8660242
>>8659969
newest, very awa, artist, tags? or do you use artist before
Anonymous No.8660147
>>8660134
Shitmixes don't add knowledge, they dilute it.
Also, if you add more emphasis to "double anal", you get an increasing number of cocks in the hole.
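For reference, webui-style emphasis multiplies a tag's attention weight by 1.1 per pair of parentheses, or sets it explicitly with `(tag:1.4)`. A simplified sketch of that weighting (real parsers also handle nesting within longer prompts and escaped parens; the function name is made up):

```python
# Simplified webui-style emphasis weighting: each paren pair is x1.1,
# "(tag:1.4)" sets the weight explicitly.
def emphasis_weight(tag: str) -> float:
    inner = tag.strip("()")
    if ":" in inner:
        return float(inner.split(":")[1])  # explicit weight
    depth = 0
    while tag.startswith("(") and tag.endswith(")"):
        tag = tag[1:-1]
        depth += 1
    return round(1.1 ** depth, 4)

print(emphasis_weight("(double anal)"))      # 1.1
print(emphasis_weight("((double anal))"))    # 1.21
print(emphasis_weight("(double anal:1.4)"))  # 1.4
```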
Anonymous No.8660156
Anonymous No.8660169 >>8660428
>Train lora on sfw artist
>Works perfectly on sfw
>Falls completely off on any nsfw
Nice
Anonymous No.8660220 >>8660221
Not exactly related, but I don't want to deal with /sdg/ faggots. What model is the best for doing backgrounds? No characters just some pretty landscapes and shit.
Anonymous No.8660221 >>8660260
>>8660220
flux
Anonymous No.8660242
>>8660144
style, BREAK, 1girl prompts, BREAK, 1boy prompts, background, quality tags is basically how I prompt
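That ordering is just string assembly with BREAK separators; a sketch with placeholder tags (every tag here is a made-up example, not a recommendation):

```python
# Assemble a prompt in the order described above, with BREAK between
# segments so each gets its own CLIP chunk. All tags are placeholders.
segments = [
    "artist name",                        # style
    "1girl, long hair, blue eyes",        # 1girl prompts
    "1boy, faceless male",                # 1boy prompts
    "indoors, bedroom, newest, very awa", # background + quality tags
]
prompt = " BREAK ".join(segments)
print(prompt)
```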
Anonymous No.8660260 >>8660262 >>8660435
>>8660221
it doesn't do anime style landscapes that well unfortunately
Anonymous No.8660262 >>8660263
>>8660260
nai 4.5 then
Anonymous No.8660263 >>8660264
>>8660262
What if I don't want to give the roach any money?
Anonymous No.8660264
>>8660263
based64
Anonymous No.8660271
Anonymous No.8660289
>always thought seitoedaha had a very nice style
>he never draws the girls I like
AI is a blessing to scratch that itch
Anonymous No.8660345 >>8660381
>>8659566
seconding catbox, I like this a lot
Anonymous No.8660365 >>8660388
Any loras to help with see-through? A lot of the time what's supposed to be underneath just ends up going on top, especially bra/bikini straps, and the bikini under clothes tag doesn't seem to do much to help
Anonymous No.8660381
>>8660345
It's very obviously inpainted.
Anonymous No.8660388 >>8660491
>>8660365
The tag is "bikini_visible_through_clothes". "under clothes" just means they're worn under non-transparent regular clothes and peek out.

I have not tried the bikini version and it has few images, but "bra visible through clothes" is very reliable as long as you don't prompt other bra tags alongside it.
Anonymous No.8660428 >>8660436
>>8660169
Unironic skill issue.
Anonymous No.8660435 >>8660437 >>8660442 >>8660493
>>8660260
You're not wrong, but take a look at this. Flux leans heavily toward 3d but I think these came out good.
https://files.catbox.moe/fib8r5.png
https://files.catbox.moe/m3knnv.png
https://files.catbox.moe/gvlk7o.png
https://files.catbox.moe/avirbj.png
https://files.catbox.moe/9tpeb7.png
These are just the ones I've done lately. I've been using flux for textgen story backgrounds for a year now and it works well. Also fuck /sdg/ and fuck /dalle/
Anonymous No.8660436
>>8660428
Based chastiser.
Anonymous No.8660437 >>8660445
>>8660435
Those are cool wtf
Anonymous No.8660442 >>8660445
>>8660435
Damn.
I wish Noob had that level of coherency and sharpness, not just for backgrounds.
Anonymous No.8660445 >>8660448 >>8660471 >>8660484
>>8660437
>>8660442
Yeah adding studio ghibli to the prompt really helped to push it closer to 2d. Chroma will save us though, I can feel it. Flux models can really follow your prompts.
https://files.catbox.moe/korwws.png
https://files.catbox.moe/59nsan.png
https://files.catbox.moe/sswzye.png
https://files.catbox.moe/v2d7yx.png
https://files.catbox.moe/w74jda.png
Anonymous No.8660448
>>8660445
I really hope so, those backgrounds are way better than anything I have managed to pull off
Anonymous No.8660471
>>8660445
nice
Anonymous No.8660484 >>8660596
>>8660445
chroma won't save shit. it's incredibly poorly trained and does not hold up compared to flux dev. flux as a model had potential, but not with whatever the fuck chroma is doing
Anonymous No.8660491
>>8660388
Ah no wonder I wasn't getting what I wanted, thanks
Anonymous No.8660493
>>8660435
yeah, i played with flux quite a bit, but it doesn't really look quite how i want
Anonymous No.8660596
>>8660484
a finetune of chroma could work
chroma itself won't
Anonymous No.8660613