
Thread 8624386

824 posts 214 images /h/
Anonymous No.8624386 [Report] >>8624399 >>8624401 >>8624637 >>8625345
/hgg/ Hentai Generation General #008
Not this shit again edition

Previous Thread: >>8613148

>LOCAL UI
reForge: https://github.com/Panchovix/stable-diffusion-webui-reForge
Comfy: https://github.com/comfyanonymous/ComfyUI

>RESOURCES
Wiki: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki | https://comfyanonymous.github.io/ComfyUI_examples
Training: https://rentry.org/59xed3 | https://github.com/derrian-distro/LoRA_Easy_Training_Scripts | https://github.com/bmaltais/kohya_ss | https://github.com/Nerogar/OneTrainer
Tags: https://danbooru.donmai.us/wiki_pages/tag_groups | https://danbooru.donmai.us/related_tag
ControlNet: https://rentry.org/dummycontrolnet | https://civitai.com/models/136070
IOPaint (LamaCleaner): https://www.iopaint.com/install
Upscalers: https://openmodeldb.info
Booru: https://aibooru.online
4chanX Catbox/NAI prompt userscript: https://rentry.org/hdgcb
Illustrious-related: https://rentry.org/illustrious_loras_n_stuff
Useful Nodes/Extensions: https://rentry.org/8csaevw5

OP Template/Logo: https://rentry.org/hgg-op/edit | https://files.catbox.moe/om5a99.png
Anonymous No.8624388 [Report] >>8624393 >>8624403
>>8624387
thx bwo, ily
Anonymous No.8624393 [Report] >>8625223
>>8624388
hey bwo, what resolutions do you train on? saw in the last thread some anons saying you recommend more than 1mp
Anonymous No.8624399 [Report] >>8624540
>>8624386 (OP)
Could've just let it die, there doesn't seem to be much difference to /hdg/ these past few days.
Anonymous No.8624401 [Report] >>8624405
>>8624386 (OP)
Thread moves faster at page 10 than it does at page 1.
Anonymous No.8624403 [Report] >>8624436 >>8625223
>>8624388
>[masterpiece, best quality::0.6]
Out of curiosity, what's this for? Do you think those tags negatively impact finer details?
Anonymous No.8624405 [Report]
>>8624401
I blame all of you for this
Anonymous No.8624406 [Report]
where are the highlights
Anonymous No.8624411 [Report] >>8624439 >>8624441 >>8624541 >>8624603
Anyone have a plan of attack for more consistent and better backgrounds? I want to make a sequence of images of 2 characters on a bed and have the camera move from shot to shot. Is there a way to keep the windows facing the right way, the nightstand staying to the right of the bed, the mattress staying the same color, etc.?

I'm open to any bat-shit theories or even the use of 3d modeling to solve it.
Anonymous No.8624436 [Report] >>8624993 >>8625223
>>8624403
It's placebo that he can't explain. Everyone's doing something different with quality tags anyway.
Anonymous No.8624439 [Report]
>>8624411
The best way is to sketch and inpaint, anon. There's no getting around the fact that you should be learning to draw at this point.
Anonymous No.8624441 [Report]
>>8624411
I have tried many things to get consistent backgrounds. The most reliable way to do it is controlnet, but due to the nature of all of this, while you get some sort of consistency on the objects, the shading and overall colours will inevitably vary; you'll need to correct them with an external tool like PS.
Anonymous No.8624493 [Report] >>8624598 >>8624606
Anonymous No.8624505 [Report]
Anonymous No.8624540 [Report] >>8624608
>>8624399
nvm I take it back
Anonymous No.8624541 [Report]
>>8624411
>backgrounds
ishiggydiggy
Anonymous No.8624598 [Report] >>8624613
>>8624493
Catbox? Did you add the camera effect after? Very rarely does it come out that clean.
Anonymous No.8624603 [Report]
>>8624411
Even if you get the locations right, you're unlikely to get the exact same design of every piece of furniture. Maybe if your checkpoint/loras are really overfit, or if you give each object a long and detailed prompt. You can use regional prompter with very fine masks, tell it exactly where you want every piece of furniture. Combine with controlnet of some very rough geometry, edges of the room, window frame, a box for the bedside table, etc.

Just guessing here, I've only done this for characters not backgrounds. Might give it a try later.
Anonymous No.8624605 [Report] >>8624607 >>8624617 >>8625131
Anonymous No.8624606 [Report] >>8624613
>>8624493
>apse
sad it doesn't say arse
Anonymous No.8624607 [Report] >>8624612
>>8624605
That's a surprisingly human-looking black person.
Anonymous No.8624608 [Report]
>>8624540
The spam has been pretty constant over there for some reason no one even remembers. Annoying, but funny how the lack of care from any moderation is the only thing really going for the trolls.
Anonymous No.8624612 [Report] >>8624616
>>8624607
I got tired of self insert pov and the 1boys look less rapey when you don't prompt ntr or giga penis.
Anonymous No.8624613 [Report]
>>8624598
I added those in post and then inpainted them a little, sadly
Here is the box anyway if you want it
>https://files.catbox.moe/dw0ht9.png

>>8624606
Got a little lazy fixing the text on both images
Anonymous No.8624616 [Report]
>>8624612
yeah but there's literally no point having sex without stomach bulge
Anonymous No.8624617 [Report]
>>8624605
based contrast enjoyer
Anonymous No.8624637 [Report] >>8624662
>>8624386 (OP)
isn't that pic a bit too risky?
Anonymous No.8624656 [Report] >>8624666 >>8624675
Alright, I've got a new AI rig all set up and ready to train some loras. I have some datasets ready to go. What's a good VPRED config I could start with, and which trainer do people use these days?
Anonymous No.8624662 [Report]
>>8624637
It is fine and there is nothing wrong with it saar
Anonymous No.8624666 [Report]
>>8624656
ez scripts. There's a couple configs posted last thread I think.
Anonymous No.8624675 [Report]
>>8624656
inorganic post
Anonymous No.8624714 [Report]
Anonymous No.8624720 [Report] >>8624722
In order to get the best possible lora, you'll have to use sdg and then autistically rebake until you get lr and training time just right
Anonymous No.8624722 [Report] >>8624724
>>8624720
>sdg
stochastic descent gradient?
Anonymous No.8624724 [Report]
>>8624722
it's french
Anonymous No.8624740 [Report]
Can someone tell me why, when using Regional Prompter, I have a prompt that works well, but whenever I erase a tag or two from one of the regions the image composition just breaks entirely and gives me anatomical horrors until I put those tags back?
Additionally, can anyone give me tips for regional prompter? I feel like the original repo itself is kinda shit at explaining things.
Anonymous No.8624764 [Report] >>8624788
Holy kek, anyone tried FreSca node in comfy? If you set scale_low to something between 0.7-0.8, it completely gets rid of fried colors on noob
Anonymous No.8624788 [Report] >>8624796
>>8624764
example?
Anonymous No.8624796 [Report]
>>8624788
Exact same seed, euler a, cfg 5
Anonymous No.8624797 [Report] >>8624804
cloudflare is down; the end is nigh
Anonymous No.8624804 [Report]
>>8624797
Huh, explains why half of the sites are down for me...
Anonymous No.8624806 [Report] >>8625035
https://files.catbox.moe/a19c3q.png
Anonymous No.8624841 [Report] >>8624851 >>8624852 >>8624863 >>8625020
it's starting to get good at epoch 8 i guess, this is a 22-step 1152x2048 base res gen
Anonymous No.8624842 [Report] >>8624859 >>8624868
First stuff i prompted; it's super vanilla but i kinda like it. Any ideas or suggestions for making better stuff?
Anonymous No.8624851 [Report] >>8624863
>>8624841
And... do we have to guess what model this is?
Anonymous No.8624852 [Report]
>>8624841
config?
Anonymous No.8624859 [Report]
>>8624842
1toehoe, 1boy, dark skin, very dark skin, huge penis, large penis, sagging testicles, veiny penis, penis over eyes, squatting, spread legs
Anonymous No.8624863 [Report] >>8625205 >>8625207 >>8625229
>>8624841
noob vpred 1.0 for comparison
>>8624851
see >>8624232
Anonymous No.8624868 [Report] >>8624877
>>8624842
Just do what you want to do man, the entire point of making your own porn is that you can make your own porn.
Anonymous No.8624877 [Report]
>>8624868
nu-uh, the real point is putting increasingly larger penises inside toehoes
Anonymous No.8624892 [Report] >>8624895 >>8624925
I made a small userscript to save and retrieve prompts or artists combos on-the-fly.
Anonymous No.8624895 [Report] >>8624967
>>8624892
isn't that feature already built in a1111
Anonymous No.8624918 [Report] >>8624932 >>8625738
If this is the real non schizo thread, can anyone check >>8624866 and >>8624910 ?

It just doesn't seem right, im not using a lora or anything. Looks especially suspicious when furry models are doing better.
https://files.catbox.moe/ircmls.png
Anonymous No.8624925 [Report] >>8624967
>>8624892
What do you mean? What about this is different than what infinite image browsing can do?
Anonymous No.8624932 [Report] >>8624947 >>8624976
>>8624918
Is this how it's supposed to look?
Anonymous No.8624947 [Report] >>8624951
>>8624932
he asked about noob vpred, not 102d shitmix
Anonymous No.8624951 [Report] >>8624965
>>8624947
What newfag is genning on noob vpred without a lora? He should just pick up the shitmix if he's mad.
Anonymous No.8624965 [Report] >>8624968
>>8624951
he wants to check if his setup is correct retard
>duty calls
Anonymous No.8624967 [Report] >>8624980
>>8624895
Nope, otherwise I wouldn't have made it.

>>8624925
Just a fast way to store prompts, or any text you want really, and retrieve them. They're always there, ready for when you need them.
Anonymous No.8624968 [Report]
>>8624965
>he should use 102d
>he wants to check if his setup is correct
How are these things mutually exclusive you brainlet? Good luck trying to get anyone to help you though.
Anonymous No.8624970 [Report]
>tfw the power of generalization means you can use the \(cosplay\) token with any character the model knows and it kind of works, even if the real character_(cosplay) tag doesn't exist or has a low number of samples
>you can also create tons of pokemon cosplay with this and pokemon_ears, pokemon_bodysuit kinds of tags
Anonymous No.8624976 [Report] >>8624985 >>8624999
>>8624932
That looks way, way better. Is vpred just unusable without loras, is that the meme? I legitimately don't know, anon, im pretty new, that's why I prefaced it like that.
Anonymous No.8624980 [Report] >>8624987
>>8624967
Oh okay, thanks for the demo. How bloated does this get when you have lots of artist combos and such? I like infinite image browsing because I can just search for a pic in my folder and copy the metadata easily.
Anonymous No.8624985 [Report] >>8624999 >>8625008 >>8625015 >>8625042
>>8624976
The point of my post was to say: don't use negatives, and use the negpip extension for things you really need; then to point out that new people shouldn't be using base noob since it's hard to use. 102d is far simpler, and if that's what those artists actually look like then you're better off using 102d. Of course this simple logic attracts console war retards, unfortunately.
Anonymous No.8624987 [Report]
>>8624980
Well you'd have to scroll through a list of all the prompts you saved, but I don't plan on saving every single prompt.

You just gave me a really good idea though: a search bar.
Anonymous No.8624993 [Report]
>>8624436
don't use them :3
Anonymous No.8624999 [Report] >>8625002
>>8624985
>Of course this simple logic attracts console war retards, unfortunately
now this is a strawman; you just gave him a gen which is mostly unrelated to his request without much of an explanation, then doubled down on not doing what he asked when questioned directly. pointing this out doesn't make anyone a console war retard.
>>8624976
>Is vpred just unusable without loras, that's the meme?
it is ughh usable but like the other anon said it requires some wrangling
Anonymous No.8625002 [Report] >>8625006
>>8624999
No, it's not a strawman, just your intense desire to start a fight where there was none. You are the one trying to defend yourself since you butted in and fucked up, an unforced error.
Anonymous No.8625005 [Report] >>8625013
for me it's um hmmm not genning
Anonymous No.8625006 [Report]
>>8625002
>duty calls
Anonymous No.8625008 [Report] >>8625011
>>8624985
>negpip
Interesting. What else do the proompters on the cutting edge use these days?
Anonymous No.8625011 [Report]
>>8625008
I like using cd tuner with saturation2 set to 1 on my img2img pass for color corrections.
Anonymous No.8625013 [Report] >>8625017
>>8625005
i will gen when i am good and ready.
Anonymous No.8625015 [Report] >>8625206
>>8624985
I'm sorry, I didn't realize there was metadata in the picture, so that all went over my head. Also, what makes base noob hard to use?

desu I followed the prompt format and sampler thing on the page so i figured it would be fine; this issue wasn't present with eps when I tried that, so I really figured something was broken.

I'd still like someone to use the catbox and generate that same image on vpred just to see if it's fucked or not
Anonymous No.8625017 [Report] >>8625036
>>8625013
desu half the time i can be bothered to gen nowadays, it's non-/h/ stuff
regular sexo is the most boring stuff to gen
Anonymous No.8625020 [Report] >>8625023 >>8625032
>>8624841
e9 >>8624259
Anonymous No.8625021 [Report]
>haven't pulled in ages, like literally since Flux came out
>pull
>try a card just to see if anything was messed up
>the gen comes out exactly the same
Sweet.
Anonymous No.8625023 [Report] >>8625025
>>8625020
are you training with cloud gpus or locally?
Anonymous No.8625025 [Report] >>8625029 >>8625032 >>8625229
>>8625023
locally on a 3090
Anonymous No.8625029 [Report] >>8625037
>>8625025
tempted to train on high res now myself. gonna prep some in my new dataset
Anonymous No.8625032 [Report] >>8625037
>>8625020
>>8625025
what artist/s are these?
Anonymous No.8625035 [Report]
>>8624806
the girl looks like fate testarossa
Anonymous No.8625036 [Report]
>>8625017
>regular sexo is the most boring stuff to gen
that's why I gen girls kissing girls
Anonymous No.8625037 [Report] >>8625044 >>8625045 >>8625052
pic uses zero negatives, but for some reason the style wildly varies between seeds; also there are no tponynai images in the dataset, i swear
>>8625029
gonna be tough without a second gpu to test things on
>>8625032
>what artist/s are these?
doesn't matter because they aren't recognized lol, like i said for some reason the style changes a lot if you change the seed
Anonymous No.8625038 [Report]
So is there any news about whatever the fuck the NoobAI guys are doing or is everyone still stuck using base V-pred/EPS model and shitmerges? I tried the v29 shitmerge but don't fuck with it much.
Anonymous No.8625042 [Report]
>>8624985
I just realized my workflow already has negpip and I just never took advantage of it because I stole it and didn't bother looking into what everything did kek.
Anonymous No.8625044 [Report] >>8625052
>>8625037
>doesn't matter because they aren't recognized
why hide the artist name? maybe i just want to see their original work
Anonymous No.8625045 [Report]
>>8625037
>gonna be tough without a second gpu to test things on
luckily I got 2
Anonymous No.8625052 [Report] >>8625054
>>8625037
I think it may have started overfitting on the train set... Regardless, I'll try to extract a 1536x lora, maybe it'll be useful for upscaling.
>>8625044
if you really want it then it's gishiki_(gshk) and for the second one it's arsenixc, void_0 plus a bunch of lewd artists who don't draw bgs

>mfw base res gen gives this: file.png: File too large (file: 4.11 MB, max: 4 MB).
Anonymous No.8625054 [Report]
>>8625052
thanks bwo
also are you the lion optimizer anon from pony days? i noticed you posted a sparkle picture
Anonymous No.8625058 [Report] >>8625060 >>8625064
Just tried out negpip. The example with the gothic dress really works. With that said, when I tried converting my negative prompt from a real world complex gen I had, the outputs were worse and adhered to the prompt less. Maybe the weights need to be adjusted. Will experiment more.
Anonymous No.8625059 [Report] >>8625063 >>8625067 >>8625070
you may be shocked if you learnt of all of my identities...
Anonymous No.8625060 [Report] >>8625071
>>8625058
The last time some anon tried to sell people on negpip, it failed miserably. If I were you I wouldn't bother with it; just keep your regular negatives to a minimum.
Anonymous No.8625063 [Report] >>8625074 >>8625099
>>8625059
It's good that you've at least toned down the bullshit elitist shitposting from pony days.
Anonymous No.8625064 [Report] >>8625071
>>8625058
The point is that you are not supposed to be using any negatives at all and negpip only the specific things that you don't want to show up but are "embedded" into other tags.
Anonymous No.8625067 [Report] >>8625074
>>8625059
g-force anon...
Anonymous No.8625069 [Report]
>>8624319
Increiblemente Basado.
Anonymous No.8625070 [Report] >>8625074
>>8625059
plot twist: you're gay.
Anonymous No.8625071 [Report] >>8625073
>>8625060
I mean, given that it can subtract concepts that were previously impossible to subtract, it would seem like it has potential, but there may be a learning curve to using it for highly complicated prompts when you are used to the negative prompt.

>>8625064
Yeah, that's what I used negatives for, so now I am testing moving the negative to the positive with negative weight, as instructed. My negative has both quality tags and specific things I was trying to subtract, i.e. latex and shiny clothes from bodysuit (in the positive).

If you have a problem with the idea of using negative quality tags: I do still use them because some of the artists I use are (likely) associated with their old art, which looks bad, and my AB testing shows those tags have a clearly good effect on the model and prompts I use.
Anonymous No.8625073 [Report]
>>8625071
I'm not sure how similar negpip and NAI's negative emphasis are, but for the latter, taking my normal negs and putting them in the prompt with negative weight just causes a mess, though it works very well for removing things in a targeted way.
Anonymous No.8625074 [Report] >>8625075 >>8625077
>>8625063
well, you never know
>>8625067
>>8625070
i'm surprised no one connected at least 2-3 of my identities (out of maybe 10), actually. even though i've been called names multiple times.
Anonymous No.8625075 [Report]
>>8625074
give us a hint schizonon. what do the numbers mean?
Anonymous No.8625077 [Report]
>>8625074
shut up birdschizo/momoura
Anonymous No.8625080 [Report]
god bless anonymity
Anonymous No.8625097 [Report] >>8625100 >>8625106
and god bless nai
Anonymous No.8625099 [Report]
>>8625063
Elitist was (is) me. I just don't bother with your shit general(s) anymore, swim in your diarrhea yourselves...
Anonymous No.8625100 [Report]
>>8625097
>and god bless nai
This tells about you more than it does about me, do you realize that?
Anonymous No.8625102 [Report] >>8625103
oh sorry please dont spam the big lips character again...
Anonymous No.8625103 [Report]
>>8625102
heh heh...
Anonymous No.8625106 [Report]
>>8625097
Anonymous No.8625107 [Report]
Ma'am, I believe /hdg/ is what you were looking for. This is /hgg/.
Anonymous No.8625118 [Report] >>8625340
e10
>4.79 MB
Why
Anonymous No.8625126 [Report] >>8625206
post gens
Anonymous No.8625127 [Report]
Anonymous No.8625129 [Report]
haven't been doing much nsfw lately
Anonymous No.8625131 [Report] >>8625134
>>8624605
catbox? like the style here
Anonymous No.8625133 [Report]
Anonymous No.8625134 [Report] >>8625135
>>8625131
It's teruya (6w6y)
Anonymous No.8625135 [Report]
>>8625134
thank you anon
Anonymous No.8625138 [Report]
After genning a ton, I feel like my perspective on traditional art has changed. Now whenever I look at most art, I can't help but notice how shitty it is, how off the proportions are, how inconsistent a ton of artists are, while I've become more appreciative of the artists that have better standards.
Anonymous No.8625182 [Report]
more like lora baking general
Anonymous No.8625183 [Report] >>8625186
why are we thriving, bros?
Anonymous No.8625186 [Report]
>>8625183
Shitposters see this place as high effort, low reward.
Anonymous No.8625200 [Report] >>8625210
Anonymous No.8625201 [Report] >>8625208 >>8625457
Anonymous No.8625205 [Report] >>8625338
>>8624863
are you planning on uploading it or will it be a private finetune?
Anonymous No.8625206 [Report]
>>8625015
>>8625126
box please?
Anonymous No.8625207 [Report] >>8625229
>>8624863
I'd rather just have your tips on how to finetune. Learning to fish and all that.
Anonymous No.8625208 [Report]
>>8625201
box?
Anonymous No.8625210 [Report]
>>8625200
box?? :3
Anonymous No.8625216 [Report]
>boooooox?
nyo!
Anonymous No.8625223 [Report] >>8625265 >>8625333
>>8624393
bwo i've been training on 1536; found that finer details and textures are replicated better (empirically) with fewer artifacts, and i'd need fewer face crops to get better looking eyes, for example.
also noted that it led to the losses converging in tighter groups & at lower minima. i have not tested training on noob, but so far i do find it beneficial when training on base illu0.1.
>>8624403
Like >>8624436 said, it could be placebo, but I do that to reduce the effect of the quality tags on the style.
>[masterpiece, best quality::0.6]
>Out of curiousity, what's this for?
this will apply the quality tags for the first 60% of the steps only.
>Do you think those tags negatively impact finer details?
quality tags tend to be biased towards a certain style and might detract from the style you might be going for. i.e. scratchier lines of a style you are using might become smoother due to quality tags.
0.6 is just an arbitrary value that i selected to 'give the image good enough quality' before letting the other style tags / loras have 'more effect' (honestly the effect is quite minor - see picrel, outlines slightly more emphasized with quality tags)
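The `[tags::0.6]` scheduling described above can be sketched in a few lines. This is only an illustrative approximation of A1111-style prompt editing; the exact rounding in the webui may differ, and `active_prompt` is a made-up name:

```python
def active_prompt(step, total_steps, scheduled="masterpiece, best quality", until=0.6):
    # Sketch of [tags::0.6]-style prompt editing: the scheduled tags are
    # only present while step < until * total_steps. The rounding here is
    # an assumption, not the exact webui implementation.
    cutoff = int(until * total_steps)
    return scheduled if step < cutoff else ""

# with 28 sampling steps, int(0.6 * 28) = 16, so the quality tags
# apply for steps 0..15 and are dropped afterwards
```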
Anonymous No.8625229 [Report] >>8625338
>>8624863
seconding >>8625207, i'm interested to know how you are going about your finetuning; i've got some questions too
1) do you have a custom training script or are you using an existing one?
2) what training config do you have set up for your finetuning, and are there any particular factors that made you choose those hyperparameters?
3) in terms of data preparation, is the prep for finetuning different from training loras? do you do anything special with the dataset?
4) i too am using a 3090 >>8625025, how much vram usage are you running at when performing a finetune at your current batch size?
Anonymous No.8625265 [Report]
>>8625223
I train on noob, but the other anon was also recommending training at higher res, so I'll give it a go
Anonymous No.8625273 [Report] >>8625279 >>8625460
Anonymous No.8625279 [Report] >>8625281
>>8625273
>ork
Anonymous No.8625281 [Report] >>8625305
>>8625279
I am an orc
Anonymous No.8625305 [Report]
>>8625281
still not an excuse for ruining what could have otherwise been a good gen
Anonymous No.8625306 [Report]
i came here for 'da ork cantent.
Anonymous No.8625331 [Report] >>8625341
Has anyone experimented with putting negpip stuff in the negative prompt? What happens if you do that?
Anonymous No.8625333 [Report]
>>8625223
Thanks for explaining and for the comparison image, and don't worry, as an aficionado of fine snake oils I can appreciate the finer methods that are sometimes hard to see. I've been doing something similar, scheduling artists late into the upscale for finer details like blush lines; prompt scheduling is a great tool.
Anonymous No.8625338 [Report] >>8625352 >>8625359 >>8625364 >>8625373
e12
>>8625205
I'll upload base 1024x checkpoint, a 1536x checkpoint and a lora extract between the two. I'll also probably upload a merge of the last two epochs if it turns out to be good.
>>8625229
>do you have a custom training script or are you using an existing one?
I'm using a modified naifu script
>what is the training config you have setup for your finetuning, and is there any particular factors that made you consider those hyperparameters?
Full bf16, AdamW4bit + bf16_sr, bs=12 lr=5e-6 for 1024x, bs=4*3 lr=7e-6 for 1536x, 15 epochs, cosine schedule with warmup, pretrained edm2 weights; captions are shuffled with 0.6 probability while keeping the first token (for artists), and replaced with zeros with 0.1 probability (for cfg). I settled on these empirically.
>in terms of data preparation, is the prep for finetuning different from training loras? do you do anything special with the dataset?
Yes and no. You should tag what you see and give it enough room for contrastive learning in general. Obviously no contradicting shit should be present. Multi-level dropout rules like those described in the illustrious 0.1 tech report will also help with short prompts, but a good implementation would require a more complicated processing pipeline, so I'm not using it.
>how much vram usage are you running at when performing a finetune at your current batch size?
23.0 gb at batch size 4 with gradient accumulation.
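The caption handling in the config above (shuffle with 0.6 probability while keeping the first token, zero the caption with 0.1 probability to train the uncond) can be sketched roughly like this; the function name and structure are mine, not naifu's:

```python
import random

def augment_caption(tags, shuffle_p=0.6, drop_p=0.1, rng=random):
    # Zero out the whole caption with probability drop_p so the model
    # learns an unconditional prediction for CFG.
    if rng.random() < drop_p:
        return []
    # Otherwise shuffle with probability shuffle_p, always keeping the
    # first token in place (the artist tag).
    if rng.random() < shuffle_p:
        head, rest = tags[0], tags[1:]
        rng.shuffle(rest)
        return [head] + rest
    return list(tags)
```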
Anonymous No.8625340 [Report]
>>8625118
same seed
Anonymous No.8625341 [Report]
>>8625331
I'm testing it right now and it feels like it does have some use. You can't add or subtract large things from an image this way, but you can nudge, mostly colors, without affecting composition or other things in the image, whereas, for instance, if you prompted "red theme" in the positive like normal, it might turn a forest autumn or something. Doing negpip in the negative prompt instead makes it look like the original gen with a more red tint to it.

This makes sense as the negative and positive prompts do not pay attention to each other's context.

I was also able to make the sky more clearly visible through the leaves in a forest gen, while not altering the composition of the image much. So I think this is what it (negpip in the neg) could be useful for. Nudges to existing gens without changing composition or subject matter, which might happen in pure positive prompting.
Anonymous No.8625345 [Report] >>8625362 >>8625546
>>8624386 (OP)
this isn't ai. Artist name?
Anonymous No.8625352 [Report] >>8625381
>>8625338
thanks for sharing!
i still have a couple of (ml noob) questions that i'd like to ask if you don't mind...
>I'm using a modified naifu script
was any part of naifu lacking in a way that made you modify it? or was there a custom feature you required specific to the finetuning?
>captions are replaced with zeros with 0.1 probability (for cfg)
would you care to explain why captions are replaced with zeros for cfg? what impact does this have on cfg, is it for the color blowout?
>bs=12 lr=5e-6 for 1024x, bs=4*3
>batch size 4 with gradient accumulation
i saw that your target batch size is 12 (GA (3) * BS (4))
is there any hard and fast rule as to how large a batch should be when training a diffusion model? i noted that many models are baked with a high bs (>100), e.g. illustrious 0.1 was baked with a bs of 192. should batch size be scaled relative to the size of the training dataset?
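On the GA * BS arithmetic in the question (3 * 4 = 12): averaging the gradients of equal-sized micro-batches gives the same gradient as one large batch, which is why accumulation counts toward effective batch size. A toy sketch with a scalar MSE model (all names are mine):

```python
def grad_mse(w, batch):
    # Gradient of mean((w*x - y)^2) with respect to w over one batch.
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def accumulated_grad(w, data, micro_bs):
    # Gradient accumulation: average the gradients of equal-sized
    # micro-batches; this equals the gradient of one batch of len(data).
    micro = [data[i:i + micro_bs] for i in range(0, len(data), micro_bs)]
    return sum(grad_mse(w, m) for m in micro) / len(micro)

data = [(1.0, 2.0), (2.0, 1.0), (3.0, 0.0), (0.5, 1.0)] * 3  # 12 samples
# three accumulation steps of batch size 4 == one batch of 12
```

Note the equivalence only holds exactly for losses that are means over the batch and for equal micro-batch sizes; optimizers with per-step state (momentum, EMA) see one combined step either way.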
Anonymous No.8625359 [Report] >>8625363 >>8625381
>>8625338
have you tried data augmentation like flips and color shifts?
Anonymous No.8625362 [Report]
>>8625345
>this isn't ai
???
Anonymous No.8625363 [Report]
>>8625359
flip aug is not only bad it's actively harmful to training. it unnecessarily uses your parameters and fucks everything up since it's, more or less, forcing training to do something twice that it's already effectively doing without you telling it to.
have a paper
https://arxiv.org/abs/2304.02628
Anonymous No.8625364 [Report] >>8625381
>>8625338
>pretrained edm2 weights
Huh, you can reuse edm2 between different runs?
Anonymous No.8625373 [Report] >>8625381
>>8625338
>pretrained edm2 weights
could you share those? I already have some, but wouldn't hurt to see if I could be training with better ones
Anonymous No.8625381 [Report] >>8625383 >>8625393 >>8625396
>>8625352
>or was there a custom feature that you required specific to the finetuning?
Mostly this; I've been using naifu since sd-scripts sucks too much
>would you care to explain why the approach where captions are replaced with zeros is used for cfg?
it's used to train uncond for the main cfg equation which is
>guidance = uncond + guidance_scale * (cond - uncond)
cfg will work regardless, but it will work better (for guiding purposes) if you train uncond. in general you shouldn't drop captions on small lora-style datasets.
>is it for the color blow out?
it has absolutely nothing to do with it
>is there any hard and fast rule as to how large a batch should be when training a diffusion model?
no, but if you are training clip you would never want a batch size < 500, since clip is trained with a batch-wise contrastive loss. large batch sizes help the model avoid catastrophic forgetting from unstable gradients, and since sdxl is such a deep model you basically never enter a local minimum because there is always a dimension to improve along, as long as your lr is sufficiently high.
however, if you are relying first on gradient checkpointing, then on gradient accumulation to achieve larger batches, very large batches may quickly become very expensive compute-wise.
>>8625359
don't you realize this is harmful, especially if you want to train asymmetrical features
>>8625364
of course; you wouldn't want to start from scratch every time, would you?
>>8625373
i'll upload the weights next to checkpoint then
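The CFG equation quoted in the post is just an extrapolation past the conditional prediction; a minimal sketch over a toy noise vector (the function name is mine):

```python
def cfg(uncond, cond, scale):
    # guidance = uncond + scale * (cond - uncond), applied elementwise
    # to the predicted noise. Training the uncond (via zeroed captions)
    # is what makes the uncond term meaningful.
    return [u + scale * (c - u) for u, c in zip(uncond, cond)]

# scale = 1 recovers the conditional prediction, scale = 0 the
# unconditional one; scale > 1 extrapolates toward the condition
```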
Anonymous No.8625383 [Report] >>8625776
>>8625381
>i'll upload the weights next to checkpoint then
did you share before? I missed that if you did
Anonymous No.8625393 [Report] >>8625776
>>8625381
nta but what did you modify in naifu?
Anonymous No.8625396 [Report] >>8625406 >>8625776
>>8625381
Just curious - how do you extract locons from full_bf16-trained checkpoints?
Tried the ratio method from the rentry, and for some reason it gives me huge extracts, like 2-2.5GB. Perhaps it has something to do with weight decomposition after training in full_bf16 mode.
The ratio works fine for checkpoints trained in full_fp16, but I didn't manage to get good results from fp16 trains...
Anonymous No.8625399 [Report] >>8625416 >>8625465
Anonymous No.8625406 [Report] >>8625407
>>8625396
nta, I use 64 dimensions and 24 conv dimension (for locon) and that gives me 430mb and 350 mb if you don't train the te
Anonymous No.8625407 [Report]
>>8625406
Yeah, fixed works just fine. I was curious about the ratio one.
Anonymous No.8625413 [Report] >>8625453 >>8625461 >>8625467 >>8625476 >>8625480 >>8625489 >>8625521 >>8625549 >>8625733 >>8625763 >>8625977 >>8625991 >>8626576
retrained the modded vae and now it is actually kinda usable, unlike the garbage before: https://pixeldrain.com/l/FpB4R8sa
though i think it still needs more training for use in anything large scale
also updated the node: https://pixeldrain.com/l/9AS19nrf
a comparison of 1 (one) image enc+dec test, though this is not fair as the modded vae has a much larger latent space (for the same res) compared to the base sdxl vae: https://slow.pics/s/5Kc8RkPa
the practical effect of it is basically that you don't have to damage the images by upscaling to 2048 to get the equivalent quality level of a 16ch vae
i tried training a lora with it and it was slow as balls, like 3-4x
if someone wants to give it a try training, i can walk you through how to modify sd-scripts (it's just applying the vae modification at one point)
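Assuming the mod removes one 2x downscale stage (an assumption on my part; a later reply suggests a downscaling layer was removed), SDXL's 8x spatial compression becomes 4x, so the latent grid quadruples at fixed resolution, which lines up with the reported 3-4x training slowdown:

```python
def latent_tokens(h, w, downscale):
    # Number of latent positions for a VAE with the given spatial
    # downscale factor (SDXL's stock VAE uses 8).
    return (h // downscale) * (w // downscale)

base   = latent_tokens(1024, 1024, 8)  # stock SDXL: 128x128 latent
modded = latent_tokens(1024, 1024, 4)  # assumed modded VAE: 256x256 latent
# modded / base == 4, so the unet works on 4x the latent area
```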
Anonymous No.8625416 [Report]
>>8625399
is that zhao?
Anonymous No.8625453 [Report] >>8625458
>>8625413
i'm assuming that node is how you load the vae? so i wouldn't be able to use this vae on reforge?
Anonymous No.8625457 [Report]
>>8625201
good looking titties,, I want to squeeze and kiss and suck on them
Anonymous No.8625458 [Report]
>>8625453
you can load it, but it will still have the downscaling, making it incompatible with the trained weights; someone would have to modify forge to support it, yeah
but ultimately you need to train sdxl with the vae at the higher latent resolution, or you will get the same body horrors as when you try to raise genning res too high
Anonymous No.8625460 [Report]
>>8625273
nice
Anonymous No.8625461 [Report] >>8625478
>>8625413
What makes this VAE different and why does it need its own node?
Anonymous No.8625465 [Report]
>>8625399
What model is this?
Anonymous No.8625467 [Report]
>>8625413
>needs its own node
Does it work on reforge? I'm getting errors in the console but I do see a difference. Might just be placebo.
Anonymous No.8625476 [Report] >>8625482 >>8625488
>>8625413
comparison looks nice, almost too good
Anonymous No.8625478 [Report] >>8625483
>>8625461
nta, but I think he said that he removed some downscaling layer which in theory, if the vae is trained enough to adapt, would lead to sharper outputs.
Anonymous No.8625480 [Report] >>8625488
>>8625413
gotta ask, but wouldn't increasing the latent space require a full retrain of sdxl?
Anonymous No.8625482 [Report]
>>8625476
>almost too good
too good to be true
Anonymous No.8625483 [Report]
>>8625478
I see, cool
Anonymous No.8625488 [Report] >>8625499
>>8625476
the latent the decoder can work with is much larger, which also leads to much larger training costs, but better performance
>>8625480
this is basically training at a much higher res rather than changing the entire dimensionality. it won't require a full retrain, but it will require training for sdxl to learn to work with bigger latent sizes (very similar to if you wanted to train it to not shit itself at generating 2048x2048 images)
the training and generation are also gonna be slower (though peak vram usage at the vae decoder output should be the same), but you can downscale the images and train at 512x512 if you want standard sdxl compression
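To put numbers on the "bigger latent sizes" point: halving the VAE's spatial compression quadruples the latent area at the same pixel resolution. A quick sketch of the arithmetic (plain Python, not tied to any specific implementation; 4-channel latents assumed, as in stock SDXL):

```python
def latent_shape(width, height, compression=8, channels=4):
    """Spatial shape of an SDXL-style latent for a given pixel resolution."""
    return (channels, height // compression, width // compression)

# stock SDXL VAE: 8x spatial compression
stock = latent_shape(1024, 1024)                    # (4, 128, 128)

# modded VAE with one downsample removed: 4x compression,
# so a 1024px gen occupies the latent area of a stock 2048px gen,
# i.e. 4x the latent elements per image to train on
modded = latent_shape(1024, 1024, compression=4)    # (4, 256, 256)
```

This is why the anon above says it behaves like training at higher res without changing the latent dimensionality itself.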
Anonymous No.8625489 [Report] >>8625495
>>8625413
How many epochs and what's the training set?
It looks much better than sdxl one for sure, but still a bit blurry, especially in background details.
Anonymous No.8625495 [Report]
>>8625489
10 epochs for encoder and 3 for decoder with 3k images, the adaptation is very fast
it will probably still improve in terms of adapting but still there is a limit to what can be done in terms of small details, the improvement is not really from the training, but rather from the larger encoded latents
Anonymous No.8625499 [Report] >>8625500 >>8625517
>>8625488
so you basically doubled the internal resolution
Anonymous No.8625500 [Report]
>>8625499
yes exactly
Anonymous No.8625517 [Report]
>>8625499
thing is, Cascade could do this on the fly and it didn't work well. You still had to gen small then upscale, and manually adjust the compression level to match what you were doing. Low compression would break anatomy and high compression would kill details.

I think this may just be taking advantage of the fact that illustrious/noob are unusually stable at higher resolutions, compared to other SDXL models.
Anonymous No.8625521 [Report] >>8625535
>>8625413
Is this just for trainers? I tried just replacing my vae and gens are coming out at half the resolution, so I doubled the latent dimensions but then the image becomes incoherent.
Anonymous No.8625525 [Report]
is finetuneschizo here? anything i can do in kohya for better hands?
Anonymous No.8625535 [Report]
>>8625521
yes, the body horrors stopped after i trained a lora with it on illust 0.1, but it would require a larger tune to truly settle in
comfy has hardcoded 8x compression for emptylatentimage so yeah you gotta put in 2x
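Since EmptyLatentImage hardcodes an 8x divide, the "put in 2x" workaround amounts to scaling the size you type in so the resulting latent matches what the 4x-compression VAE expects. A minimal sketch of that arithmetic (the node behavior is taken from the post above; the function name is mine):

```python
def emptylatent_input(target_px, vae_compression=4, node_compression=8):
    """Width/height to type into EmptyLatentImage for a target pixel size.

    EmptyLatentImage always computes size // 8, so we scale the input
    so that size // 8 equals target_px // vae_compression.
    """
    return target_px * node_compression // vae_compression

print(emptylatent_input(1024))  # 2048, i.e. "put in 2x"
```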
Anonymous No.8625546 [Report] >>8625548
>>8625345
>this isn't ai
kekmao
>Artist name?
Me, your favorite slopper
Anonymous No.8625548 [Report] >>8625599
>>8625546
this isn't hentai. Cum splotch?
Anonymous No.8625549 [Report] >>8625562
>>8625413
I tested genning with this and I feel like it's a bit muddier in how it renders textures compared to noob's normal vae (I guess that's just standard sdxl vae?), though it does seem slightly sharper for linework. And both feel slightly worse than lpips_avgpool_e4.safetensors.
Anonymous No.8625562 [Report] >>8625568
>>8625549
>lpips_avgpool_e4.safetensors
Huh, link?
Anonymous No.8625564 [Report] >>8625569
So in the end it's all just more snake oil...
Anonymous No.8625568 [Report] >>8625598
>>8625562
https://archived.moe/h/search/text/lpips_avgpool_e4/
Anonymous No.8625569 [Report]
>>8625564
Snake oil does nothing. This clearly does something, just not sure if it's better or worse.
Anonymous No.8625598 [Report] >>8625603 >>8625884
>>8625568
It cost more effort to post this link than it would've taken to just point the guy at the pixeldrain
Anonymous No.8625599 [Report]
>>8625548
cum filled pocky
Anonymous No.8625603 [Report] >>8625609
>>8625598
give a guy a fish, he'll eat for a day
Anonymous No.8625609 [Report]
>>8625603
give a guy a subscription to a fish delivery service and he'll eat forever
Anonymous No.8625629 [Report]
good reflections are hard
Anonymous No.8625700 [Report] >>8625701
black pill me on https://www.youtube.com/watch?v=XEjLoHdbVeE&list=RDXEjLoHdbVeE&start_radio=1
Anonymous No.8625701 [Report] >>8625706
>>8625700
just fine-tune bro
Anonymous No.8625706 [Report]
>>8625701
I did, but finetuneschizo disappeared and I'm out of parameters to mess with
Anonymous No.8625712 [Report] >>8625715 >>8625719
>civitai wants to further censor their dogshit site and they do it by le HAHA YOU ARE DOING GOOD AMBASSADOR,LE HECKIN POWER FOR YOU
goddamn faggots, why are they doing it?
the site is also so fucking dogshit you can't even search by certain filters
Anonymous No.8625715 [Report] >>8625716 >>8625731
>>8625712
Chub has gone this route as well for textgen. No one ever imagined that the cyberpunk dystopia would be a sexless normiefilled existence.
Anonymous No.8625716 [Report]
>>8625715
>Chub has gone this route as well for textgen.
And it is pretty dead now.
Anonymous No.8625719 [Report]
>>8625712
Is it really a mystery? Go look at their financial breakdown for last year, particularly the wages. I'd be sucking mad cock too if my ability to pay myself that much was threatened
Anonymous No.8625729 [Report] >>8625734
Which ControlNet model should I use for a rough MS Paint sketch/scribble?
Anonymous No.8625731 [Report]
>>8625715
What's this?
https://chub.ai/characters/anonaugusproductions/lola-and-lily
Anonymous No.8625733 [Report]
>>8625413
excellent. now do a finetune of flux's vae and send it to lumina's team
Anonymous No.8625734 [Report] >>8625735
>>8625729
I used this one for something like that and pretty much everything else
>https://huggingface.co/xinsir/controlnet-union-sdxl-1.0/blob/main/diffusion_pytorch_model_promax.safetensors
Anonymous No.8625735 [Report] >>8625737
>>8625734
Thank you, I already tried this one but my results weren't great so far. Do you have a short guide with settings for it by any chance?
Anonymous No.8625737 [Report] >>8625743 >>8625763
>>8625735
This should be more than enough, play around with the control weight if you are not getting what you want
Anonymous No.8625738 [Report]
>>8624918
naiXLVpred102d_custom is king
Anonymous No.8625743 [Report]
>>8625737
Works great thanks a lot! :D
Anonymous No.8625744 [Report] >>8625769 >>8625842 >>8626404
i hate belly buttons and nipples
Anonymous No.8625763 [Report] >>8625778
>>8625413
Is this vae only training? it significantly adds more details on my gens but also little white dots every now and then
Raw upscaled gens with my usual settings
>https://files.catbox.moe/9o9zxd.png
>https://files.catbox.moe/1gm2zp.png

>https://files.catbox.moe/tkbx2r.png
>https://files.catbox.moe/4xhfzv.png

>https://files.catbox.moe/tktube.png
>https://files.catbox.moe/ukc1zv.png

>>8625737
you are welcome
Anonymous No.8625769 [Report] >>8625773
>>8625744
why did you prompt for them then?
Anonymous No.8625773 [Report]
>>8625769
i am stupid
Anonymous No.8625776 [Report] >>8625821 >>8625829 >>8625882 >>8625893 >>8625966 >>8626707
1152x2048, 22 steps, euler
1536 ft / 1024 ft+1536 extract / noob v1+1536 extract / 102d+1536 extract / 1024ft / noob v1 / 102d
>>8625383
https://huggingface.co/nblight/noob-ft
>>8625393
nothing you want to concern yourself with since it's mostly experimental stuff "except" edm2
>>8625396
>ratio method
idk, i've never used it
Anonymous No.8625778 [Report] >>8625783
>>8625763
Are you getting errors in the console too? I find it adds more details but makes things a bit blurry.
Anonymous No.8625783 [Report]
>>8625778
No errors or whatsoever when I load it
>I find it adds more details but makes things a bit blurry.
Same here
Anonymous No.8625821 [Report]
>>8625776
thank you for sharing this! indeed hires is much more stable even with the loras I usually use
Anonymous No.8625829 [Report] >>8625834
>>8625776
what kind of settings should I be using to gen with this model?
Anonymous No.8625834 [Report] >>8625844
>>8625829
nothing should be *too* different from your regular noob vpred except you can generate at 1536x1536 right away
Anonymous No.8625842 [Report]
>>8625744
seems like an upscaling issue
Anonymous No.8625844 [Report] >>8625850
>>8625834
I am not getting anything like I usually do so I'll do some tests on it
Anonymous No.8625850 [Report] >>8625899 >>8625915
>>8625844
you'll have to post a catbox
Anonymous No.8625882 [Report] >>8625899
>>8625776
What are your EDM2 training settings? From the weights, I assume you use 256 channels? Good ol’ AdamW?
Anonymous No.8625884 [Report]
>>8625598
It did not. In the first place that is how I found it myself. I just copy and pasted the url of the page I was on, I didn't even check if the pixeldrain link was valid.
Anonymous No.8625893 [Report] >>8625899
>>8625776
Does this just not work on reForge? OOTL
Anonymous No.8625899 [Report] >>8625931
>>8625882
Yes
>>8625893
See >>8625850
Anonymous No.8625909 [Report] >>8625911
is there a way to control character insets reliably with lora/tag? stuff like the shape of the border, background of the inset, forcing them to not be touching the edge of the canvas, whether it has a speech bubble tail.
Anonymous No.8625911 [Report]
>>8625909
not really, your best option as always is to doodle around and then inpaint to blend it into your gen
Anonymous No.8625915 [Report] >>8625925
>>8625850
i'm still testing things around but it's not looking good on my end, out of 6 style mixes so far, only 1 looks okay and that's because that style it's way too minimalist overall, pic and catbox not related
>https://files.catbox.moe/8mwdc4.png
Anonymous No.8625925 [Report] >>8625930 >>8625938 >>8625945 >>8625963
>>8625915
>sho \(sho lwlw\)
>ningen mame
These were not present in the dataset at all, so that's to be expected. Try using the lora extract on top of your favorite shitmix or even base noob (or you can even extract the difference and merge it into a model yourself), which should tamper with styles far less than genning on the actual trained checkpoint while still keeping 1536x res.
Anonymous No.8625930 [Report] >>8625963
>>8625925
>Try using the lora extract on top of your favorite shitmix or even base noob
Hmm alright, I'll do that
Anonymous No.8625931 [Report] >>8625951
>>8625899
I guess it kinda helps prevent anatomy melties but the it melts the styles
https://files.catbox.moe/fqvjgf.png
I'd rather just gacha it
Anonymous No.8625938 [Report] >>8625941
>>8625925
that cock? mine.
Anonymous No.8625941 [Report] >>8625944
>>8625938
It's all yours my friend.
Anonymous No.8625944 [Report]
>>8625941
kek'd
Anonymous No.8625945 [Report] >>8625951
>>8625925
>Try using the lora extract on top of your favorite shitmix or even base noob
Ok yeah that's definitely more doable
https://files.catbox.moe/iyld8p.png
Anonymous No.8625951 [Report] >>8625980
>>8625931
>>8625945
>1040x1520
You know that's way too small a resolution for a 1536x base res checkpoint, right? You won't see much of an effect and it may even look worse than it should (think genning at 768x768 on noob). Use a rule of thumb:
height = 1536 * 1536 / desired width
width = 1536 * 1536 / desired height
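That rule of thumb just holds the pixel area at 1536x1536, the checkpoint's native budget. A small helper for it (the rounding to multiples of 64 is my addition, since SDXL resolutions are conventionally kept divisible by 64):

```python
def dims_for_base(base=1536, width=None, height=None, multiple=64):
    """Pick the missing dimension so that width * height ~= base * base."""
    area = base * base
    if width is not None:
        # height = base^2 / desired width, snapped to a clean multiple
        return width, round(area / width / multiple) * multiple
    # width = base^2 / desired height
    return round(area / height / multiple) * multiple, height

print(dims_for_base(width=1152))  # (1152, 2048), the portrait res used upthread
```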
Anonymous No.8625954 [Report]
Anonymous No.8625963 [Report] >>8625976 >>8626138
>>8625925
>>8625930
Yeah using the lora extract on my beloved 102d custom is way better than using that model itself
The gens still need some inpaint here and there but genning on a higher resolution works very well
May I know what kind of black magic is this?
Anonymous No.8625966 [Report]
>>8625776
Combining this with kohya deepshrink seems to make "raw" genning at 2048x2048 reasonable anatomy wise
Anonymous No.8625976 [Report]
>>8625963
>May I know what kind of black magic is this?
copier lora effect. ztsnr plays some role 100%, would be interesting to see a comparison to illustrious at 1536x
Anonymous No.8625977 [Report] >>8625992
>>8625413
Looks really promising. Would appreciate you sharing the sd-scripts modifications if they're simple enough (or just a few pointers even) so I dont have to vibe code shit with claude.
Anonymous No.8625980 [Report] >>8625982 >>8625984
>>8625951
well it's even shitter at 1536 lol https://files.catbox.moe/f0uw71.png
>inb4
Anonymous No.8625982 [Report]
>>8625980
102d my beloved...
Anonymous No.8625984 [Report] >>8626011
>>8625980
wait shit i applied the lora and the model mea culpa
still though, agree with anon that it's better as the lora than the checkpoint
without the errant lora https://files.catbox.moe/tl4uzw.png
Anonymous No.8625991 [Report] >>8625999 >>8626000
>>8625413
Is that only usable with the Cumfy node for now? Minimal difference on forge. Also fucking hell, it really does make you think about how 90% improvement is hindered by people just not really knowing what they're doing if some random anon can bake this and have it work.
Anonymous No.8625992 [Report] >>8626001
>>8625977
it may not be the most elegant way but here: https://pastes.dev/8FLPusLmTg
also the weights are in diffusers format, so create a folder for the vae and rename the model to diffusion_pytorch_model.safetensors and put the config.json from sdxl vae in the folder you created https://huggingface.co/stabilityai/sdxl-vae/blob/main/config.json
load the vae with --vae the_folder_you_put_it_in
also if you have cache_latents_to_disk enabled and there are already cached latents in the folder, it wont check them and will use the old ones, so either delete the npz files in ur dataset folder or use just cache_latents
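If it helps, the folder layout described above (diffusers-format weights plus the sdxl-vae config.json) can be set up like this. The function name and paths are placeholders; the actual weights come from the pixeldrain link and the config from the stabilityai/sdxl-vae repo:

```python
import shutil
from pathlib import Path

def make_diffusers_vae_dir(weights_path, config_path, out_dir="modded_vae"):
    """Lay out a folder that sd-scripts can load with --vae out_dir.

    weights_path: the downloaded modded-vae safetensors (any filename)
    config_path:  config.json grabbed from the stabilityai/sdxl-vae repo
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    # diffusers only picks the weights up under this exact filename
    shutil.copy(weights_path, out / "diffusion_pytorch_model.safetensors")
    shutil.copy(config_path, out / "config.json")
    return out
```

Then pass the folder (not the file) on the command line, e.g. `--vae modded_vae`, and remember the cached-latents caveat above.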
Anonymous No.8625994 [Report]
uv me beloved
Anonymous No.8625999 [Report]
>>8625991
Ok, actually, how are you supposed to load that node? I don't get it.
Anonymous No.8626000 [Report] >>8626005
>>8625991
its just made as a demonstration for people that might be interested in training with it for now, the example isnt made by genning with it, but purely by encoding and then decoding an image
it is NOT a free lunch, it's just a way to upgrade sdxl without retraining it completely for a 16ch vae, but sdxl is still going to need someone to finetune it. it WILL use more vram and be slower during both training and genning, though less than if you were to gen at a high resolution (there is a HUGE amount of vram used during vae decoding depending on the final output res)
the encoded latents are flux level large and even less efficient
Anonymous No.8626001 [Report]
>>8625992
Thanks anon, my endless list of random shit to test grows.
Anonymous No.8626002 [Report] >>8626227
Anonymous No.8626005 [Report] >>8626017
>>8626000
>the encoded latents are flux level large and even less efficient
this is the problem right there, it's 4x more pixels to train, and you probably can't even do a proper unet finetune on consumer hardware
Anonymous No.8626008 [Report] >>8626012
Anonymous No.8626011 [Report] >>8626096
>>8625984
It's still quite strange that noob simply cannot handle 1girl, standing. Wtf happened?
Anonymous No.8626012 [Report] >>8626020
>>8626008
Umm, sweaty? Tentacles are /d/
Anonymous No.8626015 [Report]
>94 seconds
Anonymous No.8626017 [Report] >>8626100
>>8626005
i agree, though the training shouldn't be very extensive, since hopefully it should be """just""" getting sdxl used to genning at higher (latent) resolutions with the base knowledge already there
Anonymous No.8626018 [Report]
lil bro is fighting ghosts again...
Anonymous No.8626020 [Report]
>>8626012
mmmmyeah?
Anonymous No.8626022 [Report]
>download comfyui
>unexplained schizoshit
>delete comfyui
Anonymous No.8626034 [Report]
don't forget to make your reddit post about it bro
Anonymous No.8626054 [Report] >>8626067
I have a plan, but the overtime im currently working right now prevents me from doing it. it's intense and there's just too much on my plate physically until i can finally go back to my regularly scheduled shitposting, editing, ai generating, sauce making disaster of a life before that sudden train wrec-ah yeah im busy as hell for a few more days.

I do check the main boards for more of your images from time to time, as it really is something i enjoy collecting and looking at. so much text to sift through unfortunately.

i do have one request and i was hoping you could uh, maybe gen your miqo like as rebecca from that cyberpunk anime if you can? get the general outfit down for that, maybe that will inspire me to do more stuff once im done with the disaster going on in my life right now. i found rebecca's design to be quite nice.

saddened that i can't provide a pic, it feels wrong to not be able to share an image in your presence. I have lots of draws and stuff i "could" share but i am not confident enough in my skills or time available to me to be able to follow up on such things yet...................
Anonymous No.8626058 [Report] >>8626063
holy, someone really needs his meds
Anonymous No.8626063 [Report]
I tried warning you guys months ago. Moderation is very anti-ai, this is why they ban everything you like and keep everything you hate. /e/ has been hit by a blatant spambot for a while now with nothing done about it, those who report it get banned. >>8626058
Anonymous No.8626067 [Report] >>8626073
>>8626054
is this a copy pasta from treechan from the miqo thread.....
Anonymous No.8626073 [Report]
>>8626067
Ohhhhhhhh it is
Anonymous No.8626075 [Report]
Oopsie :)
Anonymous No.8626080 [Report] >>8626082
>be newfag
>see random looking off-topic posts I don't understand
>just continue on with life
Anonymous No.8626082 [Report]
>>8626080
Based. This is the correct way to browse 4plebs.
Anonymous No.8626096 [Report]
>>8626011
>Wtf happened?
1024x1024 train resolution
Anonymous No.8626100 [Report]
>>8626017
>"""just"""
there's actually a lot of hires knowledge missing, textures are smudgy, eyes, film grain, etc etc, the model should be pretrained at that reso tbqh. similar story with vpred and ztsnr, it works on paper but when you actually try to train it...
Anonymous No.8626117 [Report] >>8626118 >>8626119 >>8626141
>ai is trash
Meanwhile im getting all i can imagine
Anonymous No.8626118 [Report] >>8626121 >>8626128 >>8626137
>>8626117
this looks terrible like all ai videos outside of google veo
Anonymous No.8626119 [Report] >>8626125
>>8626117
Is this huanyuan or whatever?
Anonymous No.8626121 [Report]
>>8626118
>google veo
That shit looks pretty bad too though.
Anonymous No.8626122 [Report]
who is lil bud fighting with?
Anonymous No.8626124 [Report]
:skull:
Anonymous No.8626125 [Report] >>8626127 >>8626141
>>8626119
This one is Wan Vace, i'm still exploring it, there is so much stuff to try with it.
Anonymous No.8626127 [Report]
>>8626125
Nice. I thought wan couldn't do anime at all. I'm too busy drinking snake oil here to try it though.
Anonymous No.8626128 [Report]
>>8626118
An amateur of DEI gens, truly an /h/ oldfag
Anonymous No.8626137 [Report]
>>8626118
It's funny you mention that. I was looking at some live2d animations just a second ago and there is some really bad stuff out there, honestly kind of worse than what he genned. People forget that there's a sea of garbage AI or not, and in the end AI is not the worst enemy, it's the people using it and whether they have some sense not to post garbage onto the internet.
Anonymous No.8626138 [Report]
>>8625963
Now that's a Comfy Background.
Anonymous No.8626141 [Report]
>>8626117
>>8626125
Mind sharing a workflow, even if it's borked? Maybe it's time to retry videogen
Anonymous No.8626190 [Report] >>8626204
Anonymous No.8626198 [Report]
some sisterly love for tonight
Anonymous No.8626204 [Report]
>>8626190
Nice thigh gap.
Anonymous No.8626219 [Report] >>8626235
Reforge just started throwing random errors every gen but I haven't pulled in a while...is this it?
Anonymous No.8626227 [Report]
>>8626002
I love her expression
>I was here all dressed up like a whore so I can get some shikikan dick.
>He's busy fucking Taihou. TAIHOU
>I'll have to satisfy myself with Takao's dildo.
Anonymous No.8626228 [Report] >>8626390
r34 comment section has breached containment
Anonymous No.8626230 [Report] >>8626238 >>8626239 >>8626277
Holy shit I just genned at 2048 res using kohya + ft extract and it just werked as if it was a native 2048 model, even with my real world 400 token prompt, with other loras applied, with negpip, with tons of prompt editing hackery.
LFG TO THE MOON BRAHS
Anonymous No.8626235 [Report] >>8626290
>>8626219
have you tried refreshing the webui
you probably just have some random option toggled or forgot you have s/r x/y plots on and no longer have what it's searching for in prompt and it's breaking your shit.
Anonymous No.8626238 [Report]
>>8626230
>I just [snake oil]
Anonymous No.8626239 [Report] >>8626240
>>8626230
Yeah, I am really liking to gen on a higher base gen resolution, is quite handy
Now if we only had a proper smea implementation on local...
Anonymous No.8626240 [Report] >>8626242 >>8626243
>>8626239
>Now if we only had a proper smea implementation on local...
The SMEA implementation on local is the proper one, NAI came out and said that they fucked it up on their own but it still made their model produce the kind of very awa crap asians love so they kept it
Anonymous No.8626242 [Report]
>>8626240
>The SMEA implementation on local is the proper one
LOL
Anonymous No.8626243 [Report]
>>8626240
ggs then, I need another workflow to really take advantage of this hack, I'm not totally happy with my final results
Anonymous No.8626244 [Report] >>8626246 >>8626291
Hopefully someone writes an easy rentry for brainlets. I don't yet see the benefit.
Anonymous No.8626246 [Report]
>>8626244
There isn't any, it's just more pointless tinkertrooning by cumfy autists
Anonymous No.8626267 [Report] >>8626277
cumfy bwos, our genuine response?
Anonymous No.8626277 [Report]
>>8626230
Hmm, ok so maybe I spoke a bit too soon. I just tested it with background/scenery-focused prompts and the image content is quite a bit different from what the model normally generates.
Maybe this isn't suitable for all prompts, art styles, and loras, though I'm surprised it worked so well with my first prompt.

>>8626267
What? I'm not using the new vae that was posted, this is literally just a lora you can load up in reforge.
Anonymous No.8626280 [Report]
recommended training steps for this simple design? its a vrc avatar so mostly 3d data
Anonymous No.8626290 [Report]
>>8626235
I did have x/y plots enabled but that wasn't it. It was just randomly crashing in the middle of doing gens. I cleared the cache and it fixed itself it seems, I guess something there was causing the error.
Anonymous No.8626291 [Report] >>8626374 >>8626444
>>8626244
The only real benefit is to completely skip the upscale step
This is what I wanted RAUNet to be, a way to do extreme resolutions directly while having all the ""diversity"" and ""creativity"" of a regular base gen so I am very happy with it
Anonymous No.8626301 [Report] >>8626436
Yeah, i can't gen with this vae without --highvram, and i can't do that, cause i'm an 8gb vramlet.
Anonymous No.8626342 [Report] >>8626392 >>8629446
>1000's of gooner artists
>stick to about 10-15 that I rotate and mix about in my mixes
>enough is enough!
>spend 30 minutes browsing artists in the booru
>note down a few I like
>go full autismo mixing and weighing
>smile and optimism: restored
Ah. You were at my side all along..
https://files.catbox.moe/s38kxh.png
Anonymous No.8626374 [Report] >>8626815
>>8626291
What are your settings? I'm getting good gens (generally the same composition, colors, coherency, etc.) with some prompts but very much not others, and it also varies with block size and downscale factor: some prompts work better with certain block size and downscale factor combinations, but some never achieve the same quality/coherency as the vanilla setup with no lora. I haven't messed with the other settings though, so maybe those help?
Anonymous No.8626378 [Report]
Anonymous No.8626390 [Report]
>>8626228
limitless girl looking for a limitless femboy to ruin :333
Anonymous No.8626392 [Report] >>8626422 >>8626481
>>8626342
for me it's testing 20k artists and realizing how many of them are unremarkable
Anonymous No.8626404 [Report]
>>8625744
funny that right after i complained about upscaling mangling belly buttons again a new snake oil to tackle it comes out
im a vramlet so im just using the lora on an upscale pass instead of genning straight to higher res but it seems to work
https://files.catbox.moe/w08kbf.png
https://files.catbox.moe/empa0r.png
Anonymous No.8626422 [Report] >>8626423 >>8626481
>>8626392
80% of danbooru artists are completely interchangeable style wise, and then you find some guy who has an amazing unique style and he has 3 posts on danbooru and a twitter that is just him uploading his gacha pulls
Anonymous No.8626423 [Report]
>>8626422
I love the ones where you see some damn amazing pic and it's either one of three of his on the whole internet or all his other pics don't look as good.
Anonymous No.8626435 [Report] >>8626437
Anonymous No.8626436 [Report]
>>8626301
Buy used 3090 if you can, it's super cheap used right now.
Anonymous No.8626437 [Report] >>8626438
>>8626435
nice composition on this one
care to box it up?
Anonymous No.8626438 [Report] >>8626439
>>8626437
https://files.catbox.moe/4kbhpa.png
inpainting and color correction img2img passes were used later
Anonymous No.8626439 [Report]
>>8626438
cool. thanks bwo
Anonymous No.8626444 [Report] >>8626460 >>8626480
>>8626291
>The only real benefit is to completely skip the upscale step
But you're just losing the advantages of the second pass, which are mostly adding a lot of detail and removing leftover noise. Ideally a model trained like that should never break anything on either pass, giving better consistency at higher denoise levels on the second pass; think of how some models would double belly buttons or the like with just hiresfix, especially at landscape reso. Have you tried doing it like 1216x832 then upscale?
Anonymous No.8626460 [Report] >>8626469 >>8626514
>>8626444
I mean it's not like the "second pass" is some kind of magic, you're just running the same model with a denoise pass at a higher resolution. A better base res gets you 100% of the potential instead of the 10-50% (or whatever) of the denoise amount.
Of course assuming it works well, which is somewhat debatable.
Anonymous No.8626469 [Report]
>>8626460
I would also like to add that denoising by 0.3 or whatever doesn't actually mean you are changing that many pixels. Going by RMSE, the base upscale and the denoised picture are like 95% similar at 0.3 denoise, 93% at 0.5.
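For reference, one way to get that kind of "similarity" number is 1 minus the RMSE over [0,1]-normalized pixels (the anon may have computed it differently; this is just an illustrative sketch):

```python
import numpy as np

def pixel_similarity(img_a, img_b):
    """1 - RMSE between two uint8 images normalized to [0, 1]."""
    a = np.asarray(img_a, dtype=np.float64) / 255.0
    b = np.asarray(img_b, dtype=np.float64) / 255.0
    return 1.0 - float(np.sqrt(np.mean((a - b) ** 2)))

# identical images score 1.0; a uniform offset of ~13/255 already reads ~0.95
```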
Anonymous No.8626480 [Report] >>8626514
>>8626444
>and remove leftover noise
There's no noise in a fully denoised picture, anon.
Anonymous No.8626481 [Report]
>>8626392
>>8626422
Yeah I did run into a lot of high quantity artists that shared similar styles. To no surprise, a focus on gacha sluts. Try them on a model and if you're not inputting the artists yourself, you'd swear your results were all the same. But it's fun throwing in the few artists that do stand out into a mix and seeing what happens.
https://files.catbox.moe/sa409h.png
Anonymous No.8626502 [Report] >>8626756
What Cyber-Wifu11 using? Can't replicate his style
Anonymous No.8626514 [Report] >>8626516
>>8626460
>10-50%
Yes, the limitation of the base model is why denoise on the second pass was always kept low. If a multires model works really well, there should be some boost there, allowing you to raise it higher than 0.5 while preserving consistency and still getting details, just like controlnet and other tools did
>>8626480
Sometimes there is, despite the first pass fully denoising, but rarely after the second. But yeah, not really an issue for the latest models
Anonymous No.8626516 [Report] >>8626561
>>8626514
>Sometimes there is
No, there is no noise by definition retard. Here's what the image would look like at 0.09 (de)noise.
Anonymous No.8626518 [Report] >>8626524 >>8626671 >>8626677 >>8626756
I'm tired of having fade colors in my generated pics, is there anything I can do to have nice bright colors (not fried/saturated)?
Anonymous No.8626524 [Report] >>8626525
>>8626518
download photoshop
Anonymous No.8626525 [Report] >>8626527
>>8626524
or krita
Anonymous No.8626527 [Report]
>>8626525
/hdg/ is on the other tab bwo
Anonymous No.8626561 [Report] >>8626755
>>8626516
It's not as pronounced as stopping ~3 steps early out of 28. Did you really never get outputs with some noisy parts somewhere on the image?
Anonymous No.8626571 [Report] >>8626572 >>8626573 >>8626594 >>8626715 >>8626756 >>8626758
Does anyone know some cute style (artists or loras) I can use to generate petite/slim girls? (not lolis!)
Anonymous No.8626572 [Report] >>8626578
>>8626571
laserflip
Anonymous No.8626573 [Report] >>8626582 >>8626585
>>8626571
cromachina
Anonymous No.8626575 [Report]
I want Pekomama.
Anonymous No.8626576 [Report]
>>8625413
There is something wrong with the comfy node. It downscales the output image by 2x, from 1024 to 512, for some reason, and the tile decode node is just completely fucked when using it
Anonymous No.8626578 [Report] >>8626579 >>8626580 >>8626586
>>8626572
Ugly manfaces.
Anonymous No.8626579 [Report] >>8626581
>>8626578
Are you sure you are not gay?
Anonymous No.8626580 [Report] >>8626581
>>8626578
huh, so you're fine with everything else?
Anonymous No.8626581 [Report]
>>8626579
Yes, I'm sure I like my girls girly and not manly.
>>8626580
You mean realistic hairy genitalia and such? Whatever, that stuff can have its place, but I would never tolerate those faces.
Anonymous No.8626582 [Report] >>8626590 >>8626715
>>8626573
That's literally loli... No juvenile stuff please
Anonymous No.8626585 [Report]
>>8626573
Yoink!
Anonymous No.8626586 [Report]
>>8626578
yeah i was trolling. try imo-norio or soso
Anonymous No.8626587 [Report]
I unironically like laserflip, he's a staple of grosscore
Anonymous No.8626590 [Report]
>>8626582
no.
Anonymous No.8626594 [Report]
>>8626571
fellatrix
Anonymous No.8626595 [Report]
Does he know?
Anonymous No.8626597 [Report] >>8626605
>google fellatrix
>get some obscure 2005-core portugese trash metal album
cool
Anonymous No.8626605 [Report] >>8626607
>>8626597
if you don't know hentai pillars, you don't belong here
Anonymous No.8626607 [Report] >>8626609
>>8626605
for me it's aaaninja
Anonymous No.8626609 [Report]
>>8626607
i was actually thinking of edithemad but he also fits
Anonymous No.8626615 [Report] >>8626616
Is there a way to do hires without messing anything up? It feels like no matter how I dial the settings, things like straight lines will become wobbly, things and tons of details will be erased, while other unnecessary and nonsensical details will be added.
Anonymous No.8626616 [Report] >>8626621
>>8626615
CN
Anonymous No.8626618 [Report]
>Gonna drink from your usual bottle sir?
https://files.catbox.moe/zlkovc.png
Anonymous No.8626620 [Report] >>8626631
>gen at 1536x1536 without the lora just out of curiosity
>it more or less just works with the particular pose i tried
These newer models are really stable compared to what we had before, if I tried to gen at 1.5x on 1.5 it'd just melt into a blob pancake all over the picture
Granted if you try to do more complicated poses you still get fucked up shit but it's still interesting
Anyway, I am liking that 1536 stabilizer lora, yes there is some style influence but it looks pretty worth it. I gotta try resizing it and seeing what will come out.
Anonymous No.8626621 [Report] >>8626643
>>8626616
What's that?
Anonymous No.8626631 [Report] >>8626645
>>8626620
>I gotta try resizing it
Oh, you can't.
Anonymous No.8626643 [Report] >>8627147
>>8626621
oh nyo nyo nyo~
Controlnet. If you use a noob model, get the epstile controlnet from hf or civit and call it a day.
Anonymous No.8626645 [Report] >>8626694
>>8626631
Oh, you can. Just not using non-dynamic methods?
Anonymous No.8626671 [Report]
>>8626518
gimp pepper tool
Anonymous No.8626676 [Report] >>8626698
Testing the 1536 lora more now without any Kohya stuff at normal resolution and honestly for quite a bunch of my old prompts it is negatively affecting the coherence and prompt following full stop. Probably only going to use it for hires pass.
Anonymous No.8626677 [Report]
>>8626518
Use Vpeed
Anonymous No.8626692 [Report] >>8626701
in swarm is there a way to activate a lora only for a certain step count? like prompt editing
Anonymous No.8626694 [Report]
>>8626645
ogey nevermind they just don't even show up
shame, and i wonder why it works that way
Anonymous No.8626698 [Report]
>>8626676
I'd like to see some examples.
>quite a bunch of my old prompts
Are you comparing cherry picked images to images generated with the lora?
Anonymous No.8626701 [Report] >>8626703
>>8626692
>swarm
getchu hands workin on that comfy backend, gay bro. this nigga trippin
Anonymous No.8626703 [Report]
>>8626701
i'll make you swallow your teeth and poop them into my mouth if you keep talking to me like that lil bro
Anonymous No.8626705 [Report] >>8626711 >>8626762
Now that Civit nuked all Loras for making deep fakes, what's the go-to site for Loras?
Anonymous No.8626707 [Report] >>8626708 >>8626743
>>8625776
>https://huggingface.co/nblight/noob-ft
What did you train this on anyway? And what network type is the extract?
I've been unsuccessfully trying to resize that stuff to test.
Anonymous No.8626708 [Report] >>8626721
>>8626707
That's locon extract in the fixed mode. You can't resize that shit.
Anonymous No.8626711 [Report] >>8626716
>>8626705
Wrong thread, I think you want to check >>>/aco/?
Anonymous No.8626715 [Report]
>>8626582
depends more on your prompt than on the style

>>8626571
I usually end up going for pseudo-chibi stuff at that point, like Harada Takehito
Anonymous No.8626716 [Report] >>8626727 >>8626764
>>8626711
Let me rephrase it, that anon doesn't know how to ask questions properly:
>Now that Civit nuked all Loras for making lolis, what's the go-to site for Loras?
Anonymous No.8626721 [Report]
>>8626708
It's... over!
I tried to get geepeetee to fix it and it did get me further along with the static resizing but yeah it also just won't boot. Too bad.
Anonymous No.8626727 [Report]
>>8626716
Jokes on you, I train my loli loras myself.
Anonymous No.8626735 [Report] >>8626744 >>8626759 >>8626766
Anonymous No.8626743 [Report] >>8626749
>>8626707
>What did you train this on anyway?
4776 images of various booru slop, all of the images were personally checked by myself. I don't think the captions turned out too great though, so as a finetune it's kinda borked.
>And what network type is the extract?
I believe it's a locon.
>I've been unsuccessfully trying to resize that stuff to test.
I extracted it using the script from https://rentry.org/lora-is-not-a-finetune. You can extract the difference yourself by subtracting 1024x ft from 1536x ft and resize it however you want.
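The subtraction idea above can be sketched with the safetensors API. Filenames here are placeholders, not the actual uploads, and this only yields a raw weight delta; turning it into a rank-limited LoCon still needs a decomposition script like the one in the rentry.

```python
import torch

def weight_delta(base: dict, tuned: dict) -> dict:
    """Subtract the base checkpoint from the tuned one, key by key,
    keeping only keys present in both with matching shapes."""
    return {
        k: (tuned[k].float() - base[k].float()).to(torch.float16)
        for k in tuned
        if k in base and base[k].shape == tuned[k].shape
    }

if __name__ == "__main__":
    from safetensors.torch import load_file, save_file
    # Placeholder filenames for the 1024x and 1536x finetunes.
    base = load_file("noob_ft_1024.safetensors")
    tuned = load_file("noob_ft_1536.safetensors")
    save_file(weight_delta(base, tuned), "noob_1536_delta.safetensors")
```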
Anonymous No.8626744 [Report] >>8626768
>>8626735
box?
Anonymous No.8626749 [Report] >>8626836
>>8626743
I meant the base model. Vpred 1.0?
Also, am I correct in assuming you baked this with the guide? Can you share your script?
It's all interesting stuff, I wonder how well it can be put into place with a bigger dataset and other optimizations.
Anonymous No.8626755 [Report]
>>8626561
Not him but I do with 102d but that's probably a me problem.
Anonymous No.8626756 [Report] >>8627172
>>8626502
Just make a lora.
>>8626518
Use cd tuner.
>>8626571
tiangling duohe fangdongye
Anonymous No.8626758 [Report]
>>8626571
ciloranko unironically
Anonymous No.8626759 [Report]
>>8626735
Nice job
Anonymous No.8626762 [Report] >>8626777
>>8626705
This is literally the reason why you learn to fish. Now you'll starve.
Anonymous No.8626764 [Report] >>8626783 >>8626792
>>8626716
What obscure artists are you trying to prompt that you need a lora?
Anonymous No.8626765 [Report] >>8626770 >>8626771 >>8626772 >>8626783 >>8626789 >>8626794
unironically, what loras do you guys still seem to be baking 24/7? Are you going out of your way to find some artist that doesn't work on base noob just to bake a lora, or do you actually feel like the models are lacking in built-in styles?
Anonymous No.8626766 [Report] >>8626768
>>8626735
nice gen bwo, would you care to box this one up?
i quite like the style
Anonymous No.8626768 [Report] >>8626774
>>8626744
>>8626766
The lora is something I am testing after training using a config shared last thread
<https://files.catbox.moe/74brn8.png>
Anonymous No.8626770 [Report] >>8626781
>>8626765
I just like building datasets and training in general. You can never really run out of things to train.
Anonymous No.8626771 [Report] >>8626781
>>8626765
bwo im just scrolling pixiv, looking for artists that have unique and interesting styles that aren't promptable or are poorly replicated in noob. its more of just collecting styles, i just find baking fun - its the gacha game i play desu
Anonymous No.8626772 [Report] >>8626781
>>8626765
I don't particularly like most of the baked styles; they "kind of" resemble the artist at best. However, that's fine if you mix like 10+ of them, I suppose.
Anonymous No.8626774 [Report] >>8626776
>>8626768
thanks bwo thats an interesting style mix
do you mind letting me know which config is it? there was a couple last thread
Anonymous No.8626776 [Report] >>8626779
>>8626774
It's probably faster to reupload it myself
https://files.catbox.moe/a22hr0.toml
Anonymous No.8626777 [Report] >>8626778
>>8626762
I'd rather not redo all that effort if there's a place where someone already did it before me.
Anonymous No.8626778 [Report]
>>8626777
>wasted trips
I mean I like going out to eat too but that doesn't mean I don't know how to cook.
Anonymous No.8626779 [Report]
>>8626776
thanks, i'll give it a try
Anonymous No.8626781 [Report]
>>8626770
>>8626771
>>8626772
alright. I personally see lora baking as a chore that is useful to achieve some other goal, but if you like the process itself guess it makes sense.
>However, that’s fine if you mix like 10+ of them, I suppose.
yeah that's what I almost always do. I agree though, most of built-in noob artist tags aren't great on their own
Anonymous No.8626783 [Report] >>8626786
>>8626764
>>8626765
>teruya 6w6y
>doctor masube
>hantamonn
I like throwing those guys a few bucks for a dataset at least.
Anonymous No.8626786 [Report]
>>8626783
teruya is pretty great to use as one of stabilizer loras for base noob btw, very neutral style and the lora is well-baked imo
Anonymous No.8626789 [Report]
>>8626765
A couple of charas and artists but it's mostly curiosity.
I only use one on the regular, I'm a simple man and Noob is a good model.
Anonymous No.8626792 [Report] >>8626796
>>8626764
Me? I was just trying to help bro ask a question. I don't even use artist loras - for me it's either style loras that aren't artist loras (to spice things up in conjunction with artist mixes) or NAI nowadays.
But any artist with a large image count could benefit from a focused dataset of his best or most representative works, so having a place with loras is better than not having one, especially with competent bakers. I'd gladly download loras for artists that are already recognized if they were done well.
Anonymous No.8626794 [Report]
>>8626765
cat girl science is never finished
Anonymous No.8626796 [Report]
>>8626792
do we have any competent bakers here?
Anonymous No.8626809 [Report]
Dear /hgg/, today I shall attempt my first bake. Wish me luck.
Anonymous No.8626815 [Report]
>>8626374
>What are your settings?
Pretty much the same as my regular gen settings, I just added the new lora at the beginning of the prompt for mere convenience and set a higher base resolution, nothing else

> some prompts just never achieve the same quality/coherency as the vanilla setup with no lora
This has only happened to me for already hard and very gacha prompts, otherwise most of the time I get the expected results from my prompts
Anonymous No.8626819 [Report] >>8626822 >>8626823 >>8626856
masturbation is /e/ or /h/?
Anonymous No.8626822 [Report]
>>8626819
without toys /e/ but well you know
Anonymous No.8626823 [Report] >>8626830
>>8626819
that's alot of pussy juice
Anonymous No.8626830 [Report]
>>8626823
that's what happens when she sees you anonie
Anonymous No.8626836 [Report]
>>8626749
>Vpred 1.0?
yes
>Also, am I correct in assuming you baked this with the guide?
no, i just took a lora extraction script from there
>I wonder how well it can be put into place with a bigger dataset and other optimizations.
This already took 2:43 per epoch for 1536x and 1:10 per epoch for 1024x on average. 1024x finetune took about 18 hours, and 1536x one took 42, so the whole thing is about 60 RTX 3090 hours or 2.5 days. Using 4-bit optimizer and bf16 gradients. There's no way to optimize it further unless you're into offloading gradients to RAM.
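Reading those per-epoch times as hours:minutes, the numbers are self-consistent: both runs come out to roughly the same epoch count, and the totals add up to 2.5 days. A quick check:

```python
# Per-epoch times from the post, read as hours:minutes.
epoch_1536_min = 2 * 60 + 43   # 2:43
epoch_1024_min = 1 * 60 + 10   # 1:10

# Implied epoch counts: both come out around 15, so the runs line up.
epochs_1536 = 42 * 60 / epoch_1536_min
epochs_1024 = 18 * 60 / epoch_1024_min

total_days = (42 + 18) / 24
print(round(epochs_1536, 1), round(epochs_1024, 1), total_days)
# → 15.5 15.4 2.5
```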
Anonymous No.8626852 [Report]
>more random tests
>still getting pretty much flawless gens at 1500x1500 without the lora with the right pose
it really is only a problem when you're trying to do like full body or on side where a significant amount of the gen is the torso, I'm surprised at how well Noob handles 1.5x even though I used 1.25x before.
Anonymous No.8626856 [Report]
>>8626819
/e/ as long as it's limited to fingering.
Anonymous No.8626873 [Report] >>8626876 >>8626886 >>8626888 >>8626889 >>8626891 >>8626892 >>8626901
i don't use AI, is this AI?
https://danbooru.donmai.us/posts?tags=eco_376124
Anonymous No.8626876 [Report] >>8626878
>>8626873
yes
Anonymous No.8626878 [Report] >>8626883
>>8626876
Really? I'm not sure if it's assisted but isn't this resolution too high to not make the noise/splotches or whatever?
Anonymous No.8626880 [Report]
any butiful artists similar to melon22?
Anonymous No.8626883 [Report] >>8626892
>>8626878
I mean he has supposed painting vids on his twitter but having a style that looks exactly like shitmix ai slop (including the composition and highres but lowqual) is pretty funny
Anonymous No.8626886 [Report]
>>8626873
I really can't tell lmao, the style is kind of generic but when you zoom in to see the details and lines, everything is well polished and mostly coherent so, Idk
Anonymous No.8626888 [Report]
>>8626873
thumbnails look like some noob shitmix with nyalia lora kek but its way too clean upscaled
probably ai-assisted with painting over a gen? there are plenty of artists who do this.
Anonymous No.8626889 [Report]
>>8626873
Either an artist with an unfortunate generic style or ai-assissted
Anonymous No.8626891 [Report]
>>8626873
Rule of thumb is: if this "artist" appeared and used this style after 2023, it is AI
Anonymous No.8626892 [Report] >>8626894
>>8626873
>>8626883
>painting vids
looks like he's making a colored sketch, runs it through img2img or whatever, and then uses it as a reference for little details, shading, etc
https://x.com/Eco_376124/status/1781569033235763537/video/1
Anonymous No.8626894 [Report] >>8626900
>>8626892
yeah real artist then, very fucking weird he takes the slop as reference tho
Anonymous No.8626900 [Report] >>8626916
>>8626894
>yeah real artist then
to be fair he probably uses ai as a reference for colored sketch too
Anonymous No.8626901 [Report]
>>8626873
So is it that he is a bad artist or has AI art gotten so good that people have to zoom in and examine each pixel to tell?
Anonymous No.8626906 [Report] >>8626912
Damn
Is this Novel AI?
https://nhentai.net/g/578788/
Anonymous No.8626910 [Report]
>the ugliest shiniest BBC slop
its local
Anonymous No.8626911 [Report]
probably a pony one, on top of that
Anonymous No.8626912 [Report] >>8626915
>>8626906
Let me guess, is this another nigga ARTIST who has 1,000 subscribers on Patreon?
Anonymous No.8626913 [Report]
Why do you people get mad when I just give the public what they want? Envy, perhaps?
Anonymous No.8626915 [Report]
>>8626912
>nigga ARTIST
yup, he's one of us!
Anonymous No.8626916 [Report]
>>8626900
that would be stupid, using the slopmachine to create a sketch to have as reference for your own, to THEN use the slop machine AGAIN to see what the full picture would look like?
I think it would serve as a learning method perhaps but idk broski
Anonymous No.8626920 [Report] >>8626923 >>8626924 >>8626933
>only making ~2,500 USD a month
time to get a real job
Anonymous No.8626923 [Report] >>8626925
>>8626920
What if he lives in Eastern Europe?
Anonymous No.8626924 [Report]
>>8626920
>only making ~2,500 USD a month
>Monthly expenses are 350
Anonymous No.8626925 [Report]
>>8626923
time to retire
Anonymous No.8626928 [Report]
>making <cash> a month
waow... wish that were me
Anonymous No.8626930 [Report] >>8626960
I live in Germany and, being migrants from the Middle East, I do not work. I receive free money from local taxes every month. I spend this money to fuck German girls.
Anonymous No.8626932 [Report]
this but im in sudanese in japan
Anonymous No.8626933 [Report] >>8626937
>>8626920
>Only make 2,500 usd a month
>but live in a 3rd world shithole where income tax is pretty much nonexistent
Time to move out of commiefornia, anon.
Anonymous No.8626937 [Report]
>>8626933
Florida has no income tax.
Anonymous No.8626942 [Report] >>8626948 >>8627150
Why do Americans eat raw cookie dough?
Anonymous No.8626948 [Report]
>>8626942
Not like their "normal" food is any better
Anonymous No.8626957 [Report]
American website
Anonymous No.8626960 [Report] >>8626965
>>8626930
germcuck, is that you?
Anonymous No.8626965 [Report]
>>8626960
I am Abdul. And yesterday I fucked your girlfriend. You will raise my child.
Anonymous No.8626972 [Report] >>8626979 >>8626987 >>8627014
any performance gains on the horizon? or are we buckling down for heavier models with minuscule improvements? is there a way I can set reforge to use less resources and work slower so it doesn't slow me to a crawl while genning?
Anonymous No.8626979 [Report]
>>8626972
right now we can expect gazillion B models that still gen at 1024x1024
but use fp16 accumulation if you can
Anonymous No.8626987 [Report]
>>8626972
There's sage attention too, but I don't know if it works with reforge and its impact on quality is disputed here, though personally I haven't noticed a difference.
Anonymous No.8626997 [Report] >>8627003
https://x.com/anakasakas
https://www.pixiv.net/en/users/16943821
AI-assisted?
Anonymous No.8626998 [Report] >>8627024
this is kinda an odd question but does anyone have nsfw eng onomatopoeia sound effects that work in photoshop? I've looked around and seen a bunch for sale that looked alright but I'm surprised I can't find any for free. Mainly looking for like slurp/lick effects.
Anonymous No.8627003 [Report]
>>8626997
yes
Anonymous No.8627005 [Report] >>8627068
Retard that just started. I'm looking for a model with a style like this, any recs?
Anonymous No.8627007 [Report]
yes quite
https://files.catbox.moe/60cfrv.png
i still yearn for models that don't gen at ant resolutions natively
Anonymous No.8627014 [Report] >>8627026
>>8626972
>or are we buckling down for heavier models
Yes
>with minuscule improvements?
Nah I think 16ch vae alone is a huge improvement
but desu this is probably my main fear for local, inference speed is just going to go to shit, hopefully someone figures out some kind of speedboost for lumina
Anonymous No.8627024 [Report] >>8627045
>>8626998
I have some of very questionable quality
>https://files.catbox.moe/6juv4e.svg
>https://files.catbox.moe/i5gyab.png
>https://files.catbox.moe/wappne.svg
>https://files.catbox.moe/dk3us3.svg
Anonymous No.8627026 [Report] >>8627033 >>8627099
>>8627014
I mean there's absolutely room for improvement, I'm just doubtful local will get it. Still nowhere close to nai inpainting.
Anonymous No.8627033 [Report]
>>8627026
Essentially no local improvements were anticipated, I wouldn't wait but I wouldn't be surprised if some rando was baking a magic 2048x2048 model in his backyard.
Loras were a minor paper sidenote that got revived by some random fuck that wanted to gen better text or images iirc.
Anonymous No.8627045 [Report] >>8627050
>>8627024
awesome thanks, im pretty new to photoshop so is there an easy way to use these or do you just have to manually lasso them out? maybe im missing something.
Anonymous No.8627050 [Report] >>8627059
>>8627045
If I am being completely honest with you I have no idea lmao, I just happened to have those lying around
Anonymous No.8627059 [Report] >>8627060 >>8627066
>>8627050
I tried it real quick and my idea worked: open the image you want to add it to, then open the svgs in photoshop, lasso them out and drag them to your image. I just need to figure out how to outline them though.
Anonymous No.8627060 [Report]
>>8627059
you stroke em hard (actually, the layer effect is called stroke)
Anonymous No.8627065 [Report]
is there an uncensored img2vid website yet? im trying google framepack but for some reason its taking the prompt image as if its supposed to be the last, not to mention the slow-mo..
Anonymous No.8627066 [Report]
>>8627059
right click layer>blending options>stroke(you can also go from the edit drop down to stroke but that's less flexible and kind of pointless)
or you can do the fancier thing and duplicate the layer, put a color overlay on the one that's below and shift it down and to the right by a few pixels.
there are other ways but those are the easiest.
Anonymous No.8627067 [Report] >>8627075 >>8627078 >>8627093
I'm sure a 5070 ti will be a big jump over my current 3080, but will it give me headroom to play video games on the side while generating 1024s?
Anonymous No.8627068 [Report] >>8627128
>>8627005
Not really about the model but about the artists. Resolution says it's noob anyway. Without metadata you can't know what artists those are. Go through danbooru and try to find someone similar.
Anonymous No.8627075 [Report]
>>8627067
I can play and gen on my 4070ti super so I think you would be fine
Anonymous No.8627078 [Report]
>>8627067
A lot of games don't really vibe with how python handles GPUs desu
I couldn't even play Daggerfall Unity when I was baking lmao
Anonymous No.8627093 [Report]
>>8627067
I can play stuff like Total War Warhammer 3 on a 4070 ti super while genning and it's fine as long as I'm using tiled vae encoding and decoding. Most things are okay as long as I don't hit the 16GB ceiling and drop to 2fps.
Anonymous No.8627099 [Report]
>>8627026
>nai inpainting.
Maybe LAX will blow some compute on an inpainting model for v2, who knows
that was one thing they didn't do for v1 so I guess it'd be appropriate for their studies or whatever
Anonymous No.8627128 [Report] >>8627153
>>8627068
It looks like ChatGPT.
Anonymous No.8627147 [Report] >>8627155 >>8627256
>>8626643
The example on the civitai looks washed out compared to the lowres doe... But yeah it's still more accurate but also maybe too much? When I look at it and how little it changes the image, I feel like maybe just a traditional esrgan upscale would work just as well. At which point there's not that much meaning in upscaling.
Anonymous No.8627150 [Report]
>>8626942
I don't usually make cookies but I tried it once and it tasted pretty good so I can see why people would eat that.
Anonymous No.8627153 [Report] >>8627163
>>8627128
I thought all sora gens come with pony-tier sepia?
Anonymous No.8627155 [Report] >>8627256
>>8627147
Give it a try first. In my experience multidiffusion is superior but it hallucinates so I switch to controlnet when that fails. I do 0.8 end step and 1 control weight and I rarely get hallucinations. Without loras you can lower your end step to like 0.6 without issue.
Anonymous No.8627158 [Report] >>8627171
"Male legs" anon was right. Thanks again.
Anonymous No.8627163 [Report]
>>8627153
>>>/v/712654531
>>>/v/712654589
I think they prompt a different style.
Anonymous No.8627171 [Report] >>8627186
>>8627158
What was your experience with it?
Anonymous No.8627172 [Report] >>8627186
>>8626756
>cd t
Thanks for letting me know about this, seems very useful, though I've been testing values and still can't get accurate colors
Anonymous No.8627186 [Report]
>>8627171
Was trying to gen a pic of me cuddling with my wife and I didn't want to look like a crippled stump.
>>8627172
Try playing with saturation2 in txt2img for the greatest effect.
Anonymous No.8627197 [Report] >>8627453 >>8627899 >>8629033
I'm looking for a style similar to fua1heyvot4ifsr (unfortunately there is no lora and using the artist tag doesn't work much at all). I love the colors of their drawings, like that one: https://files.catbox.moe/54mzb9.jpg
Anonymous No.8627200 [Report]
Anonymous No.8627203 [Report]
One massive upside of NAI is that it makes good hands by default. No painful controlnet inpaint suffering required.
Anonymous No.8627222 [Report] >>8627226
>no ayakon
>no cathag
>no highlights
it's so over it hurts...
Anonymous No.8627223 [Report]
are you lost kid
Anonymous No.8627224 [Report] >>8627228
Anonymous No.8627226 [Report]
>>8627222
you're in the wrong thread
Anonymous No.8627227 [Report]
where is the rest of it, you forgot your box anon
Anonymous No.8627228 [Report]
>>8627224
Nice. Out of frame censoring is my new favorite tag.
Anonymous No.8627230 [Report] >>8627240
Anonymous No.8627233 [Report] >>8627926
Anonymous No.8627236 [Report] >>8627348
>prompt for 3 girls
>get just 1 or 2 most of the time
Anonymous No.8627240 [Report] >>8627241
>>8627230
very sexy
Anonymous No.8627241 [Report] >>8627245
>>8627240
its nai
Anonymous No.8627245 [Report] >>8627247
>>8627241
It's flux
Anonymous No.8627247 [Report]
>>8627245
its nai
Anonymous No.8627250 [Report] >>8627254 >>8627257
Anonymous No.8627252 [Report] >>8627254 >>8627257
Anonymous No.8627254 [Report] >>8627261
>>8627252
>>8627250
Do nerdy girls wax? Would be cuter with pubes.But also still cute.
Anonymous No.8627256 [Report] >>8627260
>>8627147
>The example on the civitai looks washed out compared to the lowres doe
Never go by the examples on civit. Remember the image used by vpred 1.0? That said, it's a crisp af upscale and can really pop details or smooth things out depending on the sampler you use. You can try this >>8627155 but what personally works for me is
weight: 0.5 - 0.6
starting step: default
end step: 0.85 - 0.9
Allows you to crank up denoise to really iron issues out.
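For reference, here's roughly how those numbers map onto an img2img call through the webui API. The unit fields follow the sd-webui-controlnet extension's API and may be named differently in your fork or version; the model name is a placeholder.

```python
def controlnet_unit(image_b64, weight=0.5, end=0.9):
    """One ControlNet unit dict; field names per the sd-webui-controlnet API."""
    return {
        "enabled": True,
        "module": "tile_resample",
        "model": "noob_controlnet_tile",  # placeholder model name
        "weight": weight,                 # 0.5-0.6 per the post above
        "guidance_start": 0.0,            # default starting step
        "guidance_end": end,              # 0.85-0.9
        "image": image_b64,
    }

def hires_payload(prompt, image_b64, denoise=0.6):
    """img2img payload with the tile controlnet holding structure."""
    return {
        "prompt": prompt,
        "init_images": [image_b64],
        "denoising_strength": denoise,    # CN lets you push this higher
        "alwayson_scripts": {
            "controlnet": {"args": [controlnet_unit(image_b64)]},
        },
    }

# POST this to http://127.0.0.1:7860/sdapi/v1/img2img with requests or curl.
```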
Anonymous No.8627257 [Report] >>8627261
>>8627250
>>8627252
Whew... b-box?
Anonymous No.8627260 [Report] >>8627448
>>8627256
What sampler?
Anonymous No.8627261 [Report] >>8627263
>>8627254
Not all nerds are unkempt.

>>8627257
ckpt: https://civitai.com/models/1595884
lora: https://civitai.com/models/1678888/illustrious-swirly-glasses-black-delmo-aika

masterpiece, best quality, high contrast,
1girl, simple background, blush, sitting, facing viewer, wide hips, white panties, sweat, <lora:Swirly_Glasses_Delmo_ilxl_v1.0:0.8>S_G_D, brown hair, short hair, (coke-bottle glasses:1.2), s_clothes, red ascot, black dress, white underwear, white thighhighs, excessive pubic hair, covering face, embarrassed, pussy, clitoris, skirt lift, sitting, pussy focus, from below, panties aside, pussy focus, close-up,
Anonymous No.8627263 [Report]
>>8627261
>comfy
Delicious. Thanks anon.
Anonymous No.8627269 [Report] >>8627281 >>8627698 >>8629005
https://files.catbox.moe/5uakfu.png
https://files.catbox.moe/i9shbm.png
https://files.catbox.moe/uab6y0.png
https://files.catbox.moe/onvjf0.png
Anonymous No.8627271 [Report] >>8627276 >>8627321
Anonymous No.8627276 [Report]
>>8627271
reminds me of the good old days
Anonymous No.8627281 [Report] >>8627448
>>8627269
I don't even like cunny but her thighs in that first pic look good. Must be from fishine.
>konan
Oh you're him aren't you?
Anonymous No.8627321 [Report]
>>8627271
very nice
Anonymous No.8627324 [Report] >>8627685 >>8628825
Anonymous No.8627348 [Report] >>8627406
>>8627236
>don't prompt at all
>gens keep appearing in my folder
what is going on?
Anonymous No.8627406 [Report]
>>8627348
every copy of stable diffusion is personalized
Anonymous No.8627448 [Report]
>>8627260
DDPM normally but if there's a particular style that looks better smoothed out, then I use Kekaku Log.
>>8627281
>Oh you're him aren't you?
Konanbros.. should we take this personally?
Anonymous No.8627453 [Report]
>>8627197
ok bwo, might give it a try, no promises tho
Anonymous No.8627520 [Report] >>8627532
I'm guessing there's no tag for vertical/horizontal variants of inverted nipples is there?
Anonymous No.8627532 [Report] >>8627862
>>8627520
did you try horizontal_inverted_nipples?
Anonymous No.8627577 [Report] >>8627582 >>8627583 >>8627833 >>8627848
A while back someone in this thread said that modern photoshop is impossible to pirate, so I didn't even bother to try.
Until today, when I downloaded PS 2025, installed it with the ethernet unplugged as per the instructions, then blocked PS's internet access with the firewall, and that was it.
Anonymous No.8627582 [Report]
>>8627577
Who told you that, because that's completely retarded
Anonymous No.8627583 [Report]
>>8627577
>retard said modern PS is impossible to pirate
i suspect said retard either didn't follow the instructions or got one of those badly repackaged portable versions that stop working after a while (until you add adobe shit to your hosts file)
ironically it's even easier to pirate on mac
Anonymous No.8627607 [Report] >>8627854
Man, it's incredible to think how far we've gotten and yet we are so behind tech like 4o now. No one other than the megacorps have the money to train such a model because it's also a giant LLM. And if a 4o level model came out it'd probably be too big and expensive to continue pretraining on booru unless you're a giant megacorp too, not to mention the potential issue of catastrophic forgetting if you solely train on booru and don't have anything similar to the original dataset used to train the LLM. The dependence on corpos is truly grim.
Anonymous No.8627637 [Report]
image gen?
Anonymous No.8627639 [Report] >>8627662
are there any coloring controlnets for noob?
Anonymous No.8627662 [Report]
>>8627639
Isn't that just canny edge? What would a "coloring" controlnet do?
Anonymous No.8627683 [Report]
lil bro forgot this isn't /aids/
Anonymous No.8627685 [Report]
>>8627324
box please?
Anonymous No.8627698 [Report]
>>8627269
Thank you for idol cunny anon. I will contribute with my own cunny since I've been slacking.
https://files.catbox.moe/0t2cwr.png
Anonymous No.8627728 [Report]
Anonymous No.8627821 [Report] >>8627827 >>8627831 >>8627856 >>8627864 >>8628477
Man, I went really hard on this one but looks really off, maybe turning groids into slime-like creatures wasn't a good idea after all
Anonymous No.8627827 [Report]
>>8627821
i always thought the blue man group were pretty overrated
Anonymous No.8627831 [Report]
>>8627821
smurfs be wildin' fr :skull:
Anonymous No.8627833 [Report]
>>8627577
Could you step by step it? How did you even install photoshop without a subscription?
Anonymous No.8627848 [Report]
>>8627577
that's a lot of work compared to installing gimp 'nonie.
Anonymous No.8627854 [Report]
>>8627607
That's how it's always been with any technology. The saving grace is that everything being so expensive also means it's not profitable. AI is a massive bubble right now propped up by governments and when it pops we'll see prices become more realistic and someone will try to reduce the cost of compute finally (probably the chinese since they did 48gb 4090s). However the palantir/anthropic developments make this whole issue particularly grim.
Anonymous No.8627856 [Report] >>8627860
>>8627821
I like it. Maybe feet are too small? I mean with how much closer they are to the viewer compared to thighs, it's just weird that they are so much smaller, you know?
Anonymous No.8627860 [Report] >>8627962
>>8627856
Funny, they were a little bigger on the first pass but I thought they should be smaller so I made it that way, I am bad at telling things apart with that kind of perspective
Anonymous No.8627862 [Report]
>>8627532
Nope and now I have, doesn't work unfortunately.
Anonymous No.8627864 [Report]
>>8627821
Looks fine to me anon.
Anonymous No.8627899 [Report] >>8627929 >>8632576
>>8627197
https://litter.catbox.moe/2btuysieozl6bxnw.safetensors
Anonymous No.8627926 [Report]
>>8627233
I'd make the stroke a little thicker
Anonymous No.8627929 [Report]
>>8627899
nta, but wow that's fast bwo, mine's still bakin
Anonymous No.8627962 [Report] >>8627984
>>8627860
time to learn fundies
Anonymous No.8627976 [Report] >>8627984 >>8628031 >>8628391
is v1.0+29b and 102d still relevant or are people using better models?
Anonymous No.8627982 [Report]
I have yet to see a better Noob model, so no.
Anonymous No.8627984 [Report] >>8628090 >>8628477
>>8627962
I really should, there have been many, many times where I had to discard very interesting ideas or gens just because I couldn't conceptualize how some parts should look

>>8627976
Those are still relevant and fine but if you really feel like trying out something else, give r3mix or LS Tiro a try
Anonymous No.8628031 [Report] >>8628087 >>8628153
>>8627976
I switched to using base vpred 1.0 exclusively
Anonymous No.8628087 [Report]
>>8628031
pwoof?
Anonymous No.8628090 [Report] >>8628100
>>8627984
NTA but never knew about r3mix, seems to do gens somewhat better than epsilon like it says, considering that's what i was using previously
Anonymous No.8628100 [Report]
>>8628090
yeah, r3mix is solid, I used it for some gens, all my mixes worked well there
Anonymous No.8628153 [Report]
>>8628031
Me too. I thought I was liking 102d at first but after testing a bit more it felt more limited than the base model, even if the base model is more finicky sometimes. It's not a bad model but doesn't match the things I like genning.
Anonymous No.8628169 [Report] >>8628180 >>8628211 >>8628299
i still don't get how people are genning on base
even with loras it just looks stylistically shit
Anonymous No.8628176 [Report]
I use 102d, but not sure if there's something better atm
Anonymous No.8628180 [Report]
>>8628169
must be shit loras then since they're the ones giving you style
Anonymous No.8628211 [Report] >>8628215
>>8628169
>it just looks stylistically shit
? use artist tags
Anonymous No.8628215 [Report] >>8628222
>>8628211
duh, have you seen how they look on 1.0?
Anonymous No.8628222 [Report] >>8628235
>>8628215
Too accurate, I know. Needs a bit of nyalia and very awa slapped on top.
Anonymous No.8628225 [Report] >>8628265
very substantive talk
Anonymous No.8628230 [Report]
Anonymous No.8628235 [Report]
>>8628222
"it's a more accurate model"
Anonymous No.8628265 [Report]
>>8628225
welcome to 4chan
Anonymous No.8628299 [Report] >>8628310
>>8628169
pick better loras? shitmixes are literally base noob with loras merged in it. base loraless noob is atrocious though
Anonymous No.8628310 [Report] >>8628314
>>8628299
loraless base can look good, depends on artists
Anonymous No.8628314 [Report] >>8628372 >>8628764
>>8628310
we probably have different definitions of "good"
i've seen anons post examples of "good" loraless noob gens before and to me it looked awful, melted and fried.
Anonymous No.8628372 [Report] >>8628375 >>8628390
>>8628314
>loraless noob gens before and to me it looked awful, melted and fried.
nta but depending on how you define "loraless", this pic would also be usually counted as one, do you find it super awful, melty or fried?
Anonymous No.8628375 [Report] >>8628390
>>8628372
or this one
Anonymous No.8628390 [Report]
>>8628372
>>8628375
boxo?
Anonymous No.8628391 [Report]
>>8627976
Started searching for new artists to mix so I went back to 291h as it's the best of both worlds from 29+1 and custom. More honest to the artist with just as much control. Ended up just staying with it again. Not sure why I even stopped using it. Probably just new thing autism and got a lucky gacha with custom once.
Anonymous No.8628451 [Report]
i smell a kritty
Anonymous No.8628470 [Report]
Anonymous No.8628477 [Report]
>>8627821
it is a better idea
>>8627984
better than orks too
Anonymous No.8628610 [Report]
Wall of text about negpip prompting.

I did some experimentation since there are multiple ways you could prompt with it. For instance, if the goal is to have the subject wearing a white gothic dress, you could use the following prompts (and more I didn't test).

gothic dress, (black dress,:-1.0)
gothic dress, (black dress,:-1.0) white dress,
(black:-1.0) gothic dress,
(black:-1.0) gothic dress, white dress,
(black:-1.0) gothic dress, white gothic dress,
white gothic dress, (black dress,:-1.0)
white gothic dress, (black gothic dress,:-1.0)
(black:-1.0) white gothic dress,

And then you can also test with different colors as the theme to make sure it's stable. For instance, aqua theme. This is what those prompts give, with the first column being just gothic dress.

Generally, and unsurprisingly, the longer a comma-separated segment is, the less the model seems to understand what it means. So if you want to subtract the concept of blackness from the dress, you can't just subtract black, you have to subtract black dress, and subtracting black gothic dress is not as effective. Though "white gothic dress, (black dress,:-1.0)" interestingly performed the best in terms of making everything white, while "gothic dress, (black dress,:-1.0) white dress," had more bits of the outfit in black. It makes sense why it might do that, since tag segments are sometimes interpreted as applying to different things in the image and may not necessarily describe the same thing. So she might be wearing both a gothic dress and a white dress that's not black, but not necessarily a white gothic dress, which the model might think is entirely white.
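If anyone wants to re-run the same grid, the variant list above can be built mechanically for an X/Y plot. Minimal sketch; the function name and parameters are made up, only the prompt strings come from the list above:

```python
def negpip_variants(base="gothic dress", noun="dress", color="black", target="white"):
    """Build the eight negpip prompt variants tested above, parameterized
    so other colors (e.g. aqua for an aqua theme) can be swapped in."""
    neg_noun = f"({color} {noun},:-1.0)"   # subtract e.g. "black dress"
    neg_base = f"({color} {base},:-1.0)"   # subtract e.g. "black gothic dress"
    neg_word = f"({color}:-1.0)"           # subtract just the color word
    return [
        f"{base}, {neg_noun}",
        f"{base}, {neg_noun} {target} {noun},",
        f"{neg_word} {base},",
        f"{neg_word} {base}, {target} {noun},",
        f"{neg_word} {base}, {target} {base},",
        f"{target} {base}, {neg_noun}",
        f"{target} {base}, {neg_base}",
        f"{neg_word} {target} {base},",
    ]
```

Swap color/target (e.g. to "aqua") to check stability across themes without retyping the grid.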
Anonymous No.8628698 [Report]
I'm really sad. When I prompt some artists, they increase banding. Some are fine. Is there like an anti-banding lora or something? Or is there some way to prompt it out without affecting style? Or maybe some kind of ComfyUI snakeoils?
Anonymous No.8628739 [Report] >>8628747
Speaking of lora style stabilization for base Noob, I kinda want to try getting like a thousand random top rated images and baking it to see how well a super diffuse lora would work for that "stabilization".
Anonymous No.8628747 [Report] >>8628751
>>8628739
isn't that just "very awa"
Anonymous No.8628751 [Report]
>>8628747
Maybe but I think loras tend to bake out into generic nothingness with diverse datasets more than something like direct baking in the dataset would.
That's more like well, those already existing stabilization loras, but I just don't trust civit bakers.
Anonymous No.8628754 [Report] >>8628756
Anonymous No.8628756 [Report] >>8628764
>>8628754
That face is kinda Wokadaish, what artist is that?
Anonymous No.8628761 [Report] >>8628791 >>8628796
first day ever here
Anonymous No.8628764 [Report] >>8628769 >>8628801 >>8629106
>>8628756
nanameda kei, (ciloranko, wamudraws:0.5)
https://litter.catbox.moe/8kkflndj2awokktd.png

was gonna post it for >>8628314 since nanameda barely works on any merges, but I'm not really sure where the melted/fried boundaries are
Anonymous No.8628769 [Report]
>>8628764
cool ty
Anonymous No.8628779 [Report] >>8628791 >>8628796
Anonymous No.8628791 [Report] >>8628796 >>8628803 >>8628828
>>8628761
>>8628779
Hoping it's also the last.
Anonymous No.8628796 [Report] >>8628803
>>8628761
>>8628779

>>8628791
His gens are fine, he just needs to take and post them in the right place. Go here, anon:
>>8627272
Anonymous No.8628801 [Report] >>8628877
>>8628764
that certainly looks very blurry. can't read noodleshit so not sure if it's due to some wrong settings or the nolora noob is the issue here.
Anonymous No.8628803 [Report] >>8628806
>>8628796
>>8628791
Why don't you provide some constructive feedback instead?
Anonymous No.8628806 [Report] >>8628829
>>8628803
feedback on what
Anonymous No.8628825 [Report]
>>8627324
Box please.
Anonymous No.8628826 [Report]
it's nai
Anonymous No.8628828 [Report]
>>8628791

Made this one for you pal cause you are one massive...
Anonymous No.8628829 [Report] >>8628831
>>8628806
On the gens?
Anonymous No.8628831 [Report]
>>8628829
nigga, anyone that sees 12 fingers and goes "Yeah. This is fine." is beyond feedback.
Anonymous No.8628877 [Report] >>8628878
>>8628801
it's an intentional part of the style in this case, those three aren't exactly known for sharpness

same prompt with "jadf, gin moku" instead
Anonymous No.8628878 [Report] >>8628885 >>8628896
>>8628877
my image
Anonymous No.8628885 [Report] >>8628896
>>8628878
one more
I'm done
Anonymous No.8628887 [Report] >>8628889
why is it so fried
Anonymous No.8628889 [Report] >>8628897 >>8628901
>>8628887
yeah I wonder https://danbooru.donmai.us/posts/9253382
maybe the artist just likes higher contrast
Anonymous No.8628896 [Report]
>>8628878
>>8628885
there's something *off* about those, I can't really describe what exactly. loraless noob's style looks like some sort of withered scan with fucked up contrast, doesn't look clean. i mean if it doesn't bother you it's alright, I just prefer it with a bit of loras mixed in.
Anonymous No.8628897 [Report]
>>8628889
Fried doesn't just refer to high contrast, you know.
Anonymous No.8628900 [Report]
>flat greyish image
>fried
what did 175%srgbmonitorbro mean by that
Anonymous No.8628901 [Report] >>8628915
>>8628889
What about the wobbly linework, melty/artefacted eyes/details and white glow around characters
Anonymous No.8628907 [Report] >>8628917
>>8616235
> https://rentry.org/hgg-lora
thank you
how good is 7s/it for bs3+gradient+memefficient?
Anonymous No.8628913 [Report] >>8628923
sorry, took a while to figure out what you guys want
Anonymous No.8628915 [Report]
>>8628901
Isn't that just from low res? Looks like a raw gen. Other than the glow, but it's not in the other pics.
Anonymous No.8628917 [Report]
>>8628907
how much GA?
Anonymous No.8628919 [Report]
>>8616235
> https://rentry.org/hgg-lora
thank you
how good is 7s/it for bs3+gradient+memefficient?
and how to prevent te training in kohya gui?
with --unetonly it still says te modules: number at the beginning of the training
Anonymous No.8628920 [Report]
new
>>8627272
>>8627272
>>8627272
Anonymous No.8628923 [Report] >>8628929
>>8628913
is that 1.5?
Anonymous No.8628925 [Report] >>8628931
Finally, a melty here in /hgg/
Anonymous No.8628929 [Report]
>>8628923
it's base noob v-pred
(masterpiece, very awa:1.2), absurdres, very aesthetic, ai-generated, shiny skin
Anonymous No.8628931 [Report]
>>8628925
think we're being reasonably civil so far
Anonymous No.8628943 [Report] >>8628950
>having to scour thatpervert for waifu lora pics because they aren't anywhere else
uegh
Anonymous No.8628950 [Report] >>8628955
>>8628943
could always bake your own
Anonymous No.8628955 [Report]
>>8628950
nigga
Anonymous No.8628970 [Report] >>8628972 >>8628976 >>8628981 >>8628989 >>8629052
Scraped 25k top score images
Filtered 1/10 with resolution requirements
Starting to clean it up
It's gonna be interesting
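The resolution pass can run off booru metadata alone, no image decoding needed. Rough sketch; the thresholds are made up, adjust to taste:

```python
def keep_post(width: int, height: int,
              min_pixels: int = 1024 * 1024, max_aspect: float = 2.5) -> bool:
    """Hypothetical filter: drop low-res posts and extreme aspect ratios.
    Danbooru's post JSON already carries image_width/image_height, so
    this pass is just arithmetic on metadata."""
    aspect = max(width, height) / min(width, height)
    return width * height >= min_pixels and aspect <= max_aspect

# e.g. filtered = [p for p in posts
#                  if keep_post(p["image_width"], p["image_height"])]
```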
Anonymous No.8628971 [Report] >>8628973 >>8628981
/hgg/ approved stabilizer lora... :prayge:
Anonymous No.8628972 [Report] >>8628973
>>8628970
for what
Anonymous No.8628973 [Report] >>8628978
>>8628972
>>8628971
but i think i gotta start with like 10k, top rated posts aren't THAT good lel
Anonymous No.8628976 [Report]
>>8628970
scrape the top 25k off civitai, that's where it's really at
Anonymous No.8628978 [Report]
>>8628973
To be serious though I think it'd probably turn out worse than manually cherry picking a few dozen images to train on.
Anonymous No.8628981 [Report] >>8628987
>>8628970
>>8628971
didn't some namefag already do this
it was okay style-wise but had heavy bias in compositions and backgrounds
Anonymous No.8628986 [Report] >>8629064 >>8629066 >>8629317
Is there such a thing as RL in the diffusion model world like in LLMs? In text gen, it is usually understood that a model needs to undergo RL before it's usable as a chatbot. But to me it feels like something like vpred never underwent such a step, or was undercooked if it did have that step. In the first place it's weird to call something like Noob a finetune. For LLMs finetunes do not add knowledge, pretraining is what bakes in knowledge.
Anonymous No.8628987 [Report]
>>8628981
a lot of people did it
but i mostly just want to experiment and see how my config does
definitely not today though, i have stuff to bake
Anonymous No.8628989 [Report] >>8628995
>>8628970
Hope you like big tits.
Anonymous No.8628995 [Report]
>>8628989
you don't?
Anonymous No.8629005 [Report]
>>8627269
missed you anon
Anonymous No.8629006 [Report] >>8629010 >>8629029
>browsing danbooru for inspo
>see images like https://danbooru.donmai.us/posts/9479769
>tfw we are still far away from a model that can do something like that without heavy guidance and handholding using various methods
Anonymous No.8629010 [Report] >>8629036
>>8629006
>muh butiful background
for me it's poses more complicated than 1girl standing, especially regarding stuff like feet not getting mashed into garbage, and context
Anonymous No.8629029 [Report] >>8629039
>>8629006
That's a dall-e 3 gen
Anonymous No.8629033 [Report] >>8632795
>>8627197
had my fun with this one, but would like to test it more to see if it needs a rebake
Anonymous No.8629036 [Report] >>8629040 >>8629044
>>8629010
You can actually prompt that stuff though and get lucky with your gens. But it is literally impossible to pure prompt a complex scene like that booru post. If you try to do multiple people, buildings, the cityscape, the heat haze effect (which does in fact have a tag), the perspective, you will never ever get an image like it unless you're god's chosen one with seeds.
Anonymous No.8629039 [Report]
>>8629029
I was a launch user of dalle 3. It could never do that, though perhaps it could come close with a lot of work, but it definitely won't be as coherent still. Maybe today's 4o could, idk, haven't tried that much.
Anonymous No.8629040 [Report] >>8629042
>>8629036
OK but why would I use an anime porn model trained on a dataset that is 90% monochrome backgrounds to make that kind of picture
Anonymous No.8629042 [Report]
>>8629040
You miss the point of my post. The point was the complexity and coherency. If I saw an incredibly complex porn image I might've posted it but I just happened to see that one and posted that instead.
Anonymous No.8629044 [Report]
>>8629036
Get a procedural generative cityscape addon for unreal engine and i2i or CN that shit. Cityscapes suck for AI anyway and will suck for a long time, because buildings are too manmade: precise and straight and mathematical. Nature is much more of an AI thing.
Anonymous No.8629052 [Report] >>8629255
>>8628970
>saving a bunch of fun looking artists along the way
hey that's nice
Anonymous No.8629061 [Report] >>8629065
>go on danbooru
>order by score
>see tons of garbage
Man.
Anonymous No.8629064 [Report]
>>8628986
i think people with resources that might know how to do it just dont give a shit about image gen beyond grifting for research funds with training on a imagenet tier dataset + whatever synthetic MJ garbage and claiming +1% improvement
in image gen the base models are so cucked that pretty much all "finetunes" have to be retrains, with the hope that training off them transfers at least some good knowledge, and while noob had quite a bit of gpus relative to everyone else they were also just amateur enthusiasts
Anonymous No.8629065 [Report]
>>8629061
I'm on pic 600 of the filtered dataset and I selected 96 for baking so far so it is how it is
Anonymous No.8629066 [Report]
>>8628986
>For LLMs finetunes do not add knowledge
That's just a retarded saying when it's the exact same process as finetuning, just more focused. "RLHF" is just overbaking on a particularly formatted (usually corposlop) dataset, same as any other finetuning.
Anonymous No.8629095 [Report] >>8629098
>https://civitai.com/models/1555532
>makes 3 loras and uses them together to try and stabilize vpred's colors
Jesus.
Anonymous No.8629097 [Report]
what's unusual about that
Anonymous No.8629098 [Report] >>8629106
>>8629095
Why don't people just use CFG++ samplers, it's not that hard.
Anonymous No.8629099 [Report]
rmao
Anonymous No.8629106 [Report]
>>8629098
Or just use nothing like >>8628764
Anonymous No.8629130 [Report] >>8629227 >>8629234
>4200 images
>only saved 317
lmao i overestimated danbooru
there's so much acoshit in the top scores
Anonymous No.8629227 [Report] >>8629231
>>8629130
lmao, i wanted to warn you about that but you were so confident i assumed you realized that
Anonymous No.8629231 [Report] >>8629250
>>8629227
hey i still have some dataset
i'll bake it and see
Anonymous No.8629234 [Report]
>>8629130
https://konachan.com/
Anonymous No.8629250 [Report]
>>8629231
actually i think danbooru's "rank" algo is much better than just score, but it changes over time and i don't think you can go back in time with it
https://danbooru.donmai.us/posts?d=1&tags=order%3Arank
you can also do this and slide through time yourself:
order:score age:>9day age:<12day
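The time-slice trick is easy to script against the JSON endpoint. Stdlib-only sketch; endpoint and tag syntax as I remember them, rate limits and auth not handled:

```python
import json
import urllib.parse
import urllib.request

def score_window_tags(min_age_days: int, max_age_days: int) -> str:
    # Slide a window through time so "order:score" approximates a
    # per-period ranking instead of an all-time one.
    return f"order:score age:>{min_age_days}day age:<{max_age_days}day"

def fetch_window(min_age_days: int, max_age_days: int, limit: int = 100):
    """Fetch one time-window page of top-scored posts as parsed JSON."""
    qs = urllib.parse.urlencode({
        "tags": score_window_tags(min_age_days, max_age_days),
        "limit": limit,
    })
    with urllib.request.urlopen(f"https://danbooru.donmai.us/posts.json?{qs}") as resp:
        return json.load(resp)
```

Step the window (0-3 days, 3-6, 6-9, ...) to walk backwards through time.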
Anonymous No.8629255 [Report] >>8629267
>>8629052
>looking at the styles via saucenao
>half of them are like 6 image artists
why is it always like that
Anonymous No.8629267 [Report] >>8629270
>>8629255
>turns out i saved a bunch of nyantcha and ratatat
oh god oh hell...
Anonymous No.8629269 [Report] >>8629289 >>8629326
am I retarded or are both regional prompter and forgecouple horribly broken on reforge
Anonymous No.8629270 [Report]
>>8629267
tasty tasty bbc?
Anonymous No.8629289 [Report]
>>8629269
Comfy here but I read some reForge complaints before about prompts leaking and stuff, starting about three months ago. Not sure if people don't use it enough to make a big deal out of it, or they noticed and stopped using that stuff because of it.
Anonymous No.8629317 [Report]
>>8628986
There's tons of papers on this. Just make sure you have your 4xH100s ready to go.
https://arxiv.org/abs/2401.12244v1
https://arxiv.org/abs/2311.13231
Anonymous No.8629326 [Report] >>8629333 >>8629335
>>8629269
They never worked to begin with and nobody ever actually used them effectively. People would rather generate generic 2girls and inpaint the character onto them than deal with the shit that is regional prompter. It's only being brought up as a cope after NovelAI solved the multi-character issue. Local needs a better solution.
Anonymous No.8629330 [Report]
>obscure ass artist i can't find anywhere but cripplebooru and r34
weird
Anonymous No.8629333 [Report]
>>8629326
works fine on comfy
Anonymous No.8629335 [Report]
>>8629326
regional prompter (didnt try forge couple, from what I understand it's sorta similar?) does work for very basic compositions/poses where it's easy to assign a character to one region of the canvas, for anything other than that it's practically unusable
Anonymous No.8629336 [Report] >>8629337 >>8629338
>the first epoch of the stabilizer immediately makes artist on base go from complete garbage to mostly working
What the FUCK is wrong with base vpred lmao
Anonymous No.8629337 [Report]
>>8629336
>What the FUCK is wrong with base vpred lmao
everything or so I have read
Anonymous No.8629338 [Report]
>>8629336
teh fuck is a stabilizer
Anonymous No.8629340 [Report]
anyway naaah it might help some but it still doesn't look fully proper to the artists, i prefer my shitmixes
i'll let the memers keep base vpred and go onto baking more shit, i have like 25 new datasets
Anonymous No.8629343 [Report]
Anonymous No.8629404 [Report] >>8629423
gyod damn kukumomo had like 63 styles in total
that's why loras are useful
Anonymous No.8629420 [Report]
Anonymous No.8629423 [Report]
>>8629404
Roropull also has two main styles, and the Noob version of him is fucked altogether. I'm baking one later to stabilize.
Anonymous No.8629446 [Report]
>>8626342
nice
Anonymous No.8629488 [Report]
Alright, I'll go back yet once again to base noobvpred and do some nice gens to share later
Anonymous No.8629504 [Report] >>8629536
>new dataset and the better lora setup completely fails to bake a chara i baked before properly
Huh.
Anonymous No.8629536 [Report] >>8629539
>>8629504
black magic
Anonymous No.8629539 [Report] >>8629645
>>8629536
Nah the character is white.
Anonymous No.8629645 [Report]
>>8629539
white magic
Anonymous No.8629653 [Report]
ancient chinese secret
Anonymous No.8629688 [Report]
black powder
Anonymous No.8629718 [Report] >>8629785 >>8629848
Knife ears are made for ojisans.
https://files.catbox.moe/h4s569.png
Anonymous No.8629785 [Report] >>8629832 >>8629835
>>8629718
>mature female in catbox
... >:(
Good quality as always, bro >:(
Anonymous No.8629787 [Report] >>8629822 >>8629848
yeah this is a young male thread keep it moving
https://files.catbox.moe/v2t2cw.png
Anonymous No.8629801 [Report] >>8629806 >>8629807 >>8629808
/hgg/ noob vpred stabilizer lora where, sirs... please...
Anonymous No.8629806 [Report]
>>8629801
First you must prove that you are able to do a nice good looking gen without any lora or snake oil at all
Anonymous No.8629807 [Report]
>>8629801
+1 sir
Anonymous No.8629808 [Report]
>>8629801
https://civitai.com/models/918037/artist-nyalianoob-10v-pred-05
Anonymous No.8629817 [Report] >>8629820
is booru down?
Anonymous No.8629820 [Report]
>>8629817
yes
the internet is down in israel
Anonymous No.8629822 [Report] >>8629823
>>8629787
is that a stomach bulge? does he have a massive dildo stuck back there?
Anonymous No.8629823 [Report]
>>8629822
he's pregnant
Anonymous No.8629832 [Report] >>8629841
>>8629785
A-anon, about that..
Anonymous No.8629835 [Report] >>8629841
>>8629785
Does he know?
Anonymous No.8629841 [Report] >>8629854
>>8629832
>>8629835
Oh yeah, I tend to forget. If it's a mature male, it's all good, keep at it bros.
Anonymous No.8629848 [Report] >>8629849 >>8629850 >>8629854 >>8629859 >>8629866
>>8629787
>>8629718
What do we think about the recent change of the otoko no ko tag for trap? Is it based or cringe?
Anonymous No.8629849 [Report] >>8629854 >>8629864
>>8629848
shoulda been femboy no one uses trap anymore
Anonymous No.8629850 [Report] >>8629853
>>8629848
don't really care
the admin also wanting to change paizuri to titfuck is stupid though
makes me think hes trying to make the site more mainstream friendly
Anonymous No.8629853 [Report]
>>8629850
>paizuri to titfuck
lmao fucking retarded
Anonymous No.8629854 [Report] >>8629864
>>8629841
Based.
>>8629848
>>8629849
Yeah it's dogshit. Kinda sprung on me when I was searching for some tags and I saw "trap" in the side suggestion with sooo many entries. Thought I was going nuts and never knew it existed and then realized it changed from otoko. I agree with you, anon. If you had/wanted to change it, femboy would have been better. Trap is ol' timey boomer terminology, speaking as a boomer.
Anonymous No.8629859 [Report]
>>8629848
how did this go through when more people disliked it then liked it
https://danbooru.donmai.us/bulk_update_requests/40541
Anonymous No.8629864 [Report]
>>8629849
>>8629854
I'm an oldfag and I still remember when we used to use trap to refer to them and I still like it. Besides that, it pisses me off, a little, to even think it was changed (unilaterally, 11 years ago) just because it offended a certain group of people. Also i think femboy is mostly associated with 3DPD so I don't like it. For me it's either trap or otokonoko. Let's see how this ends, I'll be lurking that thread on danbooru for a while. Good night anons.
Anonymous No.8629866 [Report] >>8629874
>>8629848
changing a jap term to some tranny-tainted westoid le meme shit is meh
Anonymous No.8629874 [Report] >>8629875
>>8629866
>tranny-tainted
The term predates woke culture by at least twenty years
Anonymous No.8629875 [Report] >>8629885
>>8629874
reading comprehension
Anonymous No.8629885 [Report] >>8629889
>>8629875
Communication goes both ways, if your message is not understood you can try rephrasing it.
Anonymous No.8629889 [Report] >>8630056
>>8629885
nta but even as an esl i understand what >tainted means in this context
Anonymous No.8629959 [Report] >>8630029 >>8630058
re: loras for already "working" artists, picrel's kukumomo base on the left and one of the lora epochs on the right
some of these inbuilt artists really are unfocused by default, even if they more or less work
i'll confirm with the roropull i already baked, and i might just go on danbooru and bake a bunch of these...
Anonymous No.8629967 [Report]
Anonymous No.8630029 [Report] >>8630058 >>8630121
>>8629959
Maybe that's less visible here stylewise but am I schizoing out when I say the lora looks higher res regarding lines and details than base in both these examples? Is that just the 512 training on the base model coming out?
Anonymous No.8630056 [Report]
>>8629889
The problem is that this logic is as stupid as getting banned on tv for using the okay symbol or getting hate because you like looking at rainbows after rain. I will NOT concede my language to retards who use it for their own shitty purposes, both English and Japanese.
Anonymous No.8630058 [Report] >>8630073 >>8630079
>>8629959
>>8630029
Just to be clear this is artist tag as the activation tag correct? I started noticing anatomy mistakes when I did this in my current bake and I'm wondering if that was the problem. The dogma was always to avoid retraining artist tags...
Anonymous No.8630073 [Report] >>8630075
>>8630058
Yeah, I baked them with their appropriate artist tag.
Anonymous No.8630075 [Report]
>>8630073
Hmm maybe it's just v/double v shitting on me as usual? I don't want to have to look through each epoch. What a pain.
Anonymous No.8630079 [Report]
>>8630058
Depends on the existing knowledge, it can make things better or worse. And either way makes the training go way faster.
Anonymous No.8630120 [Report] >>8630159 >>8630166
Taking the 102d training wheels off and swapping to noob1.0+29b has me all kinds of filtered, but the unpredictability of it has also been great.
I know it's been asked a thousand times and I'm sorry for asking again, but are there any loras I should be using, particularly for the weird contrast, that won't sterilize it back to being 102d again?
Anonymous No.8630121 [Report]
>>8630029
>Is that just the 512 training on the base model coming out?
That's a chroma thing, not a noob thing afaik? Probably just poor training settings for v-pred on their part
Anonymous No.8630159 [Report]
>>8630120
>particularly for the weird contrast
Use literally any good lora that was trained on vpred.
Anonymous No.8630166 [Report]
>>8630120
That's what loras do, the only difference is you get to choose which one to apply and get to limit its strength to the minimum required. Ideally pick one that somewhat matches the style you're going for.

Some artist prompts also stabilize in a similar manner, so if you have a mix you might not even need it.
Anonymous No.8630227 [Report] >>8630237 >>8630371 >>8630548
Styles

kukumomo - https://files.catbox.moe/txcwbb.safetensors
tedain - https://files.catbox.moe/odgdzu.safetensors
bee haji - https://files.catbox.moe/4tn7u3.safetensors
haiki (tegusu) - https://files.catbox.moe/nwu28y.safetensors
kei myre - https://files.catbox.moe/qqesvw.safetensors
roropull - https://files.catbox.moe/5nqfyo.safetensors
Anonymous No.8630237 [Report]
>>8630227
I just looked up haiki on danbooru. What a surprise.
Anonymous No.8630262 [Report] >>8630272 >>8630339
Also re:re:re:re: on choosing baking by steps or epochs.
A couple of graphs on the bunch of stuff I just baked, these are values for the epochs I chose as best (and converged).
Pick what you think looks stabler
Anonymous No.8630272 [Report] >>8630289 >>8630296
>>8630262
I guess you could go by steps per image but oh gee that graph is identical to epochs just use the damn things
Anonymous No.8630289 [Report] >>8630291
>>8630272
Well yeah, as long as your image counts are similar. Step count is a product of dataset size * repeats * epochs.
Anonymous No.8630291 [Report] >>8630296
>>8630289
You forgot about batch size.
Anonymous No.8630295 [Report]
>removed a couple of images from the dataset
>chara goes from total gigafailbake to kinda working???
It's still not as good as the old lora despite a bigger dataset but I'm beginning to think maybe this character in particular doesn't benefit from shuffling captions
Anonymous No.8630296 [Report] >>8630306
>>8630291
Batch size doesn't raise the actual step count, it divides it: each step just processes two or more images at once. But it's another thing to consider.

>>8630272
Train one style on 50 images and another on 500. In this case step count will tell you how much you're actually baking, while epochs will make the latter lora take ten times as long.
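The accounting being argued about, sketched out. Kohya-style counting as I understand it, so treat the rounding as an assumption:

```python
from math import ceil

def total_steps(images: int, repeats: int, epochs: int, batch_size: int = 1) -> int:
    # One optimizer step consumes batch_size images, so larger batches
    # shrink the step count without changing how many images are seen.
    steps_per_epoch = ceil(images * repeats / batch_size)
    return steps_per_epoch * epochs

total_steps(50, 1, 10)    # 500 steps
total_steps(500, 1, 10)   # 5000 steps: same epochs, ten times the baking
```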
Anonymous No.8630306 [Report] >>8630325
>>8630296
Anon what the fuck do you think these loras are
It literally shows you the random inconsistent step count jumps that make it a shit metric that aren't there in epochs
What else do you need if not data
Anonymous No.8630320 [Report] >>8630326 >>8630387
sloppa dump time again
https://files.catbox.moe/al7nhk.png
not so lewd
https://files.catbox.moe/k9mb4i.jpg
https://files.catbox.moe/ibig97.jpg
>both threads dead or consumed by schizophrenia
did it just die out or people moved elsewhere?
Anonymous No.8630325 [Report] >>8630629
>>8630306
I don't have the time or willpower to examine and explain in depth exactly how or why you're retarded.
Look at a loss chart. A regular loss chart, granulated by steps.
Your epochs? Those are all at fixed step intervals. You can actually figure out where and when those epochs exist on that loss chart. Now, if you take a real good look, you'll realize that there are tons of peaks and valleys on that loss chart that occur on steps that are NOT shared by epochs starting/ending. If you take an even closer look, you might even notice that you can even predict spikes/dropoffs by step count.
How very incredibly curious that is.
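If you log per-step loss, those peaks and valleys are scriptable too. Crude sketch; the window size and sigma threshold are arbitrary picks:

```python
import statistics

def loss_spikes(losses, window=50, k=3.0):
    # Flag steps whose loss jumps more than k sigma above the mean of
    # the trailing window; epoch boundaries play no part in this.
    spikes = []
    for i in range(window, len(losses)):
        trail = losses[i - window:i]
        mu = statistics.mean(trail)
        sd = statistics.pstdev(trail) or 1e-9  # guard against a flat window
        if losses[i] > mu + k * sd:
            spikes.append(i)
    return spikes
```

Run it over the logged curve and compare flagged steps against your epoch boundaries yourself.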
Anonymous No.8630326 [Report]
>>8630320
I'm doing /u/ gens at the moment
Anonymous No.8630328 [Report]
>i don't have the time to look at a simple chart
so shut up nigga lol
Anonymous No.8630339 [Report]
>>8630262
>Also re:re:re:re: on choosing baking by steps or epochs.
what are you people doing in this thread
Anonymous No.8630340 [Report]
>just look at this chart!
>the chart is meaningless and completely misses the fucking point
waow
Anonymous No.8630344 [Report] >>8630350
You are both retarded. Neither the epochs nor steps reflect how ready the lora is.
Anonymous No.8630345 [Report] >>8630360
>a consistent pattern is meaningless because <schizoshit>
Anonymous No.8630346 [Report]
Heeeeeere we goooooooo
Anonymous No.8630350 [Report] >>8630360 >>8630435
>>8630344
the only thing loss is even useful for is checking for NaNs
we need to bring gans back...
Anonymous No.8630360 [Report] >>8630366 >>8630435 >>8630642
>>8630345
that you think occurrence is relevant while completely ignoring the metric that matters speaks volumes of how retarded you are.
Gradients don't give a fuck about occurrence. Gradients do whatever the fuck gradients want whenever the fuck they want. And they operate on steps.
>>8630350
I want civit shitbakers to fucking leave.
Loss only gets harped on as a meaningless value because it varies by model, dataset and what you're doing. Which is to say the precise numbers are meaningless without a lot of context. But when you contextualize it within a chart and look at how its values move along it, it's no longer meaningless. You can actually see what is happening.
Anonymous No.8630362 [Report]
>headcanon
>nooo hard data is not relevant
It's okay bro, speaking big words will make you a big man.
Anonymous No.8630366 [Report]
>>8630360
>I want civit shitbakers to fucking leave.
what's stopping you?
Anonymous No.8630368 [Report] >>8630446 >>8630476
what is the best updated model right now?
Anonymous No.8630371 [Report] >>8630374
>>8630227
>kukumomo 473
the AI already knows the style tho?
Anonymous No.8630374 [Report] >>8630382
>>8630371
read up bwo
Anonymous No.8630382 [Report]
>>8630374
umm nyo

}
Anonymous No.8630387 [Report] >>8630486
>>8630320
stay in /vn/ bucko
Anonymous No.8630435 [Report]
>>8630350
On big pretrain runs when you can't actually overfit the model, loss, besides being an indicator of training going smoothly (pic related), may be a pretty useful metric. If you are seeing shit like this, you can immediately tell that something is fucked up.
One shouldn't really use it as a metric for tiny diffusion training runs at all, in any way.
>>8630360
>Gradients
Why don't you actually try to look at them instead?
Anonymous No.8630446 [Report]
>>8630368
helps sars
Anonymous No.8630454 [Report]
wow i hate inbuilt artists now!
Anonymous No.8630461 [Report] >>8630467 >>8630470 >>8630484 >>8630489 >>8630513 >>8630563 >>8630695 >>8630917
uoh cunny
https://files.catbox.moe/cj1okn.webp
decided to train a new version of this lora
lora and toml is here
https://mega.nz/folder/47Yj3ZIS#klaoBwVZI_u5DbjmCjkqRQ/folder/Fm4RUbiT
alk didn't train at all i'll probably need to inspect my dataset for that artist
i was thinking of trying out that lora finetune extract but this lora took six hours on my normal settings for 1 epoch with the amount of images and i cant imagine how slow it would be with everything i need to do to make finetuning work on 8gb of vram without crashing immediately
Anonymous No.8630467 [Report] >>8630474
>>8630461
cunnychad.. what is the best model right now? I am using naiXLVpred102d_custom
Anonymous No.8630470 [Report] >>8630474
>>8630461
>finetuning work on 8gb
you probably won't make it without modifying the code
Anonymous No.8630473 [Report]
What's the meta for finetuning anyway? Last I tried it on ezscripts it just spat out a buncha errors, fork or no fork.
Anonymous No.8630474 [Report] >>8630483
>>8630467
i switch between r3mix and my custom shitmixes
for this x\y plot i used 102d_final because i forgot to switch models but they're similar enough it doesn't matter
>>8630470
yeah i'll probably need to do some esoteric shit so im not planning on it any time soon
Anonymous No.8630476 [Report]
>>8630368
>best updated model
nai v4.5
>local
noob v-pred 1.0 and shitmixes
Anonymous No.8630483 [Report] >>8630488 >>8630566
>>8630474
if you share the dataset, i'm willing to let it bake for a few hours on my 3090
Anonymous No.8630484 [Report] >>8630488
>>8630461
.... bakariso?
Anonymous No.8630486 [Report] >>8630490
>>8630387
>no image
you can stay here and I'll go back, deal?
Anonymous No.8630488 [Report]
>>8630483
sure let me zip it up
>>8630484
who?
Anonymous No.8630489 [Report] >>8630505
>>8630461
Uh I just baked an ohgnokuni lora myself, but I guess that yours is more efficient
Anonymous No.8630490 [Report] >>8630495
>>8630486
sounds like a win win to me!
Anonymous No.8630495 [Report] >>8630499
>>8630490
>trapped myself in /vn/
wait fuck NOOOOO
Anonymous No.8630499 [Report]
>>8630495
oh myonyonyo
Anonymous No.8630500 [Report] >>8630503 >>8630507
>higher res makes weird necks
>but regular res doesn't really make the artist look right
it's...
Anonymous No.8630503 [Report]
>>8630500
over.
Anonymous No.8630505 [Report]
>>8630489
i'm pretty happy with the style replication this go around but a more focused lora would probably still be better
Anonymous No.8630507 [Report]
>>8630500
nai.
Anonymous No.8630511 [Report]
uvh i guess i could finally try going back to highresmeme
Anonymous No.8630513 [Report] >>8630517
>>8630461
Why bake all of them into a single lora? I will not remember what's included.
Anonymous No.8630517 [Report] >>8630522
>>8630513
i will
also doing retarded shit is fun
Anonymous No.8630522 [Report] >>8630526
>>8630517
Whatever, I'll just copy it six times and name them after each artist trigger. Good job on making it so small.
Anonymous No.8630526 [Report] >>8630536
>>8630522
thats because it's only 8dim
you don't really need more for 99% of lora applications
Anonymous No.8630528 [Report] >>8630541
>double gen time
>with less detail
ugh maybe that stabilizer lora from anon needs another go
Anonymous No.8630536 [Report]
>>8630526
Not for styles anyway, just for overly-detailed gachaslut clothing, guns, cars, etc. I know, but most people didn't agree last time it was brought up.
Anonymous No.8630541 [Report] >>8630554
>>8630528
gen time cannot be the lora's fault beyond the extra vram cost, equal to its filesize
Anonymous No.8630548 [Report] >>8630554
>>8630227
do you use any tags for tedain?
Anonymous No.8630554 [Report]
>>8630541
nyo i mean that highres is that, maybe i should try the lora instead
it's still not that good though, eh, i'll have to experiment a bit with the highres
>>8630548
ye the tags on the left is what i trained with
u can see the top tags in webui too btw
that tedain is mostly just a stabilizer though, base tedain is okayish enough
Anonymous No.8630563 [Report] >>8630564
>>8630461
Thanks. I will be using this exclusively for hags, fyi.
Anonymous No.8630564 [Report]
>>8630563
i remember liking feral lemma for hags back in the day
have fun
Anonymous No.8630566 [Report] >>8630594
>>8630483
https://mega.nz/file/o7pDUSDR#vdl2j9aPy257eVBHOMBUqCu9crck0NBzYyXR9b2ocHI
Anonymous No.8630577 [Report] >>8630587
You know it's kinda funny but the weird anatomy melties from highres mostly happen in the most basic prompts like 1girl standing and portraits, less so in actual sex gens
I kinda wonder why
Anonymous No.8630587 [Report] >>8630599
>>8630577
There is probably more "stuff" for the model to fill the picture with without needing to hallucinate bifurcated torsos.
Anonymous No.8630594 [Report] >>8630597
>>8630566
sweet, but you didn't need to include .npz files
also that's a lot of lowres images
Anonymous No.8630597 [Report] >>8630734
>>8630594
i knew i was forgetting something
i will blame it on not having had breakfast yet
also yeah i pretty much just didn't bother with filesize other than with lokulo, who i did put in the effort to upscale
there's probably some shitty 200x500 images in there and also some old bad tagging experiments from months ago
Anonymous No.8630599 [Report]
>>8630587
Most likely, I mostly saw that with necks and torsos.
Anonymous No.8630612 [Report] >>8630614
eh naah highresfix is not that good for my purposes
oh well
i'll just gacha more
Anonymous No.8630614 [Report] >>8630617
>>8630612
>eh naah highresfix is not that good for my purposes
what are you trying to do?
Anonymous No.8630617 [Report] >>8630621
>>8630614
Well, I just don't think the clarity is nearly the same as just genning higher res. I can stand the occasional body weirdness for not having to fix the entire image.
Anonymous No.8630621 [Report]
>>8630617
Oh, definitely agree
Anonymous No.8630625 [Report] >>8630632 >>8630647
Scrapin'
Anonymous No.8630629 [Report]
>>8630325
The hallmark of all skilled people is making a complicated thing look/sound simple.
Anonymous No.8630632 [Report] >>8630633
>>8630625
you could get it much faster from huggingface with this https://github.com/deepghs/cheesechaser
Anonymous No.8630633 [Report] >>8630635 >>8630653
>>8630632
This is for scraping top images from >image artists, I don't think that'd be doable on that?
Anonymous No.8630635 [Report]
>>8630633
you can scrape just the image ids and then pass the ids to the downloader
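a rough sketch of that two-step flow: pull just the post ids from danbooru's /posts.json endpoint, then hand them to cheesechaser, which grabs the actual files from the huggingface mirrors. the endpoint parameters are from danbooru's API docs; the cheesechaser class/method names are from its README and may differ in your version, so treat the commented part as an assumption

```python
# Sketch: scrape post ids only, then feed them to a bulk downloader.
from urllib.parse import urlencode

def posts_url(tags, limit=200, page=1):
    """Build a danbooru /posts.json query that returns only post ids."""
    qs = urlencode({"tags": tags, "limit": limit, "page": page, "only": "id"})
    return f"https://danbooru.donmai.us/posts.json?{qs}"

def extract_ids(posts):
    """Keep only the numeric ids from a /posts.json response."""
    return [p["id"] for p in posts if "id" in p]

# usage (needs network + hf access; class name per cheesechaser's README,
# not verified here):
#   import json, urllib.request
#   with urllib.request.urlopen(posts_url("some_artist")) as r:
#       ids = extract_ids(json.load(r))
#   from cheesechaser.datapool import DanbooruNewestDataPool
#   DanbooruNewestDataPool().batch_download_to_directory(
#       resource_ids=ids, dst_dir="dataset/")
```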
Anonymous No.8630642 [Report] >>8630732
>>8630360
>you are too stupid to understand
>you are just dumb since you don't agree
>you're wrong and I won't elaborate further
>I'm the smartest person in the room
>if you disagree you must be [boogeyman]
When the fuck will you retards grow up? I'm so sick of this boring tripe every fucking thread on every fucking board. Just once I'd like someone to actually expand upon their knowledge and teach someone something rather than insist upon their superiority without proof. Fucking hell.
Anonymous No.8630647 [Report] >>8630653
>>8630625
I made a lora for him. Do you really need 20k? 100 hand picked images was fine but my lora might be shit.
Anonymous No.8630653 [Report] >>8630658
>>8630647
lmao >>8630633
this is just my retarded way of not having to click on 20000 artists on danbooru to see what i want to bake
different artist images for previews
Anonymous No.8630658 [Report] >>8630662
>>8630653
feels like #1 top image is bad for that because what if the image is old as shit and you prefer their newer style or vice-versa
Anonymous No.8630662 [Report]
>>8630658
i mean yeah but it beats clicking a gazillion images or scraping and having to go through multiples
it's not like i'm gonna fomo artistidontknow4233 if i bake artistidontknow9564 instead
i guess it could be a filter like <check latest 25 images and take the one with the highest score> but eh that'd probably add overhead and shit
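that filter is simple enough to sketch: given an artist's posts newest-first (as danbooru returns them), look at only the latest N and keep the best-scored one. toy code, assuming posts are dicts with `id` and `score` keys

```python
# Preview picker: highest-scored post among an artist's newest N.
def pick_preview(posts, latest_n=25):
    """posts is newest-first; returns the best-scored of the latest N, or None."""
    recent = posts[:latest_n]
    if not recent:
        return None
    return max(recent, key=lambda p: p.get("score", 0))
```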
Anonymous No.8630664 [Report] >>8630667 >>8630669 >>8630670
Uhh, that link is down bwo. Which scraper do you guys use?
Anonymous No.8630667 [Report] >>8630671
>>8630664
grabber
Anonymous No.8630669 [Report] >>8630671
>>8630664
picrel was something gpt cooked but for regular stuff i've always used grabber (with a lot of filters)
and czkawka for first round cleaning
Anonymous No.8630670 [Report] >>8630671
>>8630664
I still use Grabber. Seems to struggle with Danbooru lately so I grab Gelbooru instead.
Anonymous No.8630671 [Report]
>>8630667
>>8630669
>>8630670
Hmm alright thanks.
Anonymous No.8630691 [Report] >>8630705
I need a lora that's capable of removing banding without affecting style...
Anonymous No.8630695 [Report] >>8630701
>>8630461
Based Harada give them extremely wide hips.
Anonymous No.8630701 [Report]
>>8630695
could probably punt a soccer ball through there
Anonymous No.8630705 [Report] >>8630746
>>8630691
>banding
i call that sovl
unironically tho do show an example, i'm wondering how you're getting that
Anonymous No.8630709 [Report] >>8630746
There will be no picture. He is a schizo.
Anonymous No.8630712 [Report]
what is edm2?
Anonymous No.8630732 [Report]
>>8630642
there's not really anything to prove, it's just how loss works on a conceptual level. it's measuring the pixel space difference between the ai's denoised training image and the original training image at whatever timestep. that's great if you're training a model from the ground up and you're starting off with esoteric blobs of colors because lower loss is gonna be better. i think that does also apply to styles to a certain degree.

but if you're training something like a character lora then it's almost entirely useless as a metric because it's still measuring the pixel space difference. so the loss might be going down because it's learning your character, or it might be going down because it's learning whatever skewed style is in your character's training data, or it might be going down because it's learning to associate certain words in your prompt with specific compositions and poses, etc. it just doesn't mean anything at that point.
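the point being made is that the per-step loss is just a mean squared error between two tensors, so a lower number never tells you *which* thing got learned. a toy numpy illustration (stand-in arrays, no real model or trainer involved):

```python
import numpy as np

def diffusion_mse(pred, target):
    """Per-step training loss as lora trainers report it, conceptually: plain MSE."""
    return float(np.mean((pred - target) ** 2))

rng = np.random.default_rng(0)
target = rng.normal(size=(4, 8, 8))                    # stand-in "original" latent
bad    = target + rng.normal(0.0, 0.5, size=(4, 8, 8))  # far-off prediction
better = target + rng.normal(0.0, 0.1, size=(4, 8, 8))  # closer prediction
# the closer prediction scores lower, no matter whether the improvement
# came from character, style, or composition associations
```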
Anonymous No.8630734 [Report] >>8630736
>>8630597
alright i forgot to change lr scheduler settings
Anonymous No.8630736 [Report]
>>8630734
sovl
Anonymous No.8630737 [Report]
>image with longneck due to highres
>lassoed the entire head
>moved it down hard, mild hand painting
>denoise at 0.5
>it just werks
I forget these models aren't as shit as 1.5
Anonymous No.8630746 [Report] >>8630751
>>8630705
https://files.catbox.moe/t9xqks.png
Ok, here is a minimal example with no loras or snake oils, it's pretty egregious here though it's visible on the other seeds too. Loras do help. But there are some inbuilt styles I actually want to use which loras mess with so I really just want a stabilizer.

>>8630709
Bro we all know noob has issues, this is just one of the lesser talked about.
Anonymous No.8630750 [Report]
ctrl+f stabilizer
Anonymous No.8630751 [Report] >>8630762
>>8630746
I was just trying to bait you into telling me wtf banding is.
Anonymous No.8630762 [Report] >>8630771
>>8630751
I mean you could google it. Not some made up nuterm. It's just the artifact where shading manifests as visible bands, you can tell from the image I posted it's pretty visible there.
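for the anon asking: the effect is easy to reproduce synthetically. take a smooth gradient and quantize it down to a handful of tone levels, and the shading turns into discrete stripes instead of a continuous ramp (toy numpy demo, not anything model-specific)

```python
import numpy as np

def quantize(gradient, levels):
    """Posterize a [0, 1] gradient down to `levels` distinct tones."""
    return np.round(gradient * (levels - 1)) / (levels - 1)

ramp = np.linspace(0.0, 1.0, 256)   # smooth shading
banded = quantize(ramp, 8)          # only 8 tones -> visible bands
```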
Anonymous No.8630771 [Report] >>8630772
>>8630762
Are you telling those kino lines on some of my gens aren't meant to be there?
Anonymous No.8630772 [Report] >>8630778
>>8630771
Yeah, depending on the style. Wouldn't you agree that it'd be nice if you could have control over effects like these just by prompting? Actually, it's interesting that there is a tag for "banding", but it doesn't really work.
Anonymous No.8630778 [Report]
>>8630772
>Wouldn't you agree that it'd be nice if you could have control over effects like these just by prompting?
Now that you mention, yes, sometimes I like them, some others I hate them. I wasn't even aware that was a thing
Anonymous No.8630781 [Report] >>8630790
>stable diffusion
>isn't stable
Anonymous No.8630790 [Report]
>>8630781
unstable diffusion
Anonymous No.8630794 [Report]
Alright, time to move

>>>8630793
>>>8630793
>>>8630793
Anonymous No.8630917 [Report]
>>8630461
TOT
Anonymous No.8632576 [Report]
>>8627899
Sorry i'm late, I only saw your message now and the download isn't available anymore D:
Anonymous No.8632639 [Report]
PAGE SKIBIDI BUMP
Anonymous No.8632795 [Report] >>8633364
>>8629033
box?
Anonymous No.8633364 [Report] >>8633615
>>8632795
it has stealth metadata
Anonymous No.8633615 [Report]
>>8633364
I see, so it's a vpred model with a non vpred lora, or am I tripping?
Anonymous No.8633827 [Report] >>8633834
PAGE 11 BUMP
Anonymous No.8633834 [Report]
>>8633827
STOP POSTING HERE YOU RETARDS