Not this shit again edition
Previous Thread: >>8613148

>LOCAL UI
reForge: https://github.com/Panchovix/stable-diffusion-webui-reForge
Comfy: https://github.com/comfyanonymous/ComfyUI

>RESOURCES
Wiki: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki | https://comfyanonymous.github.io/ComfyUI_examples
Training: https://rentry.org/59xed3 | https://github.com/derrian-distro/LoRA_Easy_Training_Scripts | https://github.com/bmaltais/kohya_ss | https://github.com/Nerogar/OneTrainer
Tags: https://danbooru.donmai.us/wiki_pages/tag_groups | https://danbooru.donmai.us/related_tag
ControlNet: https://rentry.org/dummycontrolnet | https://civitai.com/models/136070
IOPaint (LamaCleaner): https://www.iopaint.com/install
Upscalers: https://openmodeldb.info
Booru: https://aibooru.online
4chanX Catbox/NAI prompt userscript: https://rentry.org/hdgcb
Illustrious-related: https://rentry.org/illustrious_loras_n_stuff
Useful Nodes/Extensions: https://rentry.org/8csaevw5
OP Template/Logo: https://rentry.org/hgg-op/edit | https://files.catbox.moe/om5a99.png
>>8624388
hey bwo, what resolutions do you train on? saw in the last thread some anons saying you recommend more than 1mp

>>8624386 (OP)
Could've just let it die, there doesn't seem to be much difference to /hdg/ these past few days.

>>8624386 (OP)
Thread moves faster at page 10 than it does at page 1.

>>8624388
>[masterpiece, best quality::0.6]
Out of curiosity, what's this for? Do you think those tags negatively impact finer details?

>>8624401
I blame all of you for this
Anyone have a plan of attack for more consistent and better backgrounds? I want to make a sequence of images of 2 characters on a bed and have the camera move from shot to shot. Is there a way to keep the windows facing the right way, the nightstand staying to the right of the bed, the mattress staying the same color, etc.?
I'm open to any bat-shit theories or even the use of 3d modeling to solve it.
>>8624403
It's placebo that he can't explain. Everyone's doing something different with quality tags anyway.

>>8624411
The best way is to sketch and inpaint, anon. There's no getting around the fact that you should be learning to draw at this point.

>>8624411
I have tried many things to get consistent backgrounds. The most reliable way to do it is controlnet, but, due to the nature of all of this, while you get some sort of consistency on the objects, the shading and overall colours will inevitably vary. You'll need to correct them using an external tool like PS.
>>8624399
nvm I take it back

>>8624411
>backgrounds
ishiggydiggy

>>8624493
Catbox? Did you add the camera effect after? Very rarely does it come out that clean.
>>8624411
Even if you get the locations right, you're unlikely to get the exact same design of every piece of furniture. Maybe if your checkpoint/loras are really overfit, or if you give each object a long and detailed prompt. You can use regional prompter with very fine masks to tell it exactly where you want every piece of furniture. Combine with controlnet of some very rough geometry: edges of the room, window frame, a box for the bedside table, etc.
Just guessing here, I've only done this for characters not backgrounds. Might give it a try later.
[image: 07b80d14f45d845f10072015744bb78d]
>>8624493
>apse
sad it doesn't say arse

>>8624605
That's a surprisingly human-looking black person.

>>8624540
The spam has been quite constant over there for some reason no one even remembers. Annoying, but funny how the lack of care from any moderation is the only thing really going for the trolls.

>>8624607
I got tired of self insert pov and the 1boys look less rapey when you don't prompt ntr or giga penis.

>>8624612
I added those in post and then inpainted them a little, sadly
Here is the box anyway if you want it
>https://files.catbox.moe/dw0ht9.png

>>8624606
Got a little lazy fixing the text on both images
>>8624612
yeah but there's literally no point having sex without stomach bulge

>>8624605
based contrast enjoyer

>>8624386 (OP)
isn't that pic a bit too risky?

Alright, I've got a new AI rig all set up and ready to train some Loras. I have some datasets ready to go. What's a good VPRED config I could start with, and which trainer do people use these days?

>>8624637
It is fine and there is nothing wrong with it saar

>>8624656
ez scripts. There's a couple configs posted last thread I think.
[image: ad791a462a006ff8563d3f96563ffbd3]
In order to get the best possible lora, you'll have to use sdg and then autistically rebake until you get lr and training time just right
[image: 3f33046fd60f04be686a652bc8445d49]
>>8624720
>sdg
stochastic descent gradient?

Can someone tell me why, when using Regional Prompter, I have a prompt that works well, but whenever I erase a tag or two from one of the regions, the image composition just breaks entirely and gives me anatomical horrors until I put those tags back?
Additionally, can anyone give me tips for reg prompter? I feel like the original repo itself is kinda shit at explaining things.

Holy kek, anyone tried the FreSca node in comfy? If you set scale_low to something between 0.7-0.8, it completely gets rid of fried colors on noob
[image: 508def998420b7c4a17244dbf7e3eacf]
>>8624788
Exact same seed, euler a, cfg 5
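For anyone wondering what the FreSca tip above is doing, it's a frequency-domain rescale of the latent. This is only a conceptual sketch of that kind of low/high band scaling (not the actual node's code; the cutoff value and parameter names here are my assumptions):

```python
import numpy as np

def frequency_scale(latent, scale_low=0.75, scale_high=1.0, cutoff=4):
    # shift the 2D spectrum so low frequencies sit in the center,
    # damp that band by scale_low, leave the rest at scale_high
    f = np.fft.fftshift(np.fft.fft2(latent))
    h, w = latent.shape
    yy, xx = np.ogrid[:h, :w]
    low_band = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= cutoff ** 2
    f = np.where(low_band, f * scale_low, f * scale_high)
    return np.fft.ifft2(np.fft.ifftshift(f)).real
```

Damping the low band knocks down broad color casts (the "fried colors") while leaving fine detail mostly alone, which would line up with the scale_low 0.7-0.8 observation.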
cloudflare is down; the end is nigh
>>8624797
Huh, explains why half of the sites are down for me...
https://files.catbox.moe/a19c3q.png
[image: 1c2787110e9f2bf2e9f965486b9563d6]
it's starting to get good at epoch 8 i guess, this is a 22-step 1152x2048 base res gen
First stuff i prompted, it's super vanilla but i kinda like it. Any ideas or suggestions to make better stuff?

>>8624841
And... Do we have to guess what model this is?

>>8624842
1toehoe, 1boy, dark skin, very dark skin, huge penis, large penis, sagging testicles, veiny penis, penis over eyes, squatting, spread legs
[image: 39cc8852fefd2cf47379273939b6b807]
>>8624841
noob vpred 1.0 for comparison

>>8624851
see >>8624232

>>8624842
Just do what you want to do, man. The entire point of making your own porn is that you can make your own porn.

>>8624868
nu-uh, the real point is putting increasingly larger penises inside toehoes
I made a small userscript to save and retrieve prompts or artists combos on-the-fly.
>>8624892
isn't that feature already built into a1111?
If this is the real non-schizo thread, can anyone check >>8624866 and >>8624910? It just doesn't seem right. I'm not using a lora or anything. Looks especially suspicious when furry models are doing better.
https://files.catbox.moe/ircmls.png

>>8624892
What do you mean? What about this is different from what infinite image browsing can do?
[image: 44dfb1fc777a8faef3fc32a5e8874f1d]
>>8624918
Is this how it's supposed to look?

>>8624932
he asked about noob vpred, not 102d shitmix

>>8624947
What newfag is genning on noob vpred without a lora? He should just pick up the shitmix if he's mad.

>>8624951
he wants to check if his setup is correct, retard
>duty calls
>>8624895
Nope, otherwise I wouldn't have made it.

>>8624925
Just a fast way to quickly store prompts, or any text you want really, and retrieve it. It's always there ready for when you need it.

>>8624965
>he should use 102d
>he wants to check if his setup is correct
How are these things mutually exclusive, you brainlet? Good luck trying to get anyone to help you though.
>tfw the power of generalization means you can use the \(cosplay\) token with any character the model knows and it kind of works, even if the real character_(cosplay) tag doesn't exist or has low amount of samples
>you can also create tons of pokemon cosplay with this and pokemon_ears, pokemon_bodysuit kinds of tags
>>8624932
That looks way, way better. Is vpred just unusable without loras, is that the meme? I legitimately don't know, anon. I'm pretty new, that's why I prefaced it like that.

>>8624967
Oh okay, thanks for the demo. How bloated does this get when you have lots of artist combos and such? I like infinite image browsing because I can just search for a pic in my folder then copy the metadata easily.

>>8624976
The point of my post was to say don't use negatives, and use the negpip extension for the things you really need. Then to point out that new people shouldn't be using base noob since it's hard to use. 102d is far simpler, and if that's what those artists actually look like then you're better off using 102d. Of course this simple logic attracts console war retards, unfortunately.

>>8624980
Well, you'd have to scroll through a list of all the prompts you saved, but I don't plan on saving every single prompt.
You just gave me a really good idea though: a search bar.

>>8624436
don't use them :3

>>8624999
>Of course this simple logic attracts console war retards, unfortunately
now this is a strawman. you just gave him a gen which is mostly unrelated to his request without much of an explanation, doubling down on not doing what he asked when questioned directly. pointing this out doesn't make anyone a console war retard.

>>8624976
>Is vpred just unusable without loras, is that the meme?
it is, ughh, usable, but like the other anon said it requires some wrangling

>>8624999
No, it's not a strawman, but your intense desire to start a fight where there was none. You are the one trying to defend yourself, since you butted in and fucked up, an unforced error.
for me it's um hmmm not genning
>>8624985
>negpip
Interesting. What else do the proompters on the cutting edge use these days?

>>8625008
I like using cd tuner with saturation2 set to 1 on my img2img pass for color corrections.

>>8625005
i will gen when i am good and ready.
[image: 323af98c6eb1a4321bfc801ce83d9bd6]
>>8624985
I'm sorry, I didn't realize there was metadata in the picture, so that all went over my head. Also, what makes base noob hard to use?
desu I followed the prompt format and sampler thing on the page, so i figured it would be fine. this issue wasn't present with eps when I tried that, so I really figured something was broken.
I'd still like someone to use the catbox and generate that same image in vpred just to see if it's fucked or not

>>8625013
desu half the time i can be bothered to gen nowadays it's non-/h/ stuff
regular sexo is the most boring stuff to gen
[image: 36ea7061021f057df94a31515244fcf1]
>haven't pulled in ages, like literally since Flux came out
>pull
>try a card just to see if anything was messed up
>the gen comes out exactly the same
Sweet.
>>8625020
are you training with cloud gpus or locally?
[image: fba5042b9aabd491ef1f8fe1628d0cdc]
>>8625023
locally on a 3090

>>8625025
tempted to train on high res now myself. gonna prep some in my new dataset

>>8625020
>>8625025
what artist/s are these?

>>8624806
the girl looks like fate testarossa
[image: 28a15370a30d18de66f410e23a286dae]
>>8625017
>regular sexo is the most boring stuff to gen
that's why I gen girls kissing girls
[image: caa8baf92804982de108fce3a2cd5269]
pic uses zero negative but for some reason the style wildly varies between seeds. also there are no tponynai images in the dataset, i swear

>>8625029
gonna be tough without a second gpu to test things on

>>8625032
>what artist/s are these?
doesn't matter because they aren't recognized lol. like i said, for some reason the style changes a lot if you change the seed
So is there any news about whatever the fuck the NoobAI guys are doing, or is everyone still stuck using the base V-pred/EPS model and shitmerges? I tried the v29 shitmerge but don't fuck with it much.

>>8624985
I just realized my workflow already has negpip and I just never took advantage of it because I stole it and didn't bother looking into what everything did kek.

>>8625037
>doesn't matter because they aren't recognized
why hide the artist name? maybe i just want to see their original work

>>8625037
>gonna be tough without a second gpu to test things on
luckily I got 2
[image: f4f8d2db8ae9f4d7e028b5575d6fd4bb]
>>8625037
I think it may have started overfitting on the train set... Regardless, I'll try to extract a 1536x lora, maybe it'll be useful for upscaling.

>>8625044
if you really want it, then it's gishiki_(gshk), and for the second one it's arsenixc, void_0, plus a bunch of lewd artists who don't draw bgs
>mfw base res gen gives this: file.png: File too large (file: 4.11 MB, max: 4 MB).
>>8625052
thanks bwo
also, are you the lion optimizer anon from pony days? i noticed you posted a sparkle picture
Just tried out negpip. The example with the gothic dress really works. With that said, when I tried converting my negative prompt from a real world complex gen I had, the outputs were worse and adhered to the prompt less. Maybe the weights need to be adjusted. Will experiment more.
you may be shocked if you learnt of all of my identities...
>>8625058
The last time some anon tried to sell negpip, it failed miserably. If I were you I wouldn't bother with it, just keep your regular negatives to a minimum.

>>8625059
It's good that you've at least toned down the bullshit elitist shitposting from pony days.

>>8625058
The point is that you are not supposed to be using any negatives at all, and negpip only for the specific things that you don't want to show up but are "embedded" into other tags.
>>8624319
Incredibly based.

>>8625059
plot twist: you're gay.

>>8625060
I mean, the fact that it can do something normally impossible, like subtract concepts that were previously impossible to subtract, would seem like it has potential, but there may be a learning curve to using it for highly complicated prompts when you are used to the negative prompt.

>>8625064
Yeah, that's what I used negatives for, so now I am testing moving the negative into the positive with negative weight, as instructed. In my negative are both quality tags and specific things I was trying to subtract, i.e. latex, shiny clothes from bodysuit (in the positive).
If you have a problem with the idea of using negative quality tags: I do still use them, because some of the artists I use are (likely) associated with their old art, which looks bad, and my AB testing shows me that those tags have a clear good effect on the model and prompts I use.

>>8625071
I'm not sure how similar negpip and NAI's negative emphasis are, but for the latter, trying to take my normal negs and put them in the prompt with negative weight just causes a mess, though it works very well for removing things in a targeted way.
>>8625063
well, you never know

>>8625067
>>8625070
i'm surprised no one connected at least 2-3 of my identities (out of maybe 10), actually. even though i've been called names multiple times.

>>8625074
give us a hint, schizonon. what do the numbers mean?

>>8625074
shut up birdschizo/momoura

>>8625063
Elitist was (is) me. I just don't bother with your shit general(s) anymore, swim in your diarrhea yourselves...

>>8625097
>and god bless nai
This tells more about you than it does about me, do you realize that?
oh sorry please dont spam the big lips character again...
Ma'am, I believe /hdg/ is what you were looking for. This is /hgg/.
[image: 9e8e52757345954d8f9fd891563898d8]
post gens
[image: c13cc70e884e1aab30c7a011e961c7dc]
haven't been doing much nsfw lately
>>8624605
catbox? like the style here

>>8625131
It's teruya (6w6y)
After genning a ton, I feel like my perspective on traditional art has changed. Now whenever I look at most art, I can't help but feel how shitty they are, how off the proportions are, how inconsistent a ton of artists are, while I've become more appreciative of the artists that have better standards.
more like lora baking general
why are we thriving, bros?
>>8625183
Shitposters see this place as high effort, low reward.

>>8624863
are you planning on uploading it, or will it be a private finetune?

>>8624863
I'd rather just have your tips on how to finetune. Learning to fish and all that.

>>8624393
bwo, i've been training on 1536. found that finer details and textures are replicated better (empirically), with fewer artifacts; i'd need fewer face crops to get better-looking eyes, for example.
also noted that it led to the losses converging in tighter groups and at lower minima. i have not tested training on noob, but so far i do find it beneficial when training on base illu0.1.
Like >>8624436 said, it could be placebo, but I do that to reduce the effect of the quality tags on the style.
>[masterpiece, best quality::0.6]
>Out of curiosity, what's this for?
this will apply the quality tags for the first 60% of the steps only.
>Do you think those tags negatively impact finer details?
quality tags tend to be biased towards a certain style and might detract from the style you are going for, i.e. scratchier lines of a style you are using might become smoother due to quality tags.
0.6 is just an arbitrary value that i selected to 'give the image good enough quality' before letting the other style tags / loras have 'more effect' (honestly the effect is quite minor - see picrel, outlines slightly more emphasized with quality tags)
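For reference, the `[tag::0.6]` prompt-editing syntax just switches the conditioning partway through sampling. A minimal sketch of the step arithmetic (not webui's actual parser, and the exact rounding there may differ):

```python
def tag_active_steps(total_steps, stop_frac=0.6):
    # [masterpiece, best quality::0.6] keeps the tags in the prompt
    # until 60% of the steps have run, then removes them
    stop_at = int(total_steps * stop_frac)
    return [step < stop_at for step in range(total_steps)]
```

With this sketch, at 28 steps the quality tags are active for the first 16 steps and dropped for the remaining 12.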
>>8624863
seconding >>8625207, i'm interested to know how you are going about your finetuning; i've got some questions too
1) do you have a custom training script or are you using an existing one?
2) what is the training config you have set up for your finetuning, and are there any particular factors that made you consider those hyperparameters?
3) in terms of data preparation, is the prep for finetuning different from training loras? do you do anything special with the dataset?
4) i too am using a 3090 (like >>8625025); how much vram usage are you running at when performing a finetune at your current batch size?
>>8625223
I train on noob, but the other anon was also recommending training at higher res, so I'll give it a go

>>8625281
still not an excuse for ruining what could have otherwise been a good gen
i came here for 'da ork cantent.
Has anyone experimented with putting negpip stuff in the negative prompt? What happens if you do that?
>>8625223
Thanks for explaining and the comparison image, and don't worry, as an aficionado of fine snake oils I can appreciate the finer methods that are sometimes hard to see. I've been doing something similar by scheduling artists late into the upscale for finer details like blush lines; prompt scheduling is a great tool.
[image: caa00fd54b9427e6f236c73fa9f13292]
e12
>>8625205
I'll upload the base 1024x checkpoint, a 1536x checkpoint and a lora extract between the two. I'll also probably upload a merge of the last two epochs if it turns out to be good.

>>8625229
>do you have a custom training script or are you using an existing one?
I'm using a modified naifu script
>what is the training config you have setup for your finetuning, and is there any particular factors that made you consider those hyperparameters?
Full bf16, AdamW4bit + bf16_sr, bs=12 lr=5e-6 for 1024x, bs=4*3 lr=7e-6 for 1536x, 15 epochs, cosine schedule with warmup, pretrained edm2 weights. captions are shuffled with 0.6 probability, the first token is kept (for artists), and captions are replaced with zeros with 0.1 probability (for cfg). I settled on these empirically.
>in terms of data preparation, is the prep for finetuning different from training loras? do you do anything special with the dataset?
Yes and no. You should tag what you see and give it enough room for contrastive learning in general. Obviously no contradicting shit should be present. Multi-level dropout rules like those described in the illustrious 0.1 tech report will also help with short prompts, but a good implementation would require a more complicated processing pipeline, so I'm not using it.
>how much vram usage are you running at when performing a finetune at your current batch size?
23.0 gb at batch size 4 with gradient accumulation.
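The caption handling he describes (shuffle with p=0.6 while keeping the first token, zero out the whole caption with p=0.1 for cfg) is simple to sketch. This is a guess at the shape of it, not the actual naifu code:

```python
import random

def process_caption(tags, shuffle_p=0.6, drop_p=0.1, keep_first=1):
    # dropping the whole caption trains the unconditional branch used by cfg
    if random.random() < drop_p:
        return ""
    head, tail = list(tags[:keep_first]), list(tags[keep_first:])
    # shuffling everything after the artist token keeps the model from
    # depending on tag order
    if random.random() < shuffle_p:
        random.shuffle(tail)
    return ", ".join(head + tail)
```

Called once per sample per epoch, so each image sees several caption orderings (and occasionally no caption) over a training run.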
[image: df134711515140eec833a625da50aefe]
>>8625331
I'm testing it right now and it feels like it does have some use. You can't add or subtract large things in an image using this method, but you can nudge, mostly, colors without affecting composition or other things in the image. Whereas if you prompted "red theme" in the positive like normal, it might turn a forest autumn or something, doing negpip in the negative prompt makes it look like the original gen but with a more red tint to it.
This makes sense, as the negative and positive prompts do not pay attention to each other's context.
I was also able to make the sky more clearly visible through the leaves in a forest gen while not altering the composition of the image much. So I think this is what it (negpip in the neg) could be useful for: nudges to existing gens without changing composition or subject matter, which might happen with pure positive prompting.
>>8624386 (OP)
this isn't ai. Artist name?

>>8625345
thanks for sharing!
i still have a couple of (ml noob) questions that i'd like to ask if you don't mind...
>I'm using a modified naifu script
was any part of naifu lacking in any way such that you had to make modifications? or was there a custom feature that you required specific to the finetuning?
>captions are replaced with zeros with 0.1 probability (for cfg)
would you care to explain why the approach where captions are replaced with zeros is used for cfg? what impact does this make on the cfg, is it for the color blow out?
>bs=12 lr=5e-6 for 1024x, bs=4*3
>batch size 4 with gradient accumulation
i saw that your target batch size is 12 (GA (3) * BS (4))
is there any hard and fast rule as to how large a batch should be when training a diffusion model? i noted that many models are baked with a high bs (>100), e.g. illustrious 0.1 was baked with a bs of 192. should batch size be scaled relative to the size of the training dataset?

>>8625338
have you tried data augmentation like flips and color shifts?
>>8625345
>this isn't ai
???

>>8625359
flip aug is not only bad, it's actively harmful to training. it unnecessarily uses up your parameters and fucks everything up, since it's more or less forcing training to do something twice that it's already effectively doing without you telling it to.
have a paper
https://arxiv.org/abs/2304.02628
>>8625338
>pretrained edm2 weights
Huh, you can reuse edm2 between different runs?

>>8625338
>pretrained edm2 weights
could you share those? I already have some, but it wouldn't hurt to see if I could be training with better ones
>>8625352
>or was there a custom feature that you required specific to the finetuning?
Mostly this. I've been using naifu since sd-scripts suck too much.
>would you care to explain why the approach where captions are replaced with zeros is used for cfg?
it's used to train the uncond for the main cfg equation, which is
>guidance = uncond + guidance_scale * (cond - uncond)
cfg will work regardless, but it will work better (for guiding purposes) if you train the uncond. in general you shouldn't drop captions on small lora-style datasets.
>is it for the color blow out?
it has absolutely nothing to do with it
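The equation in that quote is standard classifier-free guidance; as a minimal numpy sketch of how the two predictions get combined at sampling time:

```python
import numpy as np

def cfg_combine(cond, uncond, guidance_scale):
    # guidance_scale = 1 returns the conditional prediction unchanged;
    # larger values extrapolate away from the unconditional one
    return uncond + guidance_scale * (cond - uncond)
```

At scale 1 you get `cond` back exactly, which is why an untrained uncond merely weakens guidance at higher scales rather than breaking sampling outright.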
>is there any hard and fast rule as to how large a batch should be when training a diffusion model?
no, but if you are training clip you would never want a batch size < 500, since clip's contrastive loss takes its negatives from within the batch. large batch sizes help the model avoid catastrophic forgetting due to unstable gradients, and since sdxl is such a deep model you basically never enter a local minimum, because there is always a dimension to improve upon as long as your lr is sufficiently high.
however, if you are relying first on gradient checkpointing, then on gradient accumulation to achieve larger batches, very large batches may quickly become very expensive compute-wise.
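The bs=4*3 notation above is a micro-batch of 4 with 3 gradient accumulation steps. The arithmetic that makes that equivalent to a batch of 12 (the gradient values here are hypothetical numbers for illustration):

```python
micro_batch, accum_steps = 4, 3
effective_batch = micro_batch * accum_steps

# each micro-batch loss is divided by accum_steps before backward, so the
# summed gradients equal the gradient of the mean over all 12 samples
microbatch_grads = [0.9, 1.2, 0.6]  # hypothetical per-microbatch mean gradients
accumulated = sum(g / accum_steps for g in microbatch_grads)
mean_grad = sum(microbatch_grads) / len(microbatch_grads)
```

Memory-wise only the micro-batch of 4 is ever resident, which is how a 23 GB finetune fits on a 3090.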
>>8625359
don't you realize this is harmful, especially if you want to train asymmetrical features

>>8625364
of course, you'd not want to start from scratch every time, would you?

>>8625373
i'll upload the weights next to the checkpoint then

>>8625381
>i'll upload the weights next to checkpoint then
did you share them before? I missed that if you did

>>8625381
nta but what did you modify in naifu?
>>8625381
Just curious - how do you extract locons from full_bf16-trained checkpoints?
I tried the ratio method from the rentry, and for some reason it gives me huge extracts - like 2-2.5GB. Perhaps it has something to do with weights decomposition after training in full_bf16 mode.
The ratio method works fine for checkpoints trained in full_fp16, but I haven't managed to get good results from fp16 trains...

>>8625396
nta, I use 64 dimensions and 24 conv dimensions (for locon), which gives me 430mb, or 350mb if you don't train the te
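For what it's worth, fixed-dim extraction boils down to an SVD of the finetune delta truncated to the requested rank. A conceptual per-layer numpy sketch (ignoring conv reshaping and whatever the actual extraction tool does on top):

```python
import numpy as np

def extract_lora_pair(base_w, ft_w, rank):
    # low-rank factorization of the finetune delta: delta ~= up @ down
    delta = ft_w - base_w
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    up = u[:, :rank] * s[:rank]   # (out_features, rank)
    down = vt[:rank]              # (rank, in_features)
    return up, down
```

A fixed rank gives a fixed file size regardless of how noisy the delta is, which may be why the ratio method (picking rank from singular-value mass per layer) can blow up to 2+ GB on bf16-trained weights full of low-magnitude noise.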
>>8625406
Yeah, fixed works just fine. I was curious about the ratio one.
retrained the modded vae and now it is actually kinda usable, unlike the garbage before: https://pixeldrain com/l/FpB4R8sa
though i think it still needs more training for use in anything large scale
also updated the node: https://pixeldrain com/l/9AS19nrf
a comparison of 1 (one) image enc+dec test, though this is not fair as the modded vae has a much larger latent space (for the same res) compared to the base sdxl vae: https://slow.pics/s/5Kc8RkPa
the practical effect of it is basically that you don't have to damage the images by upscaling to 2048 to get the equivalent quality level of a 16ch vae
i tried training a lora with it and it was slow as balls, like 3-4x
if someone wants to give training it a try, i can walk you through how to modify sd-scripts (it's just applying the vae modification at one point)
>>8625413
i'm assuming that node is how you load the vae? i wouldn't be able to use this vae on reforge?

>>8625201
good looking titties, I want to squeeze and kiss and suck on them
>>8625453
you can load it, but it will still have the downscaling, making it incompatible with the trained weights. someone would have to modify forge to support it, yeah
but ultimately you need to train sdxl with the vae at the higher latent resolution, or you will have the same body horrors as when you raise the genning res too high

>>8625413
What makes this VAE different, and why does it need its own node?

>>8625399
What model is this?

>>8625413
>needs its own node
Does it work on reforge? I'm getting errors in the console, but I do see a difference. Might just be placebo.

>>8625413
comparison looks nice, almost too good

>>8625461
nta, but I think he said he removed some downscaling layer, which in theory, if the vae is trained enough to adapt, would lead to sharper outputs.

>>8625413
gotta ask, but wouldn't increasing the latent space require a full retrain of sdxl?
>>8625476
>almost too good
too good to be true

>>8625476
the latent the decoder can work with is much larger, which also leads to much larger training costs, but better performance

>>8625480
this is basically training at a much higher res rather than changing the entire dimensionality. it won't require a full retrain, but it will require training for sdxl to learn to work with bigger latent sizes (very similar to if you wanted to train it to not shit itself generating 2048x2048 images)
the training and generation are also gonna be slower (though peak vram usage at the vae decoder output should be the same), but you can downscale the images and train at 512x512 if you want standard sdxl compression
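If I'm reading the modded-vae posts right, the key number is the compression factor: stock SDXL's VAE maps pixels to latents at 8x per side, and dropping one downscale stage would make it 4x, i.e. a 2x larger latent per side at the same pixel resolution. A sketch of that arithmetic (the 4x figure is my assumption from the description):

```python
def latent_size(height, width, compression=8):
    # stock SDXL VAE: 8x spatial compression, so 1024px -> 128 latent
    return height // compression, width // compression

stock = latent_size(1024, 1024)       # (128, 128)
modded = latent_size(1024, 1024, 4)   # (256, 256): same latent size as
                                      # a 2048px gen through the stock VAE
```

This is also why frontends that hardcode the /8 (like ComfyUI's empty-latent node) need the requested resolution doubled to produce the right latent size.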
>>8625413
How many epochs, and what's the training set?
It looks much better than the sdxl one for sure, but still a bit blurry, especially in background details.

>>8625489
10 epochs for the encoder and 3 for the decoder, with 3k images. the adaptation is very fast
it will probably still improve in terms of adapting, but there is still a limit to what can be done in terms of small details; the improvement is not really from the training, but rather from the larger encoded latents
>>8625488
so you basically doubled the internal resolution

>>8625499
thing is, Cascade could do this on the fly and it didn't work well. You still had to gen small then upscale, and manually adjust the compression level to match what you were doing. Low compression would break anatomy and high compression would kill details.
I think this may just be taking advantage of the fact that illustrious/noob are unusually stable at higher resolutions compared to other SDXL models.

>>8625413
Is this just for trainers? I tried just replacing my vae and gens are coming out at half the resolution, so I doubled the latent dimensions, but then the image becomes incoherent.
is finetuneschizo here? anything i can do in kohya for better hands?
>>8625521
yes, the body horrors stopped after i trained a lora with it on illust 0.1, but it would require a larger tune to truly settle in
comfy has hardcoded 8x compression for emptylatentimage, so yeah, you gotta put in 2x
>>8625345
>this isn't ai
kekmao
>Artist name?
Me, your favorite slopper

>>8625546
this isn't hentai. Cum splotch?
>>8625413
I tested genning with this, and I feel like it's a bit muddier in how it renders textures compared to noob's normal vae (I guess that's just the standard sdxl vae?), though it does seem slightly sharper for linework. And both feel slightly worse than lpips_avgpool_e4.safetensors.

>>8625549
>lpips_avgpool_e4.safetensors
Huh, link?
So in the end it's all just more snake oil...
>>8625562
https://archived.moe/h/search/text/lpips_avgpool_e4/

>>8625564
Snake oil does nothing. This clearly does something, just not sure if it's better or worse.

>>8625568
It cost more effort to post this link than it would've taken to just point the guy at the pixeldrain.
>>8625548
cum filled pocky

>>8625598
give a guy a fish, he'll eat for a day

>>8625603
give a guy a subscription to a fish delivery service and he'll eat forever
good reflections are hard
black pill me on https://www.youtube.com/watch?v=XEjLoHdbVeE&list=RDXEjLoHdbVeE&start_radio=1
>>8625700
just fine-tune bro

>>8625701
I did, but finetuneschizo disappeared and I'm out of parameters to mess with
>civitai wants to further censor their dogshit site and they do it by le HAHA YOU ARE DOING GOOD AMBASSADOR, LE HECKIN POWER FOR YOU
goddamn faggots, why are they doing it?
the site is also so fucking dogshit you can't even search by certain filters
>>8625712
Chub has gone this route as well for textgen. No one ever imagined that the cyberpunk dystopia would be a sexless, normie-filled existence.

>>8625715
>Chub has gone this route as well for textgen.
And it is pretty dead now.

>>8625712
Is it really a mystery? Go look at their financial breakdown for last year, particularly the wages. I'd be sucking mad cock too if my ability to pay myself that much was threatened.
Which ControlNet model should I use for a rough MS Paint sketch/scribble?
>>8625715
What's this?
https://chub.ai/characters/anonaugusproductions/lola-and-lily
>>8625413
excellent. now do a finetune of flux's vae and send it to lumina's team

>>8625729
I used this one for something like that, and pretty much everything else
>https://huggingface.co/xinsir/controlnet-union-sdxl-1.0/blob/main/diffusion_pytorch_model_promax.safetensors

>>8625734
Thank you, I already tried this one but my results weren't great so far. do you have a short guide with settings for it, by any chance?
[image: e084f13b2f3d1e41465ac1e80e08fb04]
>>8625735
This should be more than enough; play around with the control weight if you are not getting what you want

>>8624918
naiXLVpred102d_custom is king

>>8625737
Works great, thanks a lot! :D
i hate belly buttons and nipples
>>8625413
Is this vae-only training? it significantly adds more details to my gens, but also little white dots every now and then
Raw upscaled gens with my usual settings
>https://files.catbox.moe/9o9zxd.png
>https://files.catbox.moe/1gm2zp.png
>https://files.catbox.moe/tkbx2r.png
>https://files.catbox.moe/4xhfzv.png
>https://files.catbox.moe/tktube.png
>https://files.catbox.moe/ukc1zv.png

>>8625737
you are welcome

>>8625744
why did you prompt for them then?
[image: 6e42aa972016a7f6... (truncated in source: 6e42aa972016a7)]
1152x2048, 22 steps, euler
1536 ft / 1024 ft+1536 extract / noob v1+1536 extract / 102d+1536 extract / 1024ft / noob v1 / 102d
>>8625383
https://huggingface.co/nblight/noob-ft

>>8625393
nothing you want to concern yourself with since it's mostly experimental stuff "except" edm2

>>8625396
>ratio method
idk, i've never used it
>>8625763
Are you getting errors in the console too? I find it adds more details but makes things a bit blurry.
[image: 2d09d834ddf2067ec0745653537b2fe9]
>>8625778No errors whatsoever when I load it
>I find it adds more details but makes things a bit blurry.Same here
>>8625776thank you for sharing this! indeed hires is much more stable even with the loras I usually use
>>8625776what kind of settings should I be using to gen with this model?
>>8625829nothing should be *too* different from your regular noob vpred except you can generate at 1536x1536 right away
>>8625744seems like an upscaling issue
>>8625834I am not getting anything like I usually do so I'll do some tests on it
>>8625844you'll have to post a catbox
>>8625776What are your EDM2 training settings? From the weights, I assume you use 256 channels? Good ol’ AdamW?
>>8625598It did not. In the first place that is how I found it myself. I just copy and pasted the url of the page I was on, I didn't even check if the pixeldrain link was valid.
>>8625776Does this just not work on reForge? OOTL
is there a way to control character insets reliably with lora/tag? stuff like the shape of the border, background of the inset, forcing them to not be touching the edge of the canvas, whether it has a speech bubble tail.
>>8625909not really, your best option as always is to doodle around and then inpaint to blend it into your gen
>>8625850i'm still testing things around but it's not looking good on my end. out of 6 style mixes so far, only 1 looks okay and that's because that style is way too minimalist overall, pic and catbox not related
>https://files.catbox.moe/8mwdc4.png
>>8625915>sho \(sho lwlw\)>ningen mameThese were not present in the dataset at all, so that's to be expected. Try using the lora extract on top of your favorite shitmix or even base noob (or you can even extract the difference and merge it into a model yourself), which should tamper with styles far less than genning on the actual trained checkpoint while still keeping 1536x res.
>>8625925>Try using the lora extract on top of your favorite shitmix or even base noobHmm alright, I'll do that
>>8625899I guess it kinda helps prevent anatomy melties but then it melts the styles
https://files.catbox.moe/fqvjgf.png
I'd rather just gacha it
>>8625925that cock? mine.
>>8625938It's all yours my friend.
>>8625925>Try using the lora extract on top of your favorite shitmix or even base noobOk yeah that's definitely more doable
https://files.catbox.moe/iyld8p.png
>>8625931>>8625945>1040x1520You know that's way too small a resolution for a 1536x base res checkpoint, right? You won't see much of an effect, and it may even look worse than it should (think genning at 768x768 on noob). Use a rule of thumb that keeps the total pixel count at 1536x1536:
height = 1536 * 1536 / desired width
width = 1536 * 1536 / desired height
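As a sketch, that rule of thumb (keep total pixels at 1536x1536 while varying the aspect ratio) looks like this; the helper name and the rounding to a multiple of 64 are my own assumptions (typical SDXL-family latent alignment), not something stated in the post:

```python
# Hypothetical helper: keep total pixels at base*base (1536x1536 here)
# while changing aspect ratio. Rounding to a multiple of 64 is an
# assumption, not from the post.
def dims_for(base=1536, desired_width=None, desired_height=None, multiple=64):
    total = base * base
    if desired_width is not None:
        w, h = desired_width, total / desired_width
    else:
        w, h = total / desired_height, desired_height
    snap = lambda v: round(v / multiple) * multiple
    return snap(w), snap(h)

print(dims_for(desired_width=1152))   # -> (1152, 2048)
print(dims_for(desired_height=1280))  # -> (1856, 1280)
```

1152x2048 matches the resolution used in the comparison grid earlier in the thread.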
>>8625925>>8625930Yeah using the lora extract on my beloved 102d custom is way better than using that model itself
The gens still need some inpaint here and there but genning on a higher resolution works very well
May I know what kind of black magic this is?
>>8625776Combining this with kohya deepshrink seems to make "raw" genning at 2048x2048 reasonable anatomy wise
>>8625963>May I know what kind of black magic is this?copier lora effect. ztsnr plays some role 100%, would be interesting to see a comparison to illustrious at 1536x
>>8625413Looks really promising. Would appreciate you sharing the sd-scripts modifications if they're simple enough (or just a few pointers even) so I dont have to vibe code shit with claude.
>>8625951well it's even shitter at 1536 lol https://files.catbox.moe/f0uw71.png
>inb4
>>8625980102d my beloved...
>>8625980wait shit i applied the lora and the model mea culpa
still though, agree with anon that it's better as the lora than the checkpoint
without the errant lora https://files.catbox.moe/tl4uzw.png
>>8625413Is that only usable with the Cumfy node for now? Minimal difference on forge. Also fucking hell, it really does make you think about how 90% of improvement is hindered by people just not really knowing what they're doing, when some random anon can bake this and have it work.
>>8625977it may not be the most elegant way but here: https://pastes.dev/8FLPusLmTg
also the weights are in diffusers format, so create a folder for the vae, rename the model to diffusion_pytorch_model.safetensors, and put the config.json from the sdxl vae into the folder you created https://huggingface.co/stabilityai/sdxl-vae/blob/main/config.json
load the vae with --vae the_folder_you_put_it_in
also if you have cache_latents_to_disk enabled and there are already cached latents in the folder, it won't check them and will use the old ones, so either delete the npz files in your dataset folder or use just cache_latents
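The folder shuffle described above could be scripted roughly like this; the function names and paths are hypothetical, and only the diffusers weights filename, the config.json source, and the stale-npz caveat come from the post:

```python
import shutil
from pathlib import Path

def prepare_vae_dir(weights_path, config_path, out_dir):
    """Arrange a diffusers-format VAE folder for sd-scripts' --vae flag."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    # diffusers expects this exact filename for the weights
    shutil.copy(weights_path, out / "diffusion_pytorch_model.safetensors")
    # config.json taken from stabilityai/sdxl-vae on HF
    shutil.copy(config_path, out / "config.json")
    return out

def clear_cached_latents(dataset_dir):
    """Delete stale .npz latent caches so they get re-encoded."""
    removed = 0
    for npz in Path(dataset_dir).rglob("*.npz"):
        npz.unlink()
        removed += 1
    return removed
```

Then launch training with `--vae <that folder>` as the anon describes.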
>>8625991Ok, actually, how are you supposed to load that node? I don't get it.
>>8625991it's just made as a demonstration for people that might be interested in training with it for now; the example isn't made by genning with it, but purely by encoding and then decoding an image
it is NOT a free lunch, it's just a way to upgrade sdxl without retraining it completely for the 16ch vae, but sdxl is still going to need someone to finetune it. it WILL use more vram and be slower during both training and genning, though less than if you were to gen at a high resolution (there is a HUGE amount of vram used during vae decoding depending on the final output res)
the encoded latents are flux-level large and even less efficient
>>8625992Thanks anon, my endless list of random shit to test grows.
>>8626000>the encoded latents are flux level large and even less efficientthis is the problem right there, it's 4x more pixels to train, and you probably can't even do a proper unet finetune on consumer hardware
>>8625984It's still quite strange that noob simply cannot handle 1girl, standing. Wtf happened?
>>8626008Umm, sweaty? Tentacles are /d/
>>8626017i agree, though the training shouldn't be very extensive, since hopefully it should be """just""" getting sdxl used to genning at higher (latent) resolutions with the base knowledge already there
lil bro is fighting ghosts again...
>download comfyui
>unexplained schizoshit
>delete comfyui
don't forget to make your reddit post about it bro
I have a plan, but the overtime im currently working right now prevents me from doing it. it's intense and there's just too much on my plate physically until i can finally go back to my regularly scheduled shitposting, editing, ai generating, sauce making disaster of a life before that sudden train wrec-ah yeah im busy as hell for a few more days.
I do check the main boards for more of your images from time to time, as it really is something i enjoy collecting and looking at. so much text to sift through unfortunately.
i do have one request and i was hoping you could uh, maybe gen your miqo like as rebecca from that cyberpunk anime if you can? get the general outfit down for that, maybe that will inspire me to do more stuff once im done with the disaster going on in my life right now. i found rebecca's design to be quite nice.
saddened that i can't provide a pic, it feels wrong to not be able to share an image in your presence. I have lots of draws and stuff i "could" share but i am not confident enough in my skills or time available to me to be able to follow up on such things yet...................
holy, someone really needs his meds
I tried warning you guys months ago. Moderation is very anti-ai, this is why they ban everything you like and keep everything you hate. /e/ has been hit by a blatant spambot for a while now with nothing done about it, those who report it get banned.
>>8626058
>>8626054is this a copy pasta from treechan from the miqo thread.....
>be newfag
>see random looking off-topic posts I don't understand
>just continue on with life
>>8626080Based. This is the correct way to browse 4plebs.
>>8626011>Wtf happened?1024x1024 train resolution
>>8626017>"""just"""there's actually a lot of hires knowledge missing, textures are smudgy, eyes, film grain, etc etc, the model should be pretrained at that reso tbqh. similar story with vpred and ztsnr, it works on paper but when you actually try to train it...
>ai is trash
Meanwhile im getting all i can imagine
>>8626117this looks terrible like all ai videos outside of google veo
>>8626117Is this huanyuan or whatever?
>>8626118>google veoThat shit looks pretty bad too though.
who is lil bud fighting with?
>>8626119This one is Wan Vace, i'm still exploring it, there is so much stuff to try with it.
>>8626125Nice. I thought wan couldn't do anime at all. I'm too busy drinking snake oil here to try it though.
>>8626118An amateur of DEI gens, truly an /h/ oldfag
>>8626118It's funny you mention that. I was looking at some live2d animations just a second ago and there is some really bad stuff out there, honestly kind of worse than what he genned. People forget that there's a sea of garbage AI or not, and in the end AI is not the worst enemy, it's the people using it and whether they have some sense not to post garbage onto the internet.
>>8625963Now that's a Comfy Background.
>>8626117>>8626125Mind sharing a workflow, even if it's borked? Maybe it's time to retry videogen
some sisterly love for tonight
Reforge just started throwing random errors every gen but I haven't pulled in a while...is this it?
>>8626002I love her expression
>I was here all dressed up like a whore so I can get some shikikan dick.
>He's busy fucking Taihou. TAIHOU
>I'll have to satisfy myself with Takao's dildo.
r34 comment section has breached containment
Holy shit I just genned at 2048 res using kohya + ft extract and it just werked as if it was a native 2048 model, even with my real world 400 token prompt, with other loras applied, with negpip, with tons of prompt editing hackery.
LFG TO THE MOON BRAHS
>>8626219have you tried refreshing the webui
you probably just have some random option toggled or forgot you have s/r x/y plots on and no longer have what it's searching for in prompt and it's breaking your shit.
>>8626230>I just [snake oil]
>>8626230Yeah, I am really liking genning at a higher base resolution, it's quite handy
Now if we only had a proper smea implementation on local...
>>8626239>Now if we only had a proper smea implementation on local...The SMEA implementation on local is the proper one, NAI came out and said that they fucked it up on their own but it still made their model produce the kind of very awa crap asians love so they kept it
>>8626240>The SMEA implementation on local is the proper oneLOL
>>8626240ggs then, I need another workflow to really take advantage of this hack, I'm not totally happy with my final results
Hopefully someone writes an easy rentry for brainlets. I don't yet see the benefit.
>>8626244There isn't any, it's just more pointless tinkertrooning by cumfy autists
cumfy bwos, our genuine response?
>>8626230Hmm, ok so maybe I spoke a bit too soon. I just tested it with background/scenery-focused prompts and the image content is quite a bit different from what the model normally generates.
Maybe this isn't suitable for all prompts, art styles, and loras, though I'm surprised it worked so well with my first prompt.
>>8626267What? I'm not using the new vae that was posted, this is literally just a lora you can load up in reforge.
recommended training steps for this simple design? its a vrc avatar so mostly 3d data
>>8626235I did have x/y plots enabled but that wasn't it. It was just randomly crashing in the middle of doing gens. I cleared the cache and it fixed itself it seems, I guess something there was causing the error.
>>8626244The only real benefit is to completely skip the upscale step
This is what I wanted RAUNet to be, a way to do extreme resolutions directly while having all the ""diversity"" and ""creativity"" of a regular base gen so I am very happy with it
Yeah, i can't gen with this vae without --highvram, and i can't do that, cause i'm an 8gb vramlet.
>1000's of gooner artists
>stick to about 10-15 that I rotate and mix about in my mixes
>enough is enough!
>spend 30 minutes browsing artists in the booru
>note down a few I like
>go full autismo mixing and weighing
>smile and optimism: restored
Ah. You were at my side all along..
https://files.catbox.moe/s38kxh.png
>>8626291What are your settings? I'm getting good gens (generally the same composition, colors, coherency, etc.) with some prompts but very much not others. It also varies with block size and downscale factor: some prompts work better with certain combinations, but some never achieve the same quality/coherency as the vanilla setup with no lora. I haven't messed with the other settings though, so maybe those help?
>>8626228limitless girl looking for a limitless femboy to ruin :333
>>8626342for me it's testing 20k artists and realizing how many of them are unremarkable
>>8625744funny that right after i complained about upscaling mangling belly buttons again a new snake oil to tackle it comes out
im a vramlet so im just using the lora on an upscale pass instead of genning straight to higher res but it seems to work
https://files.catbox.moe/w08kbf.png
https://files.catbox.moe/empa0r.png
>>862639280% of danbooru artists are completely interchangeable style wise, and then you find some guy who has some amazing unique style and he has 3 posts on danbooru and a twitter that is just him uploading his gacha pulls
>>8626422I love the ones where you see some damn amazing pic and it's either one of three of his on the whole internet or all his other pics don't look as good.
>>8626301Buy used 3090 if you can, it's super cheap used right now.
>>8626435nice composition on this one
care to box it up?
>>8626437https://files.catbox.moe/4kbhpa.png
inpainting and color correction img2img passes were used later
>>8626438cool. thanks bwo
>>8626291>The only real benefit is to completely skip the upscale stepBut you're just losing the advantages of the second pass, which are mostly to add a lot of detail and remove leftover noise. Ideally, a model trained like that should never break anything on either pass, giving better consistency at higher denoise levels on the second pass (think of models that double belly buttons or similar with hiresfix alone, especially at landscape reso). Have you tried doing it like 1216x832 -> upscale?
>>8626444I mean it's not like "second pass" is some kind of magic, you're just generating the same model with a denoise at a higher resolution. A better base res gets you 100% of the potential instead of 10-50% or whatever of the denoise amount.
Of course assuming it works well, which is somewhat debatable.
>>8626460I would also like to add that denoising by 0.3 or whatever doesn't actually mean you are changing that many pixels. The RMSE between the base upscale and denoised picture with 0.3 denoise is like 95% similarity, 93% for 0.5.
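The post doesn't say exactly how that "similarity" was computed; one plausible reading (an assumption on my part, not necessarily the anon's metric) is RMSE in 0-255 pixel space mapped to a percentage:

```python
import numpy as np

def rmse_similarity(a, b):
    # RMSE between two images in 0-255 space, as a percentage:
    # identical images -> 100%, maximally different -> 0%.
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    rmse = np.sqrt(np.mean((a - b) ** 2))
    return 100.0 * (1.0 - rmse / 255.0)

img = np.full((4, 4, 3), 128)
print(rmse_similarity(img, img))  # -> 100.0
```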
>>8626444>and remove leftover noiseThere's no noise in a fully denoised picture, anon.
>>8626392>>8626422Yeah I did run into a lot of high quantity artists that shared similar styles. To no surprise, a focus on gacha sluts. Try them on a model and if you're not inputting the artists yourself, you'd swear your results were all the same. But it's fun throwing in the few artists that do stand out into a mix and seeing what happens.
https://files.catbox.moe/sa409h.png
What is Cyber-Wifu11 using? Can't replicate his style
>>8626460>10-50%Yes, the limitation of the base model is why denoise on the second pass was always low. If a multires model works really well, there should be some boost there, allowing you to raise it higher than 0.5 while preserving consistency and still getting details, just like controlnet and other tools did
>>8626480Sometimes there is, despite the first pass fully denoising, but rarely after the second. But yeah, not very relevant for the latest models
>>8626514>Sometimes there isNo, there is no noise by definition retard. Here's what the image would look like at 0.09 (de)noise.
I'm tired of having faded colors in my generated pics, is there anything I can do to get nice bright colors (not fried/saturated)?
>>8626518download photoshop
>>8626525/hdg/ is on the other tab bwo
>>8626516It's not as pronounced as stopping ~3 steps early out of 28. Did you really never get outputs with some noisy parts somewhere on the image?
Does anyone know some cute style (artists or loras) I can use to generate petite/slim girls? (not lolis!)
>>8625413There is something wrong with the comfy node. It downscales the output image by 2, from 1024 to 512, for some reason, and the tile decode node is just completely fucked when using it
>>8626578Are you sure you are not gay?
>>8626578huh, so you're fine with everything else?
>>8626579Yes, I'm sure I like my girls girly and not manly.
>>8626580You mean realistic hairy genitalia and such? Whatever, that stuff can have its place, but I would never tolerate those faces.
>>8626573That's literally loli... No juvenile stuff please
>>8626578yeah i was trolling. try imo-norio or soso
I unironically like laserflip, he's a staple of grosscore
>google fellatrix
>get some obscure 2005-core portugese trash metal album
cool
>>8626597if you don't know hentai pillars, you don't belong here
>>8626605for me it's aaaninja
>>8626607i was actually thinking of edithemad but he also fits
Is there a way to do hires without messing anything up? It feels like no matter how I dial the settings, straight lines become wobbly and tons of details get erased, while other unnecessary and nonsensical details get added.
>Gonna drink from your usual bottle sir?
https://files.catbox.moe/zlkovc.png
>gen at 1536x1536 without the lora just out of curiosity
>it more or less just works with the particular pose i tried
These newer models are really stable compared to what we had before, if I tried to gen at 1.5x on 1.5 it'd just melt into a blob pancake all over the picture
Granted if you try to do more complicated poses you still get fucked up shit but it's still interesting
Anyway, I am liking that 1536 stabilizer lora, yes there is some style influence but it looks pretty worth it. I gotta try resizing it and seeing what will come out.
>>8626620>I gotta try resizing itOh, you can't.
>>8626621oh nyo nyo nyo~
Controlnet. If you use a noob model, get the epstile controlnet from hf or civit and call it a day.
>>8626631Oh, you can. Just not using non-dynamic methods?
>>8626518gimp pepper tool
Testing the 1536 lora more now without any Kohya stuff at normal resolution and honestly for quite a bunch of my old prompts it is negatively affecting the coherence and prompt following full stop. Probably only going to use it for hires pass.
in swarm is there a way to activate a lora only for a certain step count? like prompt editing
>>8626645ogey nevermind they just don't even show up
shame, and i wonder why it works that way
>>8626676I'd like to see some examples.
>quite a bunch of my old promptsAre you comparing cherry picked images to images generated with the lora?
>>8626692>swarmgetchu hands workin on that comfy backend, gay bro. this nigga trippin
>>8626701i'll make you swallow your teeth and poop them into my mouth if you keep talking to me like that lil bro
Now that Civit nuked all Loras for making deep fakes, what's the go-to site for Loras?
>>8625776>https://huggingface.co/nblight/noob-ftWhat did you train this on anyway? And what network type is the extract?
I've been unsuccessfully trying to resize that stuff to test.
>>8626707That's locon extract in the fixed mode. You can't resize that shit.
>>8626705Wrong thread, I think you want to check >>>/aco/?
>>8626582depends more on your prompt than on the style
>>8626571I usually end up going for pseudo-chibi stuff at that point, like Harada Takehito
>>8626711Let me rephrase it, that anon doesn't know how to ask questions properly:
>Now that Civit nuked all Loras for making lolis, what's the go-to site for Loras?
>>8626708It's... over!
I tried to get geepeetee to fix it and it did get me further along with the static resizing but yeah it also just won't boot. Too bad.
>>8626716Jokes on you, I train my loli loras myself.
>>8626707>What did you train this on anyway?4776 images of various booru slop, all of the images were personally checked by myself. I don't think the captions turned out too great though, so as a finetune it's kinda borked.
>And what network type is the extract?I believe it's a locon.
>I've been unsuccessfully trying to resize that stuff to test.I extracted it using the script from https://rentry.org/lora-is-not-a-finetune. You can extract the difference yourself by subtracting 1024x ft from 1536x ft and resize it however you want.
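The "extract the difference" step amounts to a key-by-key subtraction of the two checkpoints' state dicts before the LoRA decomposition. A minimal sketch with numpy arrays standing in for real tensors (actual checkpoints would be loaded via safetensors/torch, and the SVD-based extraction is what the linked script handles):

```python
import numpy as np

def weight_diff(ft_hires, ft_base):
    # Subtract the 1024x finetune from the 1536x one, key by key.
    assert ft_hires.keys() == ft_base.keys()
    return {k: ft_hires[k] - ft_base[k] for k in ft_hires}

# toy stand-ins for the two finetunes' state dicts
base = {"unet.w": np.ones((2, 2))}
hires = {"unet.w": np.full((2, 2), 1.5)}
diff = weight_diff(hires, base)
print(diff["unet.w"][0, 0])  # -> 0.5
```

Merging the diff back into another model is just the reverse: add it (optionally scaled) onto that model's weights.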
>>8626743I meant the base model. Vpred 1.0?
Also, am I correct in assuming you baked this with the guide? Can you share your script?
It's all interesting stuff, I wonder how well it can be put into place with a bigger dataset and other optimizations.
>>8626561Not him but I do with 102d but that's probably a me problem.
>>8626502Just make a lora.
>>8626518Use cd tuner.
>>8626571tiangling duohe fangdongye
>>8626571ciloranko unironically
>>8626705This is literally the reason why you learn to fish. Now you'll starve.
>>8626716What obscure artists are you trying to prompt that you need a lora?
unironically, what loras do you guys still seem to be baking 24/7? Are you going out of your way to find some artist that doesn't work on base noob just to bake a lora, or do you actually feel like the models are lacking in built-in styles?
>>8626735nice gen bwo, would you care to box this one up?
i quite like the style
>>8626744>>8626766The lora is something I am testing after training using a config shared last thread
<https://files.catbox.moe/74brn8.png>
>>8626765I just like building datasets and training in general. You can never really run out of things to train.
>>8626765bwo im just scrolling pixiv, looking for artists that have unique and interesting styles that aren't promptable or are poorly replicated in noob. its more of just collecting styles, i just find baking fun - its the gacha game i play desu
>>8626765I don't particularly like most of the baked-in styles; they "kind of" resemble the artist at best. However, that's fine if you mix like 10+ of them, I suppose.
>>8626768thanks bwo thats an interesting style mix
do you mind letting me know which config is it? there was a couple last thread
>>8626774It's probably faster to reupload it myself
https://files.catbox.moe/a22hr0.toml
>>8626762I'd rather not redo all that effort if there's a place where someone already did it before me.
>>8626777>wasted tripsI mean I like going out to eat too but that doesn't mean I don't know how to cook.
>>8626776thanks, i'll give it a try
>>8626770>>8626771>>8626772alright. I personally see lora baking as a chore that is useful to achieve some other goal, but if you like the process itself guess it makes sense.
>However, that’s fine if you mix like 10+ of them, I suppose.yeah that's what I almost always do. I agree though, most of built-in noob artist tags aren't great on their own
>>8626764>>8626765>teruya 6w6y>doctor masube>hantamonnI like throwing those guys a few bucks for a dataset at least.
>>8626783teruya is pretty great to use as one of stabilizer loras for base noob btw, very neutral style and the lora is well-baked imo
>>8626765A couple of charas and artists but it's mostly curiosity.
I only use one on the regular, I'm a simple man and Noob is a good model.
>>8626764Me? I was just trying to help bro ask a question. I don't even use artist loras - for me it's either style loras that aren't artist loras (to spice things up in conjunction with artist mixes) or NAI nowadays.
But any artist with a large image count could benefit from a focused dataset of his best or most representative works, so having a place with loras is better than not having one, especially with competent bakers. I'd gladly download loras for artists that are already recognized if they were done well.
>>8626765cat girl science is never finished
>>8626792do we have any competent bakers here?
Dear /hgg/, today I shall attempt my first bake. Wish me luck.
>>8626374>What are your settings?Pretty much the same as my regular gen settings, I just added the new lora at the beginning of the prompt for mere convenience and set a higher base resolution, nothing else
> some prompts just never achieve the same quality/coherency as the vanilla setup with no loraThis has only happened to me with already hard and very gacha prompts; otherwise most of the time I get the expected results from my prompts
masturbation is /e/ or /h/?
>>8626819without toys /e/ but well you know
>>8626819that's alot of pussy juice
>>8626823that's what happens when she sees you anonie
>>8626749>Vpred 1.0?yes
>Also, am I correct in assuming you baked this with the guide?no, i just took a lora extraction script from there
>I wonder how well it can be put into place with a bigger dataset and other optimizations.This already took 2:43 per epoch for 1536x and 1:10 per epoch for 1024x on average. 1024x finetune took about 18 hours, and 1536x one took 42, so the whole thing is about 60 RTX 3090 hours or 2.5 days. Using 4-bit optimizer and bf16 gradients. There's no way to optimize it further unless you're into offloading gradients to RAM.
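For what it's worth, the GPU-hour math above adds up:

```python
# Checking the anon's numbers: ~18 h for the 1024x finetune plus
# ~42 h for the 1536x one, on a single RTX 3090.
hours_1024 = 18
hours_1536 = 42
total_hours = hours_1024 + hours_1536
print(total_hours)       # -> 60
print(total_hours / 24)  # -> 2.5 (days)
```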
>more random tests
>still getting pretty much flawless gens at 1500x1500 without the lora with the right pose
it really is only a problem when you're trying to do like full body or on side where a significant amount of the gen is the torso, I'm surprised at how well Noob handles 1.5x even though I used 1.25x before.
>>8626819/e/ as long as it's limited to fingering.
i don't use AI, is this AI?
https://danbooru.donmai.us/posts?tags=eco_376124
>>8626876Really? I'm not sure if it's assisted but isn't this resolution too high to not make the noise/splotches or whatever?
any butiful artists similar to melon22?
>>8626878I mean he has supposed painting vids on his twitter but having a style that looks exactly like shitmix ai slop (including the composition and highres but lowqual) is pretty funny
>>8626873I really can't tell lmao, the style is kind of generic but when you zoom in to see the details and lines, everything is well polished and mostly coherent so, Idk
>>8626873thumbnails look like some noob shitmix with nyalia lora kek but its way too clean upscaled
probably ai-assisted with painting over a gen? there are plenty of artists who do this.
>>8626873Either an artist with an unfortunately generic style or ai-assisted
>>8626873Rule of thumb is: if this "artist" appeared and used this style after 2023, it is AI
>>8626873>>8626883>painting vidslooks like he's making a colored sketch, runs it through img2img or whatever, and then uses it as a reference for little details, shading, etc
https://x.com/Eco_376124/status/1781569033235763537/video/1
>>8626892yeah real artist then, very fucking weird he takes the slop as reference tho
>>8626894>yeah real artist thento be fair he probably uses ai as a reference for colored sketch too
>>8626873So is it that he is a bad artist or has AI art gotten so good that people have to zoom in and examine each pixel to tell?
Damn
Is this Novel AI?
https://nhentai.net/g/578788/
>the ugliest shiniest BBC slop
its local
probably a pony one, on top of that
>>8626906Let me guess, is this another nigga ARTIST who has 1,000 subscribers on Patreon?
Why do you people get mad when I just give the public what they want? Envy, perhaps?
>>8626912>nigga ARTISTyup, he's one of us!
>>8626900that would be stupid: using the slopmachine to create a sketch to have as a reference to do your own, to THEN use the slop machine AGAIN to see how the full picture would look?
I think it could serve as a learning method perhaps but idk broski
>only making ~2,500 USD a month
time to get a real job
>>8626920What if he lives in Eastern Europe?
>>8626920>only making ~2,500 USD a month>Monthly expenses are 350
>making <cash> a month
waow... wish that were me
I live in Germany and, being migrants from the Middle East, I do not work. I receive free money from local taxes every month. I spend this money to fuck German girls.
this but i'm sudanese in japan
>>8626920>Only make 2,500 usd a month>but live in a 3rd world shithole where income tax is pretty much nonexistentTime to move out of commiefornia, anon.
>>8626933Florida has no income tax.
Why do Americans eat raw cookie dough?
>>8626942Not like their "normal" food is any better
>>8626930germcuck, is that you?
>>8626960I am Abdul. And yesterday I fucked your girlfriend. You will raise my child.
any performance gains on the horizon? or are we buckling down for heavier models with minuscule improvements? is there a way I can set reforge to use less resources and work slower so it doesn't slow everything else to a crawl while genning?
>>8626972right now we can expect gazillion B models that still gen at 1024x1024
but use fp16 accumulation if you can
>>8626972There's sage attention too, but I don't know if it works with reForge, and its impact on quality is disputed here, though personally I haven't noticed a difference.
https://x.com/anakasakas
https://www.pixiv.net/en/users/16943821
AI-assisted?
this is kind of an odd question but does anyone have nsfw eng onomatopoeia sound effects that work in photoshop? I've looked around and seen a bunch for sale that looked alright but I'm surprised I can't find any for free. Mainly looking for slurp/lick effects.
Retard that just started. I'm looking for a model with a style like this, any recs?
yes quite
https://files.catbox.moe/60cfrv.png
i still yearn for models that don't gen at ant resolutions natively
>>8626972>or are we buckling down for heavier modelsYes
>with minuscule improvements?Nah I think 16ch vae alone is a huge improvement
but desu this is probably my main fear for local, inference speed is just going to go to shit, hopefully someone figures out some kind of speedboost for lumina
>>8626998I have some of very questionable quality
>https://files.catbox.moe/6juv4e.svg
>https://files.catbox.moe/i5gyab.png
>https://files.catbox.moe/wappne.svg
>https://files.catbox.moe/dk3us3.svg
>>8627014I mean there's absolutely room for improvement, I'm just doubtful local will get it. Still nowhere close to NAI inpainting.
>>8627026Essentially no local improvements were anticipated, I wouldn't wait but I wouldn't be surprised if some rando was baking a magic 2048x2048 model in his backyard.
Loras were a minor paper sidenote that got revived by some random fuck that wanted to gen better text or images iirc.
>>8627024awesome thanks, i'm pretty new to photoshop so is there an easy way to use these or do you just have to manually lasso them out? maybe i'm missing something.
>>8627045If I am being completely honest with you I have no idea lmao, I just happened to have those lying around
>>8627050I tried it real quick and my idea worked: open the image you want to add it to, then open the svgs in photoshop, lasso them out and drag them onto your image. I just need to figure out how to outline them though.
>>8627059you stroke em hard (actually, the layer effect is called stroke)
is there an uncensored img2vid website yet? im trying google framepack but for some reason its taking the prompt image as if its supposed to be the last, not to mention the slow-mo..
>>8627059right click layer>blending options>stroke(you can also go from the edit drop down to stroke but that's less flexible and kind of pointless)
or you can do the fancier thing and duplicate the layer, put a color overlay on the one that's below and shift it down and to the right by a few pixels.
there are other ways but those are the easiest.
I'm sure a 5070 ti will be a big jump over my current 3080, but will it give me headroom to play video games on the side while generating 1024s?
>>8627005Not really about the model but about the artists. Resolution says it's noob anyway. Without metadata you can't know what artists those are. Go through danbooru and try to find someone similar.
>>8627067I can play and gen on my 4070ti super so I think you would be fine
>>8627067A lot of games don't really vibe with how python handles GPUs desu
I couldn't even play Daggerfall Unity when I was baking lmao
>>8627067I can play stuff like Total War Warhammer 3 on a 4070 ti super while genning and it's fine as long as I'm using tiled vae encoding and decoding. Most things are okay as long as I don't hit the 16GB ceiling and drop to 2fps.
>>8627026>nai inpainting.Maybe LAX will blow some compute on an inpainting model for v2, who knows
that was one thing they didn't do for v1 so I guess it'd be appropriate for their studies or whatever
>>8627068It looks like ChatGPT.
>>8626643The example on the civitai looks washed out compared to the lowres doe... But yeah it's still more accurate but also maybe too much? When I look at it and how little it changes the image, I feel like maybe just a traditional esrgan upscale would work just as well. At which point there's not that much meaning in upscaling.
>>8626942I don't usually make cookies but I tried it once and it tasted pretty good so I can see why people would eat that.
>>8627128I thought all sora gens come with pony-tier sepia?
>>8627147Give it a try first. In my experience multidiffusion is superior but it hallucinates so I switch to controlnet when that fails. I do 0.8 end step and 1 control weight and I rarely get hallucinations. Without loras you can lower your end step to like 0.6 without issue.
"Male legs" anon was right. Thanks again.
>>8627153>>>/v/712654531>>>/v/712654589I think they prompt a different style.
>>8627158What was your experience with it?
>>8626756>cd tThanks for letting me know about this, seems very useful, though I've been testing values and still can't get accurate colors
>>8627171Was trying to gen a pic of me cuddling with my wife and I didn't want to look like a crippled stump.
>>8627172Try playing with saturation2 in txt2img for the greatest effect.
I'm looking for a style similar to fua1heyvot4ifsr (unfortunately there is no lora and using the artist tag doesn't work much at all). I love the colors of their drawings, like that one: https://files.catbox.moe/54mzb9.jpg
One massive upside of NAI is that it makes good hands by default. No painful controlnet inpaint suffering required.
>no ayakon
>no cathag
>no highlights
it's so over it hurts...
>>8627222you're in the wrong thread
where is the rest of it, you forgot your box anon
>>8627224Nice. Out of frame censoring is my new favorite tag.
>prompt for 3 girls
>get just 1 or 2 most of the time
>>8627252>>8627250Do nerdy girls wax? Would be cuter with pubes. But also still cute.
>>8627147>The example on the civitai looks washed out compared to the lowres doeNever go by the examples on civit. Remember the image used by vpred 1.0? That said, it's a crisp af upscale and can really pop details or smooth things out depending on the sampler you use. You can try this
>>8627155 but what personally works for me is
weight: 0.5 - 0.6
starting step: default
end step: 0.85 - 0.9
Allows you to crank up denoise to really iron issues out.
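In case it's not obvious what those fractions actually do: here's a rough sketch (a hypothetical helper, not code from reForge or any controlnet extension) of how a start/end fraction maps onto concrete sampler steps. The function name and rounding are my own assumptions.

```python
def control_step_range(total_steps: int, start: float = 0.0, end: float = 0.9) -> range:
    """Return the sampler steps on which the controlnet is active.

    start/end are fractions of the schedule, like the 0.85-0.9 end step
    quoted above; rounding behavior here is an assumption for illustration.
    """
    first = int(round(total_steps * start))
    last = int(round(total_steps * end))
    return range(first, last)

# With 30 sampling steps and end step 0.85, control is applied on the
# first ~26 steps and the last few steps run unguided, which is what
# lets you crank denoise up without the controlnet fighting it.
active = control_step_range(30, 0.0, 0.85)
```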
>>8627254Not all nerds are unkempt.
>>8627257ckpt: https://civitai.com/models/1595884
lora: https://civitai.com/models/1678888/illustrious-swirly-glasses-black-delmo-aika
masterpiece, best quality, high contrast,
1girl, simple background, blush, sitting, facing viewer, wide hips, white panties, sweat, <lora:Swirly_Glasses_Delmo_ilxl_v1.0:0.8>S_G_D, brown hair, short hair, (coke-bottle glasses:1.2), s_clothes, red ascot, black dress, white underwear, white thighhighs, excessive pubic hair, covering face, embarrassed, pussy, clitoris, skirt lift, sitting, pussy focus, from below, panties aside, pussy focus, close-up,
>>8627261>comfyDelicious. Thanks anon.
https://files.catbox.moe/5uakfu.png
https://files.catbox.moe/i9shbm.png
https://files.catbox.moe/uab6y0.png
https://files.catbox.moe/onvjf0.png
>>8627271reminds me of the good old days
>>8627269I don't even like cunny but her thighs in that first pic look good. Must be from fishine.
>konanOh you're him aren't you?
>>8627236>don't prompt at all>gens keep appearing in my folderwhat is going on?
>>8627348every copy of stable diffusion is personalized
>>8627260DDPM normally but if there's a particular style that looks better smoothed out, then I use Kekaku Log.
>>8627281>Oh you're him aren't you?Konanbros.. should we take this personally?
>>8627197ok bwo, might give it a try, no promises tho
I'm guessing there's no tag for vertical/horizontal variants of inverted nipples is there?
>>8627520did you try horizontal_inverted_nipples?
A while back someone in this thread said that modern photoshop is impossible to pirate, so I didn't even bother to try.
Until today, when I downloaded PS 2025, installed it with the ethernet unplugged as per instructions, then blocked PS from accessing the internet with the firewall and that was it.
>>8627577Who told you that, because that's completely retarded
>>8627577>retard said modern PS is impossible to pirate i suspect said retard either didn't follow the instructions or got one of those badly repackaged portable versions that stop working after a while (until you add adobe shit to your hosts file)
ironically it's even easier to pirate on mac
Man, it's incredible to think how far we've gotten and yet we are so behind tech like 4o now. No one other than the megacorps have the money to train such a model because it's also a giant LLM. And if a 4o level model came out it'd probably be too big and expensive to continue pretraining on booru unless you're a giant megacorp too, not to mention the potential issue of catastrophic forgetting if you solely train on booru and don't have anything similar to the original dataset used to train the LLM. The dependence on corpos is truly grim.
are there any coloring controlnets for noob?
>>8627639Isn't that just canny edge? What would a "coloring" controlnet do?
lil bro forgot this isn't /aids/
>>8627269Thank you for idol cunny anon. I will contribute with my own cunny since I've been slacking.
https://files.catbox.moe/0t2cwr.png
Man, I went really hard on this one but it looks really off, maybe turning groids into slime-like creatures wasn't a good idea after all
>>8627821i always thought the blue man group were pretty overrated
>>8627821smurfs be wildin' fr :skull:
>>8627577Could you step by step it? How did you even install photoshop without a subscription?
>>8627577that's a lot of work compared to installing gimp 'nonie.
>>8627607That's how it's always been with any technology. The saving grace is that everything being so expensive also means it's not profitable. AI is a massive bubble right now propped up by governments and when it pops we'll see prices become more realistic and someone will try to reduce the cost of compute finally (probably the chinese since they did 48gb 4090s). However the palantir/anthropic developments make this whole issue particularly grim.
>>8627821I like it. Maybe feet are too small? I mean with how much closer they are to the viewer compared to thighs, it's just weird that they are so much smaller, you know?
>>8627856Funny, they were a little bigger on the first pass but I thought they should be smaller so I made it that way, I am bad at telling things apart with that kind of perspective
>>8627532Nope and now I have, doesn't work unfortunately.
>>8627821Looks fine to me anon.
>>8627197https://litter.catbox.moe/2btuysieozl6bxnw.safetensors
>>8627233I'd make the stroke a little thicker
>>8627899nta, but wow that's fast bwo, mine's still bakin
>>8627860time to learn fundies
are v1.0+29b and 102d still relevant or are people using better models?
I have yet to see a better Noob model, so no.
>>8627962I really should, there have been many, many times where I had to discard very interesting ideas or gens just because I couldn't conceptualize what some parts should look like
>>8627976Those are still relevant and fine but if you really feel like trying out something else, give r3mix or LS Tiro a try
>>8627976I switched to using base vpred 1.0 exclusively
>>8627984NTA but never knew about r3mix, seems to do gens somewhat better than epsilon like it says, considering that's what i was using previously
>>8628090yeah, r3mix is solid, I used it for some gens, all my mixes worked well there
>>8628031Me too. I thought I was liking 102d at first but after testing a bit more it felt more limited than the base model, even if the base model is more finicky sometimes. It's not a bad model but doesn't match the things I like genning.
i still don't get how people are genning on base
even with loras it just looks stylistically shit
I use 102d, but not sure if there's something better atm
>>8628169must be shit loras then since they're the ones giving you style
>>8628169>it just looks stylistically shit? use artist tags
>>8628211duh, have you seen how they look on 1.0?
>>8628215Too accurate, I know. Needs a bit of nyalia and very awa slapped on top.
>>8628222"it's a more accurate model"
>>8628225welcome to 4chan
>>8628169pick better loras? shitmixes are literally base noob with loras merged in it. base loraless noob is atrocious though
>>8628299loraless base can look good, depends on artists
>>8628310we probably have different definitions of "good"
i've seen anons post examples of "good" loraless noob gens before and to me it looked awful, melted and fried.
>>8628314>loraless noob gens before and to me it looked awful, melted and fried.nta but depending on how you define "loraless", this pic would also be usually counted as one, do you find it super awful, melty or fried?
>>8627976Started searching for new artists to mix so I went back to 291h as it's the best of both worlds from 29+1 and custom. More honest to the artist with just as much control. Ended up just staying with it again. Not sure why I even stopped using it. Probably just new thing autism and got a lucky gacha with custom once.
>>8627821it is a better idea
>>8627984better than orks too
Wall of text about negpip prompting.
I did some experimentation since there are multiple ways you could prompt with it. For instance, if the goal is to have the subject wearing a white gothic dress, you could use the following prompts (and more I didn't test).
gothic dress, (black dress,:-1.0)
gothic dress, (black dress,:-1.0) white dress,
(black:-1.0) gothic dress,
(black:-1.0) gothic dress, white dress,
(black:-1.0) gothic dress, white gothic dress,
white gothic dress, (black dress,:-1.0)
white gothic dress, (black gothic dress,:-1.0)
(black:-1.0) white gothic dress,
And then you can also test with different colors as the theme to make sure it's stable. For instance, aqua theme. This is what those prompts give, with the first column being just gothic dress.
Generally and unsurprisingly the model seems to not understand what things mean the longer a comma separated segment is. So if you want to subtract the concept of blackness from the dress, you can't just subtract black, you have to subtract black dress, and subtracting black gothic dress is not as effective. Though "white gothic dress, (black dress,:-1.0)" interestingly performed the best in terms of making everything white, while "gothic dress, (black dress,:-1.0) white dress," had more bits of the outfit in black. It makes sense why it might do that, since tag segments are sometimes interpreted as applying to different things on the image and may not necessarily describe the same thing. So she might be wearing both a gothic dress and a white dress that's not black, but not necessarily a white gothic dress, which the model might think is entirely white.
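For anyone who wants to eyeball how these prompts split up: a toy parser for the `(text:weight)` syntax. This is NOT the actual negpip extension code (that hooks into the webui prompt parser), just an illustrative regex sketch where everything outside a weighted group defaults to weight 1.0.

```python
import re

# Matches "(segment:weight)" groups, e.g. "(black dress,:-1.0)".
WEIGHTED = re.compile(r"\(([^():]+):(-?\d+(?:\.\d+)?)\)")

def parse_segments(prompt: str) -> list[tuple[str, float]]:
    """Split a prompt into (text, weight) pairs; unweighted text gets 1.0."""
    segments, pos = [], 0
    for m in WEIGHTED.finditer(prompt):
        before = prompt[pos:m.start()].strip(" ,")
        if before:
            segments.append((before, 1.0))
        segments.append((m.group(1).strip(" ,"), float(m.group(2))))
        pos = m.end()
    tail = prompt[pos:].strip(" ,")
    if tail:
        segments.append((tail, 1.0))
    return segments

# One of the test prompts from above splits into three segments:
# "gothic dress" at 1.0, "black dress" at -1.0, "white dress" at 1.0.
example = parse_segments("gothic dress, (black dress,:-1.0) white dress,")
```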
I'm really sad. When I prompt some artists, they increase banding. Some are fine. Is there like an anti-banding lora or something? Or is there some way to prompt it out without affecting style? Or maybe some kind of ComfyUI snakeoils?
Speaking of lora style stabilization for base Noob, I kinda want to try getting like a thousand random top rated images and baking it to see how well a super diffuse lora would work for that "stabilization".
>>8628739isn't that just "very awa"
>>8628747Maybe but I think loras tend to bake out into generic nothingness with diverse datasets more than something like direct baking in the dataset would.
That's more like well, those already existing stabilization loras, but I just don't trust civit bakers.
>>8628754That face is kinda Wokadaish, what artist is that?
>>8628756nanameda kei, (ciloranko, wamudraws:0.5)
https://litter.catbox.moe/8kkflndj2awokktd.png
was gonna post it for
>>8628314 since nanameda barely works on any merges, but I'm not really sure where the melted/fried boundaries are
>>8628761>>8628779Hoping it's also the last.
>>8628761>>8628779>>8628791His gens are fine, he just needs to take and post them in the right place. Go here, anon:
>>8627272
>>8628764that certainly looks very blurry. can't read noodleshit so not sure if its due to some wrong settings or the nolora noob is the issue here.
>>8628796>>8628791Why don't you provide some constructive feedback instead?
>>8628803feedback on what
>>8628791Made this one for you pal cause you are one massive...
>>8628829nigga, anyone that sees 12 fingers and goes "Yeah. This is fine." is beyond feedback.
>>8628801it's an intentional part of the style in this case, those three aren't exactly known for sharpness
same prompt with "jadf, gin moku" instead
>>8628878one more
I'm done
>>8628887yeah I wonder https://danbooru.donmai.us/posts/9253382
maybe the artist just likes higher contrast
>>8628878>>8628885there's something *off* about those, I can't really describe what exactly. loraless noob's style looks like some sort of withered scan with fucked up contrast, doesn't look clean. i mean if it doesn't bother you it's alright, I just prefer it with a bit of loras mixed in.
>>8628889Fried doesn't just refer to high contrast, you know.
>flat greyish image
>fried
what did 175%srgbmonitorbro mean by that
>>8628889What about the wobbly linework, melty/artefacted eyes/details and white glow around characters
sorry, took a while to figure out what you guys want
>>8628901Isn't that just from low res? Looks like a raw gen. Other than the glow, but it's not in the other pics.
>>8616235> https://rentry.org/hgg-lorathank you
how good is 7s/it for bs3+gradient+memefficient?
and how to prevent te training in kohya gui?
with --unetonly it still says te modules: number at the beginning of the training
Finally, a melty here in /hgg/
>>8628923it's base noob v-pred
(masterpiece, very awa:1.2), absurdres, very aesthetic, ai-generated, shiny skin
>>8628925think we're being reasonably civil so far
>having to scour thatpervert for waifu lora pics because they aren't anywhere else
uegh
>>8628943could always bake your own
Scraped 25k top score images
Filtered 1/10 with resolution requirements
Starting to clean it up
It's gonna be interesting
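The filtering pass above is basically a two-condition predicate over the scraped post metadata. Sketch below; the dicts follow the danbooru JSON field names (score, image_width, image_height) but the thresholds are made-up examples, not what this anon actually used.

```python
def keep_post(post: dict, min_score: int = 100, min_pixels: int = 1024 * 1024) -> bool:
    """Keep a post if it clears both a score floor and a resolution floor."""
    return (
        post.get("score", 0) >= min_score
        and post.get("image_width", 0) * post.get("image_height", 0) >= min_pixels
    )

posts = [
    {"score": 250, "image_width": 1536, "image_height": 2048},  # passes
    {"score": 250, "image_width": 600, "image_height": 800},    # too small
    {"score": 40, "image_width": 2048, "image_height": 2048},   # score too low
]
kept = [p for p in posts if keep_post(p)]
```

The ~1/10 survival rate from the post above suggests the resolution floor alone kills most of the top-score pool.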
/hgg/ approved stabilizer lora... :prayge:
>>8628972>>8628971but i think i gotta start with like 10k, top rated posts aren't THAT good lel
>>8628970scrape the top 25k off civitai, that's where it's really at
>>8628973To be serious though I think it'd probably turn out worse than manually cherry picking a few dozen images to train on.
>>8628970>>8628971didn't some namefag already do this
it was okay style-wise but had heavy bias in compositions and backgrounds
Is there such a thing as RL in the diffusion model world like in LLMs? In text gen, it is usually understood that a model needs to undergo RL before it's usable as a chatbot. But to me it feels like something like vpred never underwent such a step, or was undercooked if it did have that step. In the first place it's weird to call something like Noob a finetune. For LLMs finetunes do not add knowledge, pretraining is what bakes in knowledge.
>>8628981a lot of people did it
but i mostly just want to experiment and see how my config does
definitely not today though, i have stuff to bake
>>8628970Hope you like big tits.
>browsing danbooru for inspo
>see images like https://danbooru.donmai.us/posts/9479769
>tfw we are still far away from a model that can do something like that without heavy guidance and handholding using various methods
>>8629006>muh butiful backgroundfor me it's poses more complicated than 1girl standing, especially regarding stuff like feet not getting mashed into garbage, and context
>>8629006That's a dall-e 3 gen
>>8627197had my fun with this one, but would like to test it more to see if it needs a rebake
>>8629010You can actually prompt that stuff though and get lucky with your gens. But it is literally impossible to pure prompt a complex scene like that booru post. If you try to do multiple people, buildings, the cityscape, the heat haze effect (which does in fact have a tag), the perspective, you will never ever get an image like it unless you're god's chosen one with seeds.
>>8629029I was a launch user of dalle 3. It could never do that, though perhaps it could come close with a lot of work, but it definitely won't be as coherent still. Maybe today's 4o could, idk, haven't tried that much.
>>8629036OK but why would I use an anime porn model trained on a dataset that is 90% monochrome backgrounds to make that kind of picture
>>8629040You miss the point of my post. The point was the complexity and coherency. If I saw an incredibly complex porn image I might've posted it but I just happened to see that one and posted that instead.
>>8629036Get a procedural generative cityscape addon for unreal engine and i2i or CN that shit. Cityscapes suck for AI anyway and will suck for a long time, because buildings are too manmade: precise and straight and mathematical. Nature is much more of an AI thing.
>>8628970>saving a bunch of fun looking artists along the wayhey that's nice
>go on danbooru
>order by score
>see tons of garbage
Man.
>>8629061i think people with resources that might know how to do it just dont give a shit about image gen beyond grifting for research funds with training on an imagenet-tier dataset + whatever synthetic MJ garbage and claiming +1% improvement
in image gen the base models are so cucked that pretty much all "finetunes" have to be retrains, with the hope that training off them transfers at least some good knowledge, and while noob had quite a bit of gpus relative to everyone else they were also just amateur enthusiasts
>>8629061I'm on pic 600 of the filtered dataset and I selected 96 for baking so far so it is how it is
>>8628986>For LLMs finetunes do not add knowledgeThat's just a retarded saying when it's the exact same process as finetuning, just more focused. "RLHF" is just overbaking on a particularly formatted (usually corposlop) dataset, same as any other finetuning.
>https://civitai.com/models/1555532
>makes 3 loras and uses them together to try and stabilize vpred's colors
Jesus.
what's unusual about that
>>8629095Why don't people just use CFG++ samplers, it's not that hard.
>>8629098Or just use nothing like
>>8628764
>4200 images
>only saved 317
lmao i overestimated danbooru
there's so much acoshit in the top scores
>>8629130lmao, i wanted to warn you about that but you was so confident i assumed you realized that
>>8629227hey i still have some dataset
i'll bake it and see
>>8629130https://konachan.com/
>>8629231actually i think danbooru's "rank" algo is much better than just score, but it's changing over time and i don't think you can go back in the past
https://danbooru.donmai.us/posts?d=1&tags=order%3Arank
you can also do this and slide through time yourself:
order:score age:>9day age:<12day
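Sliding through time with those age metatags is just string templating over the window. A tiny helper (my own, not any scraper's API) that reproduces the query quoted above:

```python
def score_window(days_back: int, window: int = 3) -> str:
    """Build a danbooru top-score query over a sliding age window.

    score_window(9) reproduces the example from the post above:
    'order:score age:>9day age:<12day'. Step days_back to walk backwards.
    """
    return f"order:score age:>{days_back}day age:<{days_back + window}day"

# Walk back a month, three days at a time.
queries = [score_window(d) for d in range(0, 30, 3)]
```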
>>8629052>looking at the styles via saucenao>half of them are like 6 image artistswhy is it always like that
>>8629255>turns out i saved a bunch of nyantcha and ratatatoh god oh hell...
am I retarded or are both regional prompter and forgecouple horribly broken on reforge
>>8629267tasty tasty bbc?
>>8629269Comfy here but I read some reForge complaints before about prompts leaking and stuff, starting about three months ago. Not sure if people don't use it enough to make a big deal out of it, or they noticed and stopped using that stuff because of it.
>>8628986There's tons of papers on this. Just make sure you have your 4xH100s ready to go.
https://arxiv.org/abs/2401.12244v1
https://arxiv.org/abs/2311.13231
>>8629269They never worked to begin with and nobody ever actually used them effectively. People would rather generate generic 2girls and inpaint the character onto them than deal with the shit that is regional prompter. It's only being brought up as a cope after NovelAI solved the multi-character issue. Local needs a better solution.
>obscure ass artist i can't find anywhere but cripplebooru and r34
weird
>>8629326works fine on comfy
>>8629326regional prompter (didnt try forge couple, from what I understand it's sorta similar?) does work for very basic compositions/poses where it's easy to assign a character to one region of the canvas, for anything other than that it's practically unusable
>the first epoch of the stabilizer immediately makes artist on base go from complete garbage to mostly working
What the FUCK is wrong with base vpred lmao
>>8629336>What the FUCK is wrong with base vpred lmaoeverything or so I have read
>>8629336teh fuck is a stabilizer
anyway naaah it might help some but it still doesn't look fully proper to the artists, i prefer my shitmixes
i'll let the memers keep base vpred and go onto baking more shit, i have like 25 new datasets
gyod damn kukumomo had like 63 styles in total
that's why loras are useful
>>8629404Roropull also has two main styles, and the Noob version of him is fucked altogether. I'm baking one later to stabilize.
Alright, I'll go back yet once again to base noobvpred and do some nice gens to share later
>new dataset and the better lora setup completely fails to bake a chara i baked before properly
Huh.
>>8629536Nah the character is white.
Knife ears are made for ojisans.
https://files.catbox.moe/h4s569.png
>>8629718>mature female in catbox... >:(
Good quality as always, bro >:(
yeah this is a young male thread keep it moving
https://files.catbox.moe/v2t2cw.png
/hgg/ noob vpred stabilizer lora where, sirs... please...
>>8629801First you must prove that you are able to do a nice good looking gen without any lora or snake oil at all
>>8629801https://civitai.com/models/918037/artist-nyalianoob-10v-pred-05
>>8629817yes
the internet is down in israel
>>8629787is that a stomach bulge? does he have a massive dildo stuck back there?
>>8629785A-anon, about that..
>>8629832>>8629835Oh yeah, I tend to forget. If it's a mature male, it's all good, keep at it bros.
>>8629787>>8629718What do we think about the recent change of the otoko no ko tag for trap? Is it based or cringe?
>>8629848shoulda been femboy no one uses trap anymore
>>8629848don't really care
the admin also wanting to change paizuri to titfuck is stupid though
makes me think he's trying to make the site more mainstream friendly
>>8629850>paizuri to titfucklmao fucking retarded
>>8629841Based.
>>8629848>>8629849Yeah it's dogshit. Kinda sprung up on me when I was searching for some tags and I saw "trap" in the side suggestion with sooo many entries. Thought I was going nuts and never knew it existed and then realized it changed from otoko. I agree, anon. If you had/wanted to change it, femboy would have been better. Trap is ol' timey boomer terminology, speaking as a boomer.
>>8629848how did this go through when more people disliked it than liked it
https://danbooru.donmai.us/bulk_update_requests/40541
>>8629849>>8629854I'm an oldfag and I still remember when we used to use trap to refer to them and I still like it. Besides that, it pisses me off, a little, to even think it was changed (unilaterally, 11 years ago) just because it offended a certain group of people. Also i think femboy is mostly associated with 3DPD so I don't like it. For me it's either trap or otokonoko. Let's see how this ends, I'll be lurking that thread on danbooru for a while. Good night anons.
>>8629848changing a jap term to some tranny-tainted westoid le meme shit is meh
>>8629866>tranny-taintedThe term predates woke culture by at least twenty years
>>8629874reading comprehension
>>8629875Communication goes both ways, if your message is not understood you can try rephrasing it.
>>8629885nta but even as an esl i understand what >tainted means in this context
re: loras for already "working" artists, picrel's kukumomo base on the left and one of the lora epochs on the right
some of these inbuilt artists really are unfocused by default, even if they more or less work
i'll confirm with the roropull i already baked, and i might just go on danbooru and bake a bunch of these...
>>8629959Maybe that's less visible here stylewise but am I schizoing out when I say the lora looks higher res regarding lines and details than base in both these examples? Is that just the 512 training on the base model coming out?
>>8629889The problem is that this logic is as stupid as getting banned on tv for using the okay symbol or getting hate because you like looking at rainbows after rain. I will NOT concede my language to retards who use it for their own shitty purposes, both English and Japanese.
>>8629959>>8630029Just to be clear this is artist tag as the activation tag correct? I started noticing anatomy mistakes when I did this in my current bake and I'm wondering if that was the problem. The dogma was always to avoid retraining artist tags...
>>8630058Yeah, I baked them with their appropriate artist tag.
>>8630073Hmm maybe it's just v/double v shitting on me as usual? I don't want to have to look through each epoch. What a pain.
>>8630058Depends on the existing knowledge, it can make things better or worse. And either way makes the training go way faster.
Taking the 102d training wheels off and swapping to noob1.0+29b has me all kinds of filtered, but the unpredictability of it has also been great.
I know it's been asked a thousand times and I'm sorry for asking again, but are there any loras I should be using, particularly for the weird contrast, that won't sterilize it back to being 102d again?
>>8630029>Is that just the 512 training on the base model coming out?That's a chroma thing, not a noob thing afaik? Probably just poor training settings for v-pred on their part
>>8630120>particularly for the weird contrastUse literally any good lora that was trained on vpred.
>>8630120That's what loras do, the only difference is you get to choose which one to apply and get to limit its strength to the minimum required. Ideally pick one that somewhat matches the style you're going for.
Some artist prompts also stabilize in a similar manner, so if you have a mix you might not even need it.
Styles
kukumomo - https://files.catbox.moe/txcwbb.safetensors
tedain - https://files.catbox.moe/odgdzu.safetensors
bee haji - https://files.catbox.moe/4tn7u3.safetensors
haiki (tegusu) - https://files.catbox.moe/nwu28y.safetensors
kei myre - https://files.catbox.moe/qqesvw.safetensors
roropull - https://files.catbox.moe/5nqfyo.safetensors
>>8630227I just looked up haiki on danbooru. What a surprise.
Also re:re:re:re: on choosing baking by steps or epochs.
A couple of graphs on the bunch of stuff I just baked, these are values for the epochs I chose as best (and converged).
Pick what you think looks stabler
>>8630262I guess you could go by steps per image but oh gee that graph is identical to epochs just use the damn things
>>8630272Well yeah, as long as your image counts are similar. Step count is a product of dataset size * repeats * epochs.
>>8630289You forgot about batch size.
>removed a couple of images from the dataset
>chara goes from total gigafailbake to kinda working???
It's still not as good as the old lora despite a bigger dataset but I'm beginning to think maybe this character in particular doesn't benefit from shuffling captions
>>8630291Batch size doesn't increase actual step count, just processes two or more images in a single step. But it's another thing to consider.
>>8630272Train one style on 50 images and another on 500. In this case step count will tell you how much you're actually baking, while epochs will make the latter lora take ten times as long.
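The arithmetic both sides are arguing about fits in one function. Sketch of how the numbers from the posts above combine (standard kohya-style accounting, but the rounding is my assumption):

```python
import math

def optimizer_steps(images: int, repeats: int, epochs: int, batch_size: int) -> int:
    """Optimizer steps for a run: (images * repeats / batch) per epoch, times epochs."""
    return math.ceil(images * repeats / batch_size) * epochs

# Same epoch count on a 50-image and a 500-image dataset gives a 10x
# difference in steps, which is the "latter lora takes ten times as long"
# point from the post above.
small = optimizer_steps(images=50, repeats=10, epochs=10, batch_size=2)   # 2500
large = optimizer_steps(images=500, repeats=10, epochs=10, batch_size=2)  # 25000
```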
>>8630296Anon what the fuck do you think these loras are
It literally shows you the random inconsistent step count jumps that make it a shit metric that aren't there in epochs
What else do you need if not data
sloppa dump time again
https://files.catbox.moe/al7nhk.png
not so lewd
https://files.catbox.moe/k9mb4i.jpg
https://files.catbox.moe/ibig97.jpg
>both threads dead or consumed by schizophrenia
did it just die out or did people move elsewhere?
>>8630306I don't have the time or willpower to examine and explain in depth exactly how or why you're retarded.
Look at a loss chart. A regular loss chart, granulated by steps.
Your epochs? Those are all at fixed step intervals. You can actually figure out where and when those epochs exist on that loss chart. Now, if you take a real good look, you'll realize that there are tons of peaks and valleys on that loss chart that occur on steps that are NOT shared by epochs starting/ending. If you take an even closer look, you might even notice that you can even predict spikes/dropoffs by step count.
How very incredibly curious that is.
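Reading a loss chart the way this post describes comes down to smoothing the raw per-step losses and looking for values that jump well above the trend. A minimal sketch with an exponential moving average; the 'loss' list is synthetic data for illustration, not a real training log, and the 1.5x spike threshold is an arbitrary choice.

```python
def ema(values: list[float], alpha: float = 0.1) -> list[float]:
    """Exponential moving average, seeded with the first value."""
    out, acc = [], values[0]
    for v in values:
        acc = alpha * v + (1 - alpha) * acc
        out.append(acc)
    return out

def spikes(values: list[float], smoothed: list[float], factor: float = 1.5) -> list[int]:
    """Steps where raw loss jumps well above its smoothed trend."""
    return [i for i, (v, s) in enumerate(zip(values, smoothed)) if v > factor * s]

# Synthetic loss curve with a spike at step 3.
loss = [0.12, 0.11, 0.11, 0.30, 0.10, 0.10]
bad_steps = spikes(loss, ema(loss))  # flags step 3
```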
>>8630320I'm doing /u/ gens at the moment
>i don't have the time to look at a simple chart
so shut up nigga lol
>>8630262>Also re:re:re:re: on choosing baking by steps or epochs.what are you people doing in this thread
>just look at this chart!
>the chart is meaningless and completely misses the fucking point
waow
You are both retarded. Neither the epochs nor steps reflect how ready the lora is.
>a consistent pattern is meaningless because <schizoshit>
>>8630344the only thing loss is even useful for is checking for NaNs
we need to bring gans back...
>>8630345that you think occurrence is relevant while completely ignoring the metric that matters speaks volumes of how retarded you are.
Gradients don't give a fuck about occurrence. Gradients do whatever the fuck gradients want whenever the fuck they want. And they operate on steps.
>>8630350I want civit shitbakers to fucking leave.
Loss is only harped on as a meaningless value because it varies by model, dataset and what you're doing. Which is to say its precise numbers are meaningless without a lot of context. But when you contextualize it within a chart and by its values along it, it's no longer meaningless. You can actually see what is happening.
>headcanon
>nooo hard data is not relevant
It's okay bro, speaking big words will make you a big man.
>>8630360>I want civit shitbakers to fucking leave.what's stopping you?
what is the best updated model right now?
>>8630227>kukumomo 473the AI already knows the style tho?
>>8630320stay in /vn/ bucko
>>8630350On big pretrain runs when you can't actually overfit the model, loss, besides being an indicator of training going smoothly (pic related), may be a pretty useful metric. If you are seeing shit like this, you can immediately tell that something is fucked up.
One shouldn't really use it as a metric for tiny diffusion training runs at all, in any way.
>>8630360>GradientsWhy don't you actually try to look at them instead?
wow i hate inbuilt artists now!
uoh cunny
https://files.catbox.moe/cj1okn.webp
decided to train a new version of this lora
lora and toml is here
https://mega.nz/folder/47Yj3ZIS#klaoBwVZI_u5DbjmCjkqRQ/folder/Fm4RUbiT
alk didn't train at all i'll probably need to inspect my dataset for that artist
i was thinking of trying out that lora finetune extract but this lora took six hours on my normal settings for 1 epoch with the amount of images and i cant imagine how slow it would be with everything i need to do to make finetuning work on 8gb of vram without crashing immediately
>>8630461cunnychad.. what is the best model right now? I am using naiXLVpred102d_custom
>>8630461>finetuning work on 8gbyou probably won't make it without modifying the code
What's the meta for finetuning anyway? Last I tried it on ezscripts it just spat out a buncha errors, fork or no fork.
>>8630467i switch between r3mix and my custom shitmixes
for this x\y plot i used 102d_final because i forgot to switch models but they're similar enough it doesn't matter
>>8630470yeah i'll probably need to do some esoteric shit so im not planning on it any time soon
>>8630368>best updated modelnai v4.5
>localnoob v-pred 1.0 and shitmixes
>>8630474if you share the dataset, i'm willing to let it bake for a few hours on my 3090
>>8630387>no imageyou can stay here and I'll go back, deal?
>>8630483sure let me zip it up
>>8630484who?
>>8630461Uh I just baked a ohgnokuni lora myself, but I guess that yours is more efficient
>>8630486sounds like a win win to me!
>>8630490>trapped myself in /vn/wait fuck NOOOOO
>higher res makes weird necks
>but regular res doesn't really make the artist look right
it's...
>>8630489i'm pretty happy with the style replication this go around but a more focused lora would probably still be better
uvh i guess i could finally try going back to highresmeme
>>8630461Why bake all of them into a single lora? I will not remember what's included.
>>8630513i will
also doing retarded shit is fun
>>8630517Whatever, I'll just copy it six times and name them after each artist trigger. Good job on making it so small.
>>8630522thats because it's only 8dim
you don't really need more for 99% of lora applications
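The small filesize follows directly from the rank: a LoRA stores two skinny matrices per adapted layer, so parameter count (and bytes on disk) scales linearly with dim. Toy numbers below — a single 768x768 projection, not real SDXL layer shapes:

```python
def lora_params(in_dim, out_dim, rank):
    # LoRA factorizes the weight delta as B @ A, where
    # A is (rank, in_dim) and B is (out_dim, rank)
    return rank * in_dim + out_dim * rank

# one hypothetical 768x768 attention projection at a few ranks
for r in (8, 32, 128):
    print(r, lora_params(768, 768, r))  # linear in rank
```

So going from 8dim to 32dim quadruples the size of every adapted layer for usually marginal gains.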
>double gen time
>with less detail
ugh maybe that stabilizer lora from anon needs another go
>>8630526Not for styles anyway, just for overly-detailed gachaslut clothing, guns, cars, etc. I know, but most people didn't agree last time it was brought up.
>>8630528gen time cannot be the lora's fault beyond the extra vram cost, equal to its filesize
>>8630227do you use any tags for tedain?
>>8630541nyo i mean that highres is that, maybe i should try the lora instead
it's still not that good though, eh, i'll have to experiment a bit with the highres
>>8630548ye the tags on the left is what i trained with
u can see the top tags in webui too btw
that tedain is mostly just a stabilizer though, base tedain is okayish enough
>>8630461Thanks. I will be using this exclusively for hags, fyi.
>>8630563i remember liking feral lemma for hags back in the day
have fun
>>8630483https://mega.nz/file/o7pDUSDR#vdl2j9aPy257eVBHOMBUqCu9crck0NBzYyXR9b2ocHI
You know it's kinda funny but the weird anatomy melties from highres mostly happen in the most basic prompts like 1girl standing and portraits, less so in actual sex gens
I kinda wonder why
>>8630577There is probably more "stuff" for the model to fill the picture with without needing to hallucinate bifurcated torsos.
>>8630566sweet, but you didn't need to include .npz files
also that's a lot of lowres images
>>8630594i knew i was forgetting something
i will blame it on not having had breakfast yet
also yeah i pretty much just didn't bother with filesize other than with lokulo who i did put in the effort to upscale
there's probably some shitty 200x500 images in there and also some old bad tagging experiments from months ago
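Those 200x500 strays are easy to sweep for before a bake. A sketch with a pure size check plus an optional Pillow-based directory walk (the 768 minimum side is an arbitrary example cutoff, not a recommendation):

```python
from pathlib import Path

def is_lowres(size, min_side=768):
    """True if the shorter side of a (width, height) pair is under min_side."""
    return min(size) < min_side

def find_lowres(root, min_side=768):
    """Walk a dataset dir and collect undersized images (needs Pillow)."""
    from PIL import Image
    bad = []
    for p in Path(root).rglob("*"):
        if p.suffix.lower() in {".png", ".jpg", ".jpeg", ".webp"}:
            with Image.open(p) as im:
                if is_lowres(im.size, min_side):
                    bad.append(p)
    return bad

print(is_lowres((200, 500)))  # the 200x500 stray → True
```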
>>8630587Most likely, I mostly saw that with necks and torsos.
eh naah highresfix is not that good for my purposes
oh well
i'll just gacha more
>>8630612>eh naah highresfix is not that good for my purposeswhat are you trying to do?
>>8630614Well, I just don't think the clarity is nearly the same as just genning higher res. I can stand the occasional body weirdness for not having to fix the entire image.
>>8630617Oh, definitely agree
Scrapin'
>>8630325The hallmark of all skilled people is making a complicated thing look/sound simple.
>>8630625you could get it much faster from huggingface with this https://github.com/deepghs/cheesechaser
>>8630632This is for scraping top images from >image artists, I don't think that'd be doable on that?
>>8630633you can scrape just the image ids and then pass the ids to the downloader
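The id-scraping half is one call to danbooru's public JSON endpoint; sorting by score and slicing gives you the "top images" list. A sketch — the cheesechaser handoff is left as a comment because I haven't verified that library's exact API, check its README:

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

def top_ids(posts, n=100):
    """Pick the n highest-scoring post ids out of danbooru post dicts."""
    ranked = sorted(posts, key=lambda p: p.get("score", 0), reverse=True)
    return [p["id"] for p in ranked[:n]]

def fetch_posts(tags, limit=200):
    # danbooru's public JSON endpoint; it caps at 200 posts per request,
    # loop over &page= if you need more
    url = f"https://danbooru.donmai.us/posts.json?tags={quote(tags)}&limit={limit}"
    with urlopen(url) as resp:
        return json.load(resp)

# ids = top_ids(fetch_posts("artist_name"), n=100)
# then feed `ids` to cheesechaser's danbooru downloader
# (see its README for the exact call)
```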
>>8630360>you are too stupid to understand>you are just dumb since you don't agree>you're wrong and I won't elaborate further>I'm the smartest person in the room>if you disagree you must be [boogeyman]When the fuck will you retards grow up? I'm so sick of this boring tripe every fucking thread on every fucking board. Just once I'd like someone to actually expand upon their knowledge and teach someone something rather than insist upon their superiority without proof. Fucking hell.
>>8630625I made a lora for him. Do you really need 20k? 100 hand picked images was fine but my lora might be shit.
>>8630647lmao
>>8630633this is just my retarded way of not having to click on 20000 artists on danbooru to see what i want to bake
different artist images for previews
>>8630653feels like #1 top image is bad for that because what if the image is old as shit and you prefer their newer style or vice-versa
>>8630658i mean yeah but it beats clicking a gazillion images or scraping and having to go through multiples
it's not like i'm gonna fomo artistidontknow4233 if i bake artistidontknow9564 instead
i guess it could be a filter like <check latest 25 images and take the one with the highest score> but eh that'd probably add overhead and shit
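The <check latest 25, take highest score> filter is barely any overhead — one extra request per artist against the same posts.json endpoint, and the selection itself is trivial. A sketch, assuming posts arrive newest-first as danbooru returns them by default:

```python
def best_of_latest(posts, window=25):
    """From a newest-first post list, return the highest-scoring post
    among the most recent `window` -- so a stale #1 from years ago
    doesn't win over the artist's current style."""
    return max(posts[:window], key=lambda p: p.get("score", 0))

# fake newest-first feed: ids count down, scores cycle 0..6
posts = [{"id": 30 - i, "score": i % 7} for i in range(30)]
print(best_of_latest(posts)["score"])  # → 6
```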
Uhh, that link is down bwo. Which scraper do you guys use?
>>8630664picrel was something gpt cooked but for regular stuff i've always used grabber (with a lot of filters)
and czkawka for first round cleaning
>>8630664I still use Grabber. Seems to struggle with Danbooru lately so I grab Gelbooru instead.
I need a lora that's capable of removing banding without affecting style...
>>8630461Based Harada gives them extremely wide hips.
>>8630695could probably punt a soccer ball through there
>>8630691>bandingi call that sovl
unironically tho do show an example, i'm wondering how you're getting that
There will be no picture. He is a schizo.
>>8630642there's not really anything to prove, it's just how loss works on a conceptual level. it's measuring the pixel space difference between the ai's denoised training image and the original training image at whatever timestep. that's great if you're training a model from the ground up and you're starting off with esoteric blobs of colors, because lower loss is gonna be better. i think that does also apply to styles to a certain degree.
but if you're training something like a character lora then it's almost entirely useless as a metric, because it's still measuring the pixel space difference. so the loss might be going down because it's learning your character, or it might be going down because it's learning whatever skewed style is in your character's training data, or it might be going down because it's learning to associate certain words in your prompt with specific compositions and poses, etc.
it just doesn't mean anything at that point.
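The metric itself really is just a mean squared error at some timestep, which is why it can't tell you *what* is being learned. A numpy sketch of the shape of it — real trainers predict noise or v rather than the clean image, but the loss has the same MSE form:

```python
import numpy as np

def diffusion_step_loss(pred, target):
    """Per-step training loss: mean squared error between the model's
    prediction and the target at some timestep. Any change that shrinks
    this number (character, style, composition) looks identical here."""
    return float(np.mean((pred - target) ** 2))

rng = np.random.default_rng(0)
target = rng.standard_normal((4, 64, 64))
print(diffusion_step_loss(target, target))        # → 0.0 (perfect prediction)
print(diffusion_step_loss(target + 0.1, target))  # uniformly off by 0.1
```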
>>8630597alright i forgot to change lr scheduler settings
>image with longneck due to highres
>lassoed the entire head
>moved it down hard, mild hand painting
>denoise at 0.5
>it just werks
I forget these models aren't as shit as 1.5
>>8630705https://files.catbox.moe/t9xqks.png
Ok, here is a minimal example with no loras or snake oils, it's pretty egregious here though it's visible on the other seeds too. Loras do help. But there are some inbuilt styles I actually want to use which loras mess with so I really just want a stabilizer.
>>8630709Bro we all know noob has issues, this is just one of the lesser talked about.
>>8630746I was just trying to bait you into telling me wtf banding is.
>>8630751I mean you could google it. It's not some made-up nu-term. It's just the artifact where smooth shading manifests as visible discrete bands; you can tell from the image I posted, it's pretty visible there.
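In a nutshell it's smooth shading collapsing into a handful of flat steps. A toy numeric illustration (not how the model produces it, just what the artifact is):

```python
def quantize_shading(values, bands=4):
    """Collapse smooth 0..1 shading values into `bands` flat levels --
    the 'visible bands' artifact in one line."""
    return [round(v * (bands - 1)) / (bands - 1) for v in values]

smooth = [i / 9 for i in range(10)]        # a smooth 0..1 gradient
banded = quantize_shading(smooth, bands=4)
print(sorted(set(banded)))                 # only 4 distinct levels survive
```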
>>8630762Are you telling those kino lines on some of my gens aren't meant to be there?
>>8630771Yeah, depending on the style. Wouldn't you agree that it'd be nice if you could have control over effects like these just by prompting? Actually, it's interesting that there is a tag for "banding", but it doesn't really work.
>>8630772>Wouldn't you agree that it'd be nice if you could have control over effects like these just by prompting?Now that you mention, yes, sometimes I like them, some others I hate them. I wasn't even aware that was a thing
>stable diffusion
>isn't stable
>>8630781unstable diffusion
>>8627899Sorry i'm late, I only saw your message now and the download isn't available anymore D:
>>8632795it has stealth metadata
>>8633364I see, so it's a vpred model with a non vpred lora, or am I tripping?
>>8633827STOP POSTING HERE YOU RETARDS