
Thread 107058480

354 posts 156 images /g/
Anonymous No.107058480 [Report] >>107058488 >>107062029
/ldg/ - Local Diffusion General
Discussion of Free and Open Source Text-to-Image/Video Models

Prev: >>107054044

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts
https://github.com/tdrussell/diffusion-pipe

>WanX
https://comfyanonymous.github.io/ComfyUI_examples/wan22/
https://github.com/Wan-Video

>Neta Yume (Lumina 2)
https://civitai.com/models/1790792?modelVersionId=2298660
https://nieta-art.feishu.cn/wiki/RY3GwpT59icIQlkWXEfcCqIMnQd
https://gumgum10.github.io/gumgum.github.io/
https://neta-lumina-style.tz03.xyz/
https://huggingface.co/neta-art/Neta-Lumina

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
Anonymous No.107058488 [Report] >>107058506 >>107058742
>>107058480 (OP)
ur a faggot ran. including yourself in the collage is a joke because otherwise you'd never get in it because your shit sucks
Anonymous No.107058494 [Report]
Blessed thread of frenship
Anonymous No.107058501 [Report] >>107058673
no more video in collage?
Anonymous No.107058506 [Report] >>107058514 >>107058517 >>107058527
Hey you included my pic thats cool

>>107058488
Who is ran? Is he a netayume fag?
Anonymous No.107058514 [Report]
>>107058506
>Hey you included my pic thats cool
nta but nah thats the other anons gen
Anonymous No.107058517 [Report]
>>107058506
Debo uses this as a dog whistle because he blames one anon for being exiled.
Anonymous No.107058527 [Report] >>107058544
>>107058506
>Who is ran?
a nigger

>Is he a netayume fag?
yes
Anonymous No.107058538 [Report]
the_drot and xixxix from civitai are the best posters in this Long Dick General
Anonymous No.107058544 [Report]
>>107058527
It's a shame a netayume fag is the OP, since I appreciate him using my pic
Anonymous No.107058546 [Report] >>107058700
>>107058499
I think they just did a quick and dirty change so it's the right moment to download anything before it gets completely locked down
Anonymous No.107058551 [Report] >>107058645 >>107058727 >>107058748
babe wake up, nvidia made an edit model
https://huggingface.co/nvidia/ChronoEdit-14B-Diffusers
Anonymous No.107058552 [Report]
>caring about the faggollage
Anonymous No.107058645 [Report] >>107058668 >>107058671
>>107058551
>only trained on synthetic data
useless

Now this one though: https://github.com/baaivision/Emu3.5
claims to be better than nano banana
Anonymous No.107058665 [Report] >>107058699 >>107058724 >>107058735
examples
Anonymous No.107058668 [Report]
>>107058645
LEAVE THE MULTITRILLION COMPANY ALONE!
also is anyone else getting 1-2min loading times on captchas?
Anonymous No.107058671 [Report]
>>107058645
>Now this one though: https://github.com/baaivision/Emu3.5
>claims to be better than nano banana
it's a big motherfucker though (32b), maybe it could be run if we go for RamTorch or that Nunchaku thing
Anonymous No.107058673 [Report]
>>107058501
not the op but putting video in severely degrades the quality so it's not worth putting in poo poo vid gens
Anonymous No.107058691 [Report]
Anonymous No.107058699 [Report] >>107058711
>>107058665
That's fucking horrible. Text isn't even perspective correct, really looks like a terrible Photoshop job, he's also holding the gigantic marker like a retard. I wish they focused more on plausible scenes rather than a paragraph of written text. What I want is to write a simple prompt and have the model generate a complete image with the accuracy and care of a human where the light switches and doorways are in the right place.
Anonymous No.107058700 [Report]
>>107058546
Yep, haven't noticed any patterns and it seems the audio streams are not entirely synced.
Anonymous No.107058708 [Report]
I feel bad for the spergs from /sdg/ seething in here day in and day out
Anonymous No.107058711 [Report]
>>107058699
No, this is wonderful, because it highlights how lazy the model devs are. A guy writing stuff on a board has to be the easiest thing that ClosedAI has ever had to train. Why use synthetic data for that? Literally, just never leave their office and keep writing on the board. Have another model possibly swap out boards and subjects and that's about it, get to training.
Anonymous No.107058724 [Report]
>>107058665
that's a weird ass looking pencil
Anonymous No.107058727 [Report]
>>107058551
Tried ChronoEdit, it's absolute trash. Maybe it has some random prompts where it's OK but it's worse than both Kontext and QIE, failing basic prompts and edits
Anonymous No.107058735 [Report]
>>107058665
>the skin is so smooth because of the synthetic data
>the face is so chinky because of the huge amount of chink data they trained it
>the text is paint.exe tier
why are the chinks so sovless? is OpenAI the only sovl company in the world or something?
Anonymous No.107058742 [Report] >>107058908
>>107058488
can you tell me which one is the "ran" image? i'm not that deep into schizo lore
Anonymous No.107058746 [Report]
Anonymous No.107058748 [Report] >>107058825
>>107058551
nothingburger
Anonymous No.107058755 [Report]
lotta niggas talkin like theyve trained anything more than loras
Anonymous No.107058768 [Report]
Anonymous No.107058776 [Report] >>107058791
Thinking of training a good lora on all the dall-e 3 hot 1girls i've collected since the start to make more of them in that style, thinking of trying it on Chroma HD first, then maybe Qwen Image with ostris/ai-toolkit, any tips?
Anonymous No.107058778 [Report]
What is the coomer's choice of image models?
Anonymous No.107058786 [Report] >>107058790 >>107058815
Give me ONE good reason synthetic data training is a bad thing
Anonymous No.107058790 [Report] >>107058807
>>107058786
Because it's currently not as realistic as real world data, making the model worse, turbonewnigger
Anonymous No.107058791 [Report]
>>107058776
>Dall-e 3 style
Just tint your images yellow
Anonymous No.107058807 [Report] >>107058980
>>107058790
this is factually wrong btw.
Anonymous No.107058815 [Report] >>107058856 >>107058873
>>107058786
>why going for images that have 70% accuracy (synthetic shit) is worse than real images (100% accuracy)
are you retarded or something?
https://www.nature.com/articles/s41586-024-07566-y
Anonymous No.107058825 [Report]
>>107058748
me on the right
Anonymous No.107058853 [Report] >>107058868
Finally got Wan2.2 working, me so happy
Anonymous No.107058856 [Report] >>107062145
>>107058815
It turns dogs into chocolate chip cookies?
Anonymous No.107058864 [Report] >>107058867
>drops the best image model to date
>leaves
Anonymous No.107058867 [Report] >>107058901
>>107058864
>drops the best image model to date
didn't know StabilityAI made Flux dev
Anonymous No.107058868 [Report] >>107058923
>>107058853
Anonymous No.107058873 [Report] >>107058889 >>107058901 >>107058907 >>107058924
>>107058815
This honestly doesn't feel very scientific and it actually just boils down to "repetition artifacts may appear in outputs". But anyone that has trained a LoRA with a 10 image dataset could tell you that. None of this is conclusive and in fact their experiments are bare bones at best, but anything to get a headline right?
Anonymous No.107058889 [Report] >>107058900
>>107058873
The scientific literature is always years behind both corporate science as well as indie programmers
Anonymous No.107058900 [Report]
>>107058889
Scientific literature like anything else is full of grifters and nothing gets a headline like a contrarian with a punchy headline and a doomsday prophecy.
Anonymous No.107058901 [Report] >>107058927 >>107059145 >>107059334 >>107059726
>>107058867
>>107058873
https://scitechdaily.com/could-ai-eat-itself-to-death-synthetic-data-could-lead-to-model-collapse/
basically synthetic data is a poison and if you put too much of it the artifacts can be seen, and the model gets more and more biased, it's basically the inbreeding process for models
Anonymous No.107058906 [Report]
My model can have a little synthetic data as a treat
Anonymous No.107058907 [Report] >>107058920
>>107058873
yeah, let's believe that random anon instead of one of the most prestigious science journals in the world (Nature), that'll work!
Anonymous No.107058908 [Report]
>>107058742
Pay debo no mind he's been obsessed for years. He's just a bitter disabled goblin that can't get any traction in his thread. He's mostly upset because of the rentry that he spent all of his energy to not have in the OP only for anons to move to the thread with it as OP because everyone got sick of him.
Anonymous No.107058920 [Report] >>107058928 >>107059014
>>107058907
Yeah, totally believe that paper whose cited replication code is literally nothing and contains nothing you can use to replicate their findings. They don't even include their dataset.
Anonymous No.107058923 [Report]
>>107058868
>maid removes her uniform from the top down
cheeky little wench
Anonymous No.107058924 [Report] >>107058937
>>107058873
training on synthetic data is lazy, and synthetic images will never be as accurate as real images so you're always losing more by going that route, only retards shill synthetic data training
Anonymous No.107058927 [Report] >>107058938 >>107059835
>>107058901
This is no different than using a dataset with blurry images or low quality jpegs.
Anonymous No.107058928 [Report] >>107058942 >>107058944
>>107058920
>and contains nothing you can use to replicate their findings.
the last year of slopped models hasn't taught you anything about synthetic data? my god... this place is surrounded by literal retards
Anonymous No.107058937 [Report] >>107058945
>>107058924
Training on synthetic data blindly is the same as using any data blindly. Have you seen the gems in "real" data? Have you seen LAION?
Anonymous No.107058938 [Report] >>107059835
>>107058927
yeah, don't do that either, only train your model with high quality data, which is real high res images
Anonymous No.107058942 [Report]
>>107058928
>this place is surrounded by literal retards
I agree, all the other diffusion threads are horrible.
Anonymous No.107058944 [Report]
>>107058928
Yeah you know for a fact it's synthetic data and not lazy ass researchers with a retarded post-training aesthetics pass? I've seen what you people vote high as "quality".
Anonymous No.107058945 [Report] >>107058960 >>107058967
>>107058937
>Training on synthetic data blindly
there's no such thing as blindly, even if you go for """"high quality"""" synthetic data, those pictures will never be 100% accurate compared to real data, basically you're wasting your time and you should always go for the data that's 100% accurate, don't know how this basic concept is so hard to understand for some but here we are I guess?
Anonymous No.107058960 [Report] >>107058969 >>107058975 >>107058977
>>107058945
What is "accurate" data?
Anonymous No.107058967 [Report] >>107058982
>>107058945
There are synthetic images that you cannot tell are AI and have no discernable errors. Anyways not having this argument again, you can bitch all you want about how the cooks don't cook the tendies how you like but you're never putting on an apron yourself.
Anonymous No.107058969 [Report]
>>107058960
verified no photoshop, no phone camera filters, exif backed up by 3 witnesses under oath
Anonymous No.107058975 [Report] >>107058986
>>107058960
real data? the thing the model is supposed to replicate? if you tell the model "this is what you should learn from", it should be the real thing, not the 80% accurate thing
Anonymous No.107058977 [Report]
>>107058960
accurate captioning >>>>> debate around synthetic or true or whatever
Anonymous No.107058980 [Report] >>107058990
>>107058807
also, i am trans btw
scabPICKER No.107058981 [Report] >>107058990
never forget

https://vocaroo.com/1gZIMGIUeV5X
Anonymous No.107058982 [Report] >>107059028
>>107058967
>There are synthetic images that you cannot tell are AI and have no discernable errors.
there's always errors on an AI image, that's literally the definition, if there was 0 errors the loss curve would be at 0, are you retarded or something? it's not because you have shit eyes and can't see the errors that the model won't see them, goddam you're so fucking dumb
Anonymous No.107058986 [Report] >>107058995
>>107058975
What makes one jpg "real" and another fake?
scabPICKER No.107058990 [Report]
>>107058980
well, this is a great song for you
>>107058981
Anonymous No.107058995 [Report] >>107059072
>>107058986
what do you mean? just train your model with pre 2022 images and you'll only get real pictures made by real cameras and humans
Anonymous No.107059014 [Report] >>107059017
>>107058920
I'm not going to believe the paper because it's not even written on paper to fucking begin with.
Anonymous No.107059017 [Report]
>>107059014
hold up, he has a point though!
Anonymous No.107059028 [Report] >>107059051 >>107059066 >>107059072 >>107059078 >>107059154 >>107059207 >>107059223 >>107059265 >>107059329
>>107058982
What are you talking about? When someone says synthetic data is bad, they're often referring to the objective errors within an AI generated image. You know, humans with 6 fingers, wrong shadows, stairs to nowhere, etc. So tell me, is this not red?
Anonymous No.107059045 [Report] >>107059089 >>107060131
>fugtrup, digital media, blender \(medium\), 3d, render,
and then all the left over traditional media tags
Anonymous No.107059051 [Report] >>107059081 >>107059093
>>107059028
>wrong shadows
that's the fucking problem, why do you believe the slopped models we have have smooth bright plastic skin? because the model has biases and makes images that are smooth bright plastic, and if you inbreed this shit by training on synthetic data, you amplify that bias and you end up with completely slopped images, the only thing I've learned in this conversation is that you had 1 year worth of slopped models and the only conclusion you got was "let's continue that path", you are insane
Anonymous No.107059066 [Report]
>>107059028
>So tell me, is this not red?
It's Deep Carmine Pink
Anonymous No.107059072 [Report] >>107059093
>>107058995
Maybe what you're saying makes sense for photography, but not the graphical arts. There is no such thing as a perfectly "accurate" piece of graphical art.
>>107059028
No, the problem with synthetic data are subtle noise patterns and other AI-specific artifacts that become multiplied when retrained on to excess. But that being said, AI-generated images that can be filtered don't have that problem.
Anonymous No.107059078 [Report] >>107059096
>>107059028
the day you'll learn that going for a 100% accurate image (real data) is always better than """"high quality"""" synthetic data (let's be nice and say it's 80% accurate) will be the day you'll get a third neuron in that tiny brain
Anonymous No.107059081 [Report] >>107059093
>>107059051
Can you at least pretend to not strawman my argument. Let's say you only trained on synthetic images that match your delicate sensibilities, will the model magically make plastic skin?
Anonymous No.107059089 [Report] >>107060131
>>107059045
without the left over trad tags
Anonymous No.107059093 [Report] >>107059114 >>107059135
>>107059081
>will the model magically make plastic skin?
>>107059051
>the model has biases and makes images that are smooth bright plastic, and if you inbreed this shit by training on synthetic data, you amplify that bias and you end up with completely slopped images
>>107059072
>the problem with synthetic data are subtle noise patterns and other AI-specific artifacts that become multiplied when retrained on to excess.
it got explained to you twice, if you can't understand that concept there's nothing I can do for you, your IQ is too low to understand I'm afraid
Anonymous No.107059096 [Report] >>107059113
>>107059078
There is no such thing as a 100% accurate image that you can guarantee will give you good training outcomes, your premise is fundamentally flawed. 100% real images also can have destructive patterns within them.
Anonymous No.107059103 [Report] >>107059136
why not 5090?
Anonymous No.107059113 [Report] >>107059123
>>107059096
>There is no such thing as a 100% accurate image
a real image is 100% accurate, it's literally the definition, it's 100% real, that's the ideal we want to achieve, we want the model to replicate a real photograph, I think you're lost in the process, do you even know what our goals are, or something?
Anonymous No.107059114 [Report] >>107059124
>>107059093
Okay so it's impossible for you to not strawman my argument because you don't understand your own argument. You know, just because an adult said something you like, you, as a child sitting at the kids' table, can choose not to chime in.
Anonymous No.107059123 [Report] >>107059137
>>107059113
See the problem you're a fucking retard and don't understand what I'm saying. I, for example, could take 1,000,000 pictures with a 2004 Olympus Digital Camera, they would all be 100% real, can you foresee a potential problem with this dataset?
Anonymous No.107059124 [Report]
>>107059114
>it's a strawman because I said so, and I won't explain why because I don't know how to argue and I don't know that I have to prove claims
Concession Accepted, no more (You) for you!
Anonymous No.107059135 [Report]
>>107059093
>it got explained to you twice, if you can't understand that concept there's nothing I can do for you, your IQ is too low to understand I'm afraid
You repeated a mantra twice without being able to materially engage what it means for something to be real and accurate.
Anonymous No.107059136 [Report]
>>107059103
poverty tier card
Anonymous No.107059137 [Report] >>107059146
>>107059123
I never said you have to go for only one type of real image, of course it has to be diverse, don't talk on my behalf, I said you have to train on ONLY real photos, nothing more, nothing less, the rest was invented in your retarded mind
Anonymous No.107059145 [Report] >>107060605
>>107058901
When it comes to logic, there is use for synthetic data. Predictive text transformers only train in one direction, so if you train it to 'think':
"Josie is with Carl"
It doesn't automatically infer the inflected, but correct, sentence:
"Carl is with Josie"
This necessitates synthetic data to teach the transformer that Carl is with Josie whenever Josie is with Carl.
For image gen, yeah, I've yet to hear a reason why you'd want synthetic data in your training.
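The reversal trick above can be sketched in a few lines (a hypothetical augmentation helper for illustration, not from any actual training pipeline; it only handles the simple "X is with Y" pattern):

```python
import re

def reversal_augment(sentences):
    """Append the inflected 'Y is with X' for every 'X is with Y' sentence,
    so a one-directional predictive model sees both orderings."""
    out = []
    for s in sentences:
        out.append(s)
        m = re.fullmatch(r"(\w+) is with (\w+)", s)
        if m:  # only the simple pattern is handled in this sketch
            out.append(f"{m.group(2)} is with {m.group(1)}")
    return out
```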
Anonymous No.107059146 [Report] >>107059151
>>107059137
Oh so now you're moving the goal posts, it turns out 100% real images isn't actually what you care about.
Anonymous No.107059151 [Report] >>107059164
>>107059146
>it turns out 100% real images isn't actually what you care about.
it is, I only want 100% real images, and again, I never said it shouldn't be a diverse set of REAL images, you invented that, that's called a strawman, hope that helps
Anonymous No.107059154 [Report] >>107059175 >>107059359
>>107059028
kinda blue, not sure
Anonymous No.107059156 [Report]
>arguing with the disabled one trying to defend synthetic data
Do better
Please read the OP
Anonymous No.107059164 [Report] >>107059181
>>107059151
Except no, that's not what you want, you are adding conditionals. "I want a diverse set of real images that are high quality, lossless, [...]".

You just decided to die on a dumb hill because you can't be wrong and you understand you painted yourself into a corner. You already understand that a real dataset can have the same flaws as a synthetic dataset. So now it's time for you to be an adult and admit that not everything is solved with a hammer.
Anonymous No.107059175 [Report]
>>107059154
>kinda blue
it's white
Anonymous No.107059181 [Report] >>107059202 >>107059529
>>107059164
>Except no, that's not what you want
I just said, I want 100% of the dataset to be real images, that's all I said, now you're moving the goalpost by saying "but what kind of 100% real dataset?" which is not the point of the discussion, the discussion here was only "should you use synthetic data at all?" and my answer is "no", focus anon, you have to focus
Anonymous No.107059194 [Report] >>107059205
Anonymous No.107059202 [Report] >>107059208
>>107059181
Actually it is the point of the discussion because you've decided only synthetic images can ruin a model. So do you believe that it's possible to use any synthetic image even if it's a solid color where every pixel is that color? This is a synthetic image that was post processed, will it magically turn a model into mush if it's in a group of 10,000,000 other images (real)?
Anonymous No.107059205 [Report] >>107059213
>>107059194
This is what synthetic data retards want you to believe looks """good"""
Anonymous No.107059207 [Report]
>>107059028
Is this bait? that red has a substantial amount of noise
Anonymous No.107059208 [Report] >>107059219
>>107059202
>you've decided only synthetic images can ruin a model.
I never said ONLY synthetic data can ruin a model, for the third time you're talking on my behalf, I don't see the point of discussing with you if every of your post is just strawmaning my words
Anonymous No.107059213 [Report]
>>107059205
>strawman challenge: impossible
No one said you should use shitty SDXL images just like no one said you should use 30% quality jpgs from 1998 upscaled to 1024px.
Anonymous No.107059219 [Report]
>>107059208
Given you refuse to address what I said I'll take your answer as "No, I don't think that image would ruin a model but it's the principle of the matter, data researchers should spend time how I like and I won't be contributing to a real dataset".
Anonymous No.107059223 [Report] >>107059232
>>107059028
I just ran a python script to see if the image was completely uniform and I got this
>Uniform color percentage: 1.87%
lmao
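For what it's worth, a uniformity check along those lines takes a couple of numpy calls (a minimal sketch; the anon's actual script is unknown):

```python
import numpy as np

def uniform_color_percentage(pixels):
    """Percentage of pixels that exactly match the single most common color.
    `pixels` is an (H, W, 3) uint8 array."""
    flat = pixels.reshape(-1, pixels.shape[-1])
    # count occurrences of each distinct (r, g, b) row
    _, counts = np.unique(flat, axis=0, return_counts=True)
    return 100.0 * counts.max() / flat.shape[0]
```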
Anonymous No.107059227 [Report] >>107059234
Watching localpajeets sabotage their models by training on seedream outputs, meanwhile seedream remains number 1. Local is a joke
Anonymous No.107059232 [Report] >>107059240
>>107059223
I'm concerned anon given that's clearly a gradient, did you really write a script for that? Get your eyes checked because it's obviously not a uniform red color. Also wait until you see what the search results for "red" are for images. 100% real is an illusion.
Anonymous No.107059234 [Report] >>107059273
>>107059227
>Local is a joke
as long as those niggers aren't willing to make a serious dataset without synthetic shit, we'll never go to the next level, this is grim...
Anonymous No.107059240 [Report] >>107059248
>>107059232
>did you really write a script for that?
asking this in the year of our lord 2025 is crazy, obviously I used chatgpt retard
Anonymous No.107059242 [Report]
*yawn*
Anonymous No.107059248 [Report] >>107059265
>>107059240
>Get your eyes checked because it's obviously not a uniform red color.
So you asked ChatGPT to write a script to check the color uniformity of a red gradient?
Anonymous No.107059249 [Report]
Anonymous No.107059252 [Report]
Anonymous No.107059265 [Report] >>107059283
>>107059248
>a red gradient?
>>107059028
>So tell me, is this not red?
oh now you're telling me it's a gradient, sure. it's obviously a gradient, my eyes and your eyes can totally see that
Anonymous No.107059273 [Report] >>107059284
>>107059234
I mean this seriously: what's your proposal for hand captioning a 25+ million image dataset with the generous assumption that your human captioners are themselves 100% accurate and can write consistent and useful captions?
Anonymous No.107059283 [Report]
>>107059265
Well given the Google search results for "red" includes gradients...
Anonymous No.107059284 [Report] >>107059288
>>107059273
OpenAI did it with Sora 2, that's the path to success, I never said it was easy, but if there's one easy thing to do, is to train your model only with images pre-2022, that way you're sure you'll never get synthetic shit to poison your model
Anonymous No.107059288 [Report] >>107059292
>>107059284
Yeah, you're going to bet Sora 2 didn't use synthetic captions? Going to bet they didn't use Whisper to write the captions for voices? You people are hilarious.
Anonymous No.107059292 [Report] >>107059295
>>107059288
I really believe they don't do that, they literally hire thousands of african slaves to do manual filtering and captioning, they have the money anon
https://time.com/6247678/openai-chatgpt-kenya-workers/
Anonymous No.107059295 [Report] >>107059296
>>107059292
Oh yeah those Kenya workers, I bet they write stellar captions!
Anonymous No.107059296 [Report] >>107059307 >>107059313
>>107059295
well, they do, Sora 2 is a great model yes or no?
Anonymous No.107059306 [Report]
Anonymous No.107059307 [Report]
>>107059296
ask GPT to explain syllogism
Anonymous No.107059313 [Report] >>107059323
>>107059296
Okay you're just trolling, you got me anon.
Anonymous No.107059323 [Report]
>>107059313
Concession Accepted.
Anonymous No.107059326 [Report] >>107059335 >>107059441
Anyone upgraded from 4090 to 5090? how much faster is 5090 gen compared to 4090? 20% faster?
Anonymous No.107059329 [Report]
>>107059028
>Top 10 colors in this image:
1. RGB(244, 26, 36) - 1.87%
2. RGB(236, 18, 28) - 1.73%
3. RGB(238, 20, 30) - 1.71%
4. RGB(234, 16, 26) - 1.64%
5. RGB(246, 28, 38) - 1.63%
6. RGB(240, 22, 32) - 1.61%
7. RGB(242, 24, 34) - 1.54%
8. RGB(216, 0, 10) - 1.51%
9. RGB(232, 14, 24) - 1.43%
10. RGB(220, 0, 12) - 1.29%
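A histogram like the one quoted can be reproduced with `collections.Counter` (a sketch operating on a raw pixel list; image decoding is left out):

```python
from collections import Counter

def top_colors(pixels, n=10):
    """Return the n most common (r, g, b) tuples with their percentage share.
    `pixels` is a flat list of (r, g, b) tuples."""
    counts = Counter(pixels)
    total = len(pixels)
    return [(rgb, 100.0 * c / total) for rgb, c in counts.most_common(n)]
```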
Anonymous No.107059334 [Report] >>107059342 >>107059343 >>107059349 >>107059380
The issue isn't that it's a gradient, the issue is that it has noise.
That noise is the thing the AI picks up on in training and magnifies to produce the patterns on faces seen here >>107058901
A color picker with a tolerance of 0 can show it, but you can just see it with your eyes if your display is calibrated right and you're not color blind.
Try scrolling up and down with the picture enlarged, that can make it more visible too. I took the color out of it which might help if you're colorblind
Anonymous No.107059335 [Report] >>107059524
>>107059326
It's 30% faster and you're buying it for the VRAM.
Anonymous No.107059340 [Report]
Anonymous No.107059342 [Report] >>107059387
>>107059334
wait until you find out about gradients saved as jpgs found 10,000 times in LAION
Anonymous No.107059343 [Report]
>>107059334
>picrel
>Top colors in image:
1. RGB(146, 146, 146) - 11.12%
2. RGB(138, 138, 138) - 7.61%
3. RGB(134, 134, 134) - 7.50%
4. RGB(136, 136, 136) - 7.47%
5. RGB(130, 130, 130) - 7.07%
6. RGB(132, 132, 132) - 7.04%
7. RGB(128, 128, 128) - 6.60%
8. RGB(140, 140, 140) - 6.59%
9. RGB(144, 144, 144) - 6.29%
10. RGB(142, 142, 142) - 6.11%
Anonymous No.107059349 [Report] >>107059387 >>107059394
>>107059334
>1761877383368598.jpg
imagine training your model with jpg images lool
Anonymous No.107059359 [Report]
>>107059154
That image is rubbish. The gif is much better.
Anonymous No.107059380 [Report] >>107059386
>>107059334
>The issue isn't that it's a gradient, the issue is that it has noise.
>That noise is the thing the AI picks up on in training and magnifies to produce the patterns on faces seen here
this, it magnifies the noise and it magnifies the biases, inbred synthetic training is nasty, and I believe that's the difference between API and localkeks, API trains on real data, then API shits synthetic images, and localkeks eat that shit to train their models
Anonymous No.107059381 [Report]
Watching localpajeets sabotage their models by training on seedream outputs, meanwhile seedream remains number 1. Local is a joke
Anonymous No.107059386 [Report] >>107059407
>>107059380
wait until you find out what happens to jpgs when you save them
Anonymous No.107059387 [Report] >>107059396
>>107059342
>>107059349
jfc it doesn't make a difference, but here. I also changed the hue to different colors to maybe find one you're all not blind to.
If you're still denying it after this then I'm certain this is just ragebait because there's no way you can be blind enough to not see this noise but somehow also read this text.
Anonymous No.107059394 [Report] >>107059405
>>107059349
you can't possibly believe they don't. It's the standard format for photographers.
Anonymous No.107059396 [Report]
>>107059387
fucking retard talking about noise with a compressed fucking jpg
Anonymous No.107059405 [Report] >>107059413
>>107059394
imagine having a discussion about noise and then defending jpgs when talking about the data purity of a model
wait until you find out about jpgs that are resaved as pngs
Anonymous No.107059407 [Report] >>107059416
>>107059386
I know, compression after compression is not a good thing, whether it's jpg compression or synthetic compression
https://www.youtube.com/watch?v=nqy_hYDI0As
Anonymous No.107059413 [Report] >>107059422
>>107059405
>wait until you find out about jpgs that are resaved as pngs
that can be detected by a script, that's the beautiful process of filtering your data, and yes, you have to be careful of that, as a data scientist, that's the worst part of my job, filtering, but it's also the most important one
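One common heuristic such a script could use is 8x8 blockiness: JPEG's block DCT leaves stronger discontinuities at block boundaries than inside blocks, and those survive a lossless re-save as PNG. A sketch of that idea (an assumed approach, not anyone's actual filter):

```python
import numpy as np

def blockiness(gray, block=8):
    """Ratio of mean column-to-column difference at 8x8 block boundaries
    to the mean difference elsewhere; ~1 for clean images, much larger for
    block-compressed ones resaved losslessly. `gray` is a 2D array."""
    g = gray.astype(np.float64)
    d = np.abs(np.diff(g, axis=1))        # horizontal neighbor differences
    on_boundary = np.zeros(d.shape[1], dtype=bool)
    on_boundary[block - 1::block] = True  # columns crossing a block edge
    boundary = d[:, on_boundary].mean()
    interior = d[:, ~on_boundary].mean()
    return boundary / (interior + 1e-9)
```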
Anonymous No.107059416 [Report] >>107059421
>>107059407
Do you believe just 1 AI image in 1,000,000 images will ruin a model?
Anonymous No.107059421 [Report] >>107059432
>>107059416
no I don't believe that, like every poison, it's the amount that counts, if it's like less than 0.5% I'd say it's fine
Anonymous No.107059422 [Report] >>107059434
>>107059413
And you probably do a shitty job given the state of every modern generative model since you probably throw out good images arbitrarily lmao, it's why Flux looks like shit.
Anonymous No.107059432 [Report] >>107059440
>>107059421
So then would you say that if you were strategic with your synthetic images and made them count by creating things that would otherwise be impossible or impractical to find a "real" image of, they could be useful in a generalization objective?
Anonymous No.107059434 [Report]
>>107059422
I'm not working on diffusion models, what I do is much more modest, and you are right, I wouldn't even take the role of someone working on diffusion models, unless you're working at OpenAI all they'll be asking is to use synthetic shit, and I'm against that
Anonymous No.107059440 [Report] >>107059449 >>107059469
>>107059432
>impossible
not impossible if you only gather images created before the AI era, so I'd say only train your model with images uploaded on the internet before 2022
Anonymous No.107059441 [Report]
>>107059326
50% but only at higher resolutions or with bigger models.
https://chimolog-co.translate.goog/bto-gpu-stable-diffusion-specs/?_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=bg&_x_tr_pto=wapp#16002151024SDXL_10
Anonymous No.107059449 [Report] >>107059456 >>107059475
>>107059440
Yeah we wouldn't want Megabonk or Frankenstein 2025 images in any new AI models, smart!
Anonymous No.107059456 [Report] >>107059459
>>107059449
you're implying we actually have IP shit on our local models, flux and qwen image only know miku lol
Anonymous No.107059459 [Report] >>107059468
>>107059456
Only because you refuse to actually do any handcaptioning.
Anonymous No.107059468 [Report] >>107059472
>>107059459
at the end of the day, you'll never get frankenstein 2025 if you prompt it, so...
Anonymous No.107059469 [Report] >>107059475
>>107059440
You can also collect data yourself.
For realism models all you need is a small army of people with cameras and you could have a million pictures in a day.
Anonymous No.107059471 [Report] >>107059502
Reminder that messiblitzballfag was right about flux. Local models refuse to add copyrighted content, which is why they're so slopped
Anonymous No.107059472 [Report] >>107059485
>>107059468
Only because you refuse to write "In this screencap from the made for Netflix movie Frankenstein directed by Guillermo del Toro"
Anonymous No.107059475 [Report] >>107059489 >>107059549 >>107059637
>>107059449
let's pretend we made the perfect model in 2025, then what? 10 years later it will be obsolete because it won't have the IP between 2026 and 2035, it's an endless cycle
>>107059469
>You can also collect data yourself.
Qwen Image was trained on billions of images, even if you counted up to 1 billion your lifetime wouldn't be enough
Anonymous No.107059485 [Report]
>>107059472
>Frankenstein
>Guillermo del Toro
the model doesn't know that, it only knows miku
Anonymous No.107059489 [Report] >>107059508 >>107059538 >>107059547
>>107059475
So your solution is to have no IP after 2022 because AI images exist. You know, it's actually relatively easy to stay on top of last year's memes assuming you already had a perfect 2025 model. You're really only talking about 100,000 images per year for new stuff people care about.
Anonymous No.107059502 [Report]
>>107059471
>Local models refuse to add copyrighted content
Of course they add them.
Anonymous No.107059508 [Report]
>>107059489
Which is about 275 images per day. I doubt you have 275 images per day of things people would want in an image model related to current events and pop culture.
Anonymous No.107059524 [Report] >>107059533
>>107059335
Does moar VRAM = better quality slop or just faster?
Anonymous No.107059529 [Report]
>>107059181
For fucks sake stop being baited by BIM reversed
Anonymous No.107059533 [Report]
>>107059524
More frames for video models or extremely high resolutions for an image model. Then for LoRA training you really kind of need 32 GB for Qwen Image and Wan.
Anonymous No.107059538 [Report]
>>107059489
>You know, it's actually relatively easy to stay on top of last year's memes assuming you already had a perfect 2025 model. You're really only talking about 100,000 images per year for new stuff people care about.
that doesn't sound bad desu, it's like getting a new DLC each year, Qwen Image 2k26 baby!
Anonymous No.107059547 [Report] >>107059563 >>107059574
>>107059489
it's hard to "update" a model, the more you finetune it, the more it loses other concepts, that's catastrophic forgetting and it's already happening on Qwen Image Edit 2509 (the styles got worse compared to before)
Anonymous No.107059549 [Report]
>>107059475
I mean as a solution for new data.
You can take your pre-2022 data as assumed legit, and add onto the data set every year with inhouse collected photos.
Anonymous No.107059563 [Report] >>107059570
>>107059547
That's not what I even remotely suggested.
Anonymous No.107059570 [Report] >>107059577
>>107059563
that's what you even remotely suggested debo
Anonymous No.107059574 [Report] >>107059591 >>107059598
>>107059547
That's pretty bleak if true for the AI businesses, because it means they're gonna have to keep investing startup levels of expenditure on an ongoing basis to keep their models up to date, so the dream of making a good model and sitting on it for profit would be dead.
Anonymous No.107059577 [Report] >>107059588
>>107059570
I didn't say a single word about finetuning, it's something you hallucinated, I only talked about adding to an existing dataset.
Anonymous No.107059588 [Report] >>107059596
>>107059577
>I only talked about adding to an existing dataset
are you retarded or something? you pretrained a model with dataset A in 2025, then in 2026 you have to add the new memes, so you'll have to finetune with a dataset B
Anonymous No.107059591 [Report]
>>107059574
You can literally train SDXL from scratch for less than $10k in compute. Also that's not even talking about other smarter routes you can take like half-baking a model and using that as your base to start because a large portion of training is training your model to understand edges, colors, dimensionality, etc.
Anonymous No.107059596 [Report] >>107059602
>>107059588
Again, you are hallucinating what I said. If you have a model you trained in 2025 with dataset A. And you have dataset B from 2026, you add B to A and then train a new model. I never once said to finetune, you just assumed that because you're a disingenuous freak.
Anonymous No.107059598 [Report]
>>107059574
it also means you need a giant model to remember all the concepts/memes of humanity, and those concepts are only getting bigger and bigger with time
Anonymous No.107059602 [Report] >>107059626
>>107059596
>you add B to A and then train a new model.
oh so you want to pretrain the model from scratch every year, you know that a single pretrain costs millions right? and you know it's not spent with YOUR money right? what a loser you are
Anonymous No.107059604 [Report] >>107059614
>why would you paint a picture in 2025 if you're going to have to paint a new picture in 2026
Is the level of discussion we're having now.
Anonymous No.107059614 [Report] >>107059624
>>107059604
let's not pretend humanity will cease to exist in 5 years, your 2025 model will be completely obsolete in 2080, it's like asking you to make black and white mickey mouse memes today, no one does that, every generation enjoys the products of their own era
Anonymous No.107059620 [Report]
Why pretrain a new model when you can just finetune sd1.4?? With controlnet and inpainting you can do anything!
Anonymous No.107059624 [Report] >>107059640
>>107059614
>every generation enjoy the product of their own era
Thats not what the insufferable comments on 80s rock youtube videos always say.
Well the ones that aren't "This song reminds me of dead relative"
Anonymous No.107059626 [Report] >>107059637 >>107060041
>>107059602
It doesn't cost millions to train a model, that's what's so funny about this whole discussion. You can literally extrapolate compute from HDM or AMD's toy model. AMD's Nitro-E is 340m parameters, trained in 1.5 days on an 8-node cluster which would be $355 to rent. Go to 600m, $1200; go to 1.2B, $4800; go to 2.4B, $19200. And that's not even counting simply building a 5090 cluster, which is what HDM did, where your compute cost is flat after buying the hardware. Training an updated model every year is barely an inconvenience if you already have a pipeline.
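The doubling arithmetic here (cost roughly quadrupling each time parameters double, i.e. cost ~ params squared, anchored to the claimed $355 / 340M Nitro-E run) can be sketched as follows; all the anchor numbers are the post's claims, not verified figures:

```python
# Rough cost extrapolation from the post's baseline: AMD Nitro-E,
# 340M params, reportedly ~$355 of rented cluster time.
BASE_PARAMS = 340e6
BASE_COST_USD = 355.0

def est_cost(params, exponent=2.0):
    """Estimated rental cost, assuming cost scales as params ** exponent."""
    return BASE_COST_USD * (params / BASE_PARAMS) ** exponent

# Reproduce the post's ladder (values land near $1200 / $4800 / $19200).
for p in (600e6, 1.2e9, 2.4e9):
    print(f"{p / 1e9:.1f}B params -> ~${est_cost(p):,.0f}")
```

Whether the true exponent is closer to 1 or 2 depends on architecture and sequence length, which is why the post hedges between linear and quadratic.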
Anonymous No.107059636 [Report] >>107060017
which chroma version should I be using with a 3060 12gb?
Anonymous No.107059637 [Report] >>107059652
>>107059475
>let's pretend we made the perfect model
>>107059626
>340m parameters
the perfect model won't be a 340m model, ahah you're so funny anon
Anonymous No.107059640 [Report] >>107059643
>>107059624
Why do you ever read these dumb comments, just enjoy the music.
Anonymous No.107059643 [Report]
>>107059640
this
Anonymous No.107059652 [Report] >>107059655
>>107059637
What do you get out of this? Do you think the math is wrong? Or do you not want anyone to train a local model for you to use? If I didn't know any better I would assume you were here to discourage people making local models.
Anonymous No.107059654 [Report] >>107062710
>Believe or not but this lora was a failure concept,
oh I believe you

https://civitai.com/models/2088076/luisap-wan-22-shaking-concept
Anonymous No.107059655 [Report] >>107059659
>>107059652
first of all, do you seriously believe a 340m parameter model can remember all the concepts of humanity? are you joking or something?
Anonymous No.107059659 [Report] >>107059664 >>107059738 >>107059746 >>107060041
>>107059655
Go to 600m, $1200, go to 1.2B, $4800, go to 2.4B, $19,200.
Anonymous No.107059664 [Report] >>107059676
>>107059659
>2.4B, $19,200.
then why did it cost 150k for lodestone on chroma (8.9b)? and it was just a small finetune of 5 million images at low res 512x512
Anonymous No.107059676 [Report] >>107059687
>>107059664
I don't want to hurt your feelings but both Chroma and Pony are trained by retards. Anon, HDM and Nitro both exist, they both work. You do realize model architectures aren't voodoo right? You can literally just increase the hidden size and layers and the recipe still works and the difference is the increased linear or quadratic compute requirements.
Anonymous No.107059687 [Report]
>>107059676
>I don't want to hurt your feelings but both Chroma and Pony are trained by retards.
fair enough kek
Anonymous No.107059721 [Report]
>I'm literally shaking right now.
>Shivering spine
Anonymous No.107059726 [Report] >>107059734 >>107062728
>>107058901
I love how they implicitly say:
>you retards only generate AI white women and you don't care of the browns so the internet will be filled with one AI race and the future models will be more biased towards white people
based! keep doing that 1girl genners
Anonymous No.107059734 [Report]
>>107059726
>the tendency of users to favor data quality over diversity
>"quality"
>only crackers on t = 5
OY VEY!
Anonymous No.107059737 [Report] >>107059740
>I fucked up and pressed "Try new nodes format"
How do you revert back to old nodes "theme" ?
Anonymous No.107059738 [Report] >>107059745
>>107059659
lol good luck with those figures, not in the real world
Anonymous No.107059740 [Report] >>107059769
>>107059737
no refunds
Anonymous No.107059745 [Report] >>107059776
>>107059738
Thanks BFL employee, where's the video model?
Anonymous No.107059746 [Report]
>>107059659
it implies the first try was the good one and you never did any substantial tests before that, literally impossible
Anonymous No.107059769 [Report]
>>107059740
Nevermind, got it fixed.

Btw, what's the best Lora or settings to improve details & background?
I use Smooth Booster V4 but not sure if it's the best nowadays.
Anonymous No.107059776 [Report] >>107059794
>>107059745
ask illustrious how much their finetune cost, ask stability who made the base model
Anonymous No.107059794 [Report] >>107059890
>>107059776
dumb bait SD 1.5 doesn't cost $600,000 to train today, we know this because Pixart is better than SD 1.5 and was trained on dusty university GPUs
Anonymous No.107059835 [Report] >>107059848
>>107058927
>>107058938
I mean, isn't artwork also an inaccurate depiction of reality? So as long as the AI images used in training are high quality, it doesn't matter, does it? I guess if you're going for 100% realism and you accidentally get a bunch of realistic AI slop then it could be a problem, but not for anything else.
Anonymous No.107059839 [Report]
i haven't read anything here, lol. wake me up when new model videos are available
Anonymous No.107059848 [Report]
>>107059835
>I mean, isn't artwork also an inaccurate depiction of reality?
I'd say it's even worse for artwork to use synthetic data, you know why? artwork isn't supposed to follow some set of objective rules like photos do (the laws of physics and light), and yet synthetic data always has some patterns to it, something "objective", and you amplify that bias by training on AI artwork, that's why the drawings we got are so sovless and they all look alike, because the model learned some rules that shouldn't be there in the first place
Anonymous No.107059888 [Report]
if that's the next open source model we'll be getting, I have 0 hype, it looks like shit lool
https://xcancel.com/bdsqlsz/status/1984112604005249431#m
Anonymous No.107059890 [Report] >>107059899
>>107059794
>Pixart
oh yea, that turned out well, 30k (with edu at cost discounts for sure) well spent, that is why everyone is using it
Anonymous No.107059899 [Report]
>>107059890
kek
Anonymous No.107059956 [Report] >>107061869
Are there any good webui forks for comfyui that resemble automatic1111's webui? I remember I once installed a comfyui webui that was like that.
Anonymous No.107060008 [Report]
Anonymous No.107060017 [Report] >>107060025
>>107059636
>which chroma version should I be using with a 3060 12gb?

Either HD Flash for speed and improved quality or HD SDNQ for speed. Flash model may perform slightly worse at prompt following but the quality of images is superior.

https://huggingface.co/silveroxides/Chroma1-Flash-GGUF/tree/main https://huggingface.co/Disty0/Chroma1-HD-SDNQ-uint4-svd-r32
Anonymous No.107060025 [Report] >>107060068
>>107060017
the mix posted https://www.reddit.com/r/StableDiffusion/comments/1ogx7j4/chroma_radiance_mid_training_but_the_most/
was the best imo

https://github.com/silveroxides/ComfyUI_Hybrid-Scaled_fp8-Loader

https://huggingface.co/silveroxides/Chroma-Misc-Models/blob/main/Chroma1-HD-flash-heun/Chroma1-HD-flash-heun-fp8_scaled_original_hybrid_large_rev2.safetensors

Loras:
https://huggingface.co/silveroxides/Chroma-LoRAs/tree/main
Anonymous No.107060041 [Report]
>>107059626
>>107059659
>You need $20k to train a model that surpasses SDXL

If it's so easy why don't we see more of it? Why did Pixart Sigma stop at that and then flop? Turns out you guys are talking out of your ass.
Anonymous No.107060068 [Report]
>>107060025
Mix with what? Is that HD Flash mixed with Radiance? And do you still prompt it with default HD Flash settings?
Anonymous No.107060071 [Report] >>107060112
chiiiiiiiiii~
Anonymous No.107060112 [Report] >>107060145
>>107060071
Weird snowing-like artifacts. I'm guessing the wan2.2 lightx2v version.
Anonymous No.107060131 [Report] >>107060141 >>107060434
>>107059045
>>107059089
What?
Anonymous No.107060141 [Report]
>>107060131
Ignore him he's just avatarposting
Anonymous No.107060145 [Report] >>107060202
>>107060112
Yep, what's the current meta?
Anonymous No.107060202 [Report] >>107060214 >>107060216
>>107060145

The old wan2.1 lightx2v LoRA still works as well if not better. Wan2.2 lightx2v keeps producing weird background particles and other problems.
Anonymous No.107060214 [Report]
>>107060202
I tried 2.1 a bunch a couple days ago and couldn't get it to work worth a damn other than free seizures
Anonymous No.107060216 [Report] >>107060227 >>107060245 >>107060301 >>107060553 >>107061318 >>107061885 >>107061968
>>107060202
use newest high lora, 2.1 has shit prompt following https://civitai.com/models/1585622
Anonymous No.107060227 [Report] >>107060240
>>107060216
he basically extracted the lora out of this right?
https://huggingface.co/lightx2v/Wan2.2-Distill-Models/tree/main
Anonymous No.107060230 [Report]
Sometimes I feel I don't deserve to live in a timeline this good
Anonymous No.107060240 [Report]
>>107060227
the latest one yes, it was just a new high model, it performs the best so far, they are constantly refining it
Anonymous No.107060245 [Report] >>107060254 >>107060258
>>107060216
Okay, I'll bite. You assholes never post proofs. May thousands of your gens become cursed and always come out shit if you are wasting my time.
Anonymous No.107060254 [Report]
>>107060245
you wont like what im into anyways
Anonymous No.107060258 [Report]
>>107060245
please stop making vertical comparisons it's not convenient to look at at all, put them on the same horizontality
Anonymous No.107060301 [Report] >>107060332
>>107060216
>https://civitai.com/models/1585622
this is pretty good, I wish they could completely remove the slo mo effect though
Anonymous No.107060332 [Report] >>107060407
>>107060301
increase the weight of it on high, try 1.1 or 1.2, too high and it will go too fast
Anonymous No.107060368 [Report]
Anonymous No.107060407 [Report] >>107060515 >>107061897
>>107060332
>1.2
yeah that seems to be a good spot
Anonymous No.107060434 [Report]
>>107060131
which part was confusing
Anonymous No.107060450 [Report]
Anonymous No.107060515 [Report] >>107060532
>>107060407
to fix the blurriness turn low up a bit as well, and maybe use 3 + 3 steps
Anonymous No.107060522 [Report]
Anonymous No.107060526 [Report] >>107060532 >>107061908
How do I dictate a perspective move in Wan? Specifically I want the perspective to rotate above Kurosawa, as if I am standing above her.
Anonymous No.107060529 [Report]
Anonymous No.107060532 [Report] >>107060543
>>107060526
SANTA ANTAGI
>>107060515
what low lora are you using?
Anonymous No.107060543 [Report]
>>107060532
latest 2.2
Anonymous No.107060553 [Report] >>107060562 >>107060566 >>107060571 >>107060572 >>107060591 >>107060600
>>107060216
Anonymous No.107060560 [Report]
Fun fact: Flux was trained using synthetic data. Chroma is a small scale tune compared to the base model itself that uncucks it. Same with Krea. Qwen, the same thing happened. Qwen LoRAs give you much better results. So it's a fact that it's possible to train or align your model on synthetic data, then finetune the slop out of it. It's just that, for some reason, these model devs are choosing not to do that. They'd rather give us slop, over a properly trained base model.
Anonymous No.107060562 [Report]
>>107060553
the one on the right is better
Anonymous No.107060566 [Report]
>>107060553
Mmm, look at that beautiful confetti, fireworks, amazing.
Anonymous No.107060571 [Report] >>107060581 >>107060591 >>107061925
>>107060553
Try
New HIGH:
https://huggingface.co/Kijai/WanVideo_comfy/blob/main/LoRAs/Wan22_Lightx2v/Wan_2_2_I2V_A14B_HIGH_lightx2v_MoE_distill_lora_rank_64_bf16.safetensors

Old LOW:
https://huggingface.co/Kijai/WanVideo_comfy/blob/main/LoRAs/Wan22-Lightning/old/Wan2.2-Lightning_I2V-A14B-4steps-lora_LOW_fp16.safetensors
Anonymous No.107060572 [Report]
>>107060553
you are using the ones on the right at too low values for starters, other than that ive seen it back and forth from seed to seed, old light loras are bad at 2d for instance
Anonymous No.107060573 [Report]
Anonymous No.107060581 [Report] >>107060591
>>107060571
that is not the newest high lora
Anonymous No.107060591 [Report]
>>107060553
>>107060571
Then also try those two with cfg 1, unipc
>>107060581
I copy pasted the message from when it was
Anonymous No.107060597 [Report]
Anonymous No.107060600 [Report] >>107060633
>>107060553
could you share your input image and prompt? I'd like to try this out as well.
Anonymous No.107060605 [Report]
>>107059145
But an LLM can't even really understand that
"Josie is with Carl" means "Carl is with Josie". I don't understand what synthetic data is even supposed to fix. Unless I'm misunderstanding something and the purpose is to auto-generate relationships that will then be reviewed by humans.
But is feeding an LLM A+B and then checking whether its inference is actually B+A really better than just having minimum wage workers do it?
Anonymous No.107060624 [Report]
Anonymous No.107060633 [Report] >>107060830
>>107060600
Wan_2_2_I2V_A14B_HIGH_lightx2v_4step_lora_v1030_rank_64_bf16 High 1.2
wan2.2_i2v_A14b_low_noise_lora_rank64_lightx2v_4step_1022 Low 1

https://litter.catbox.moe/rsnucm5j9utv8270.png

The camera moves alongside a moving subject to maintain framing.

A woman with an axe.

A stunning woman moving backwards fluidly with expressive, rhythmic motion, leaning and swaying with precision, arms flowing gracefully through the air, body in perfect control, each movement filled with emotion and energy, confident posture, captivating presence, hair moving with her motion,

She swing her axe while a first person view of another's person sword collide with her axe at extremely high speed with spark, then another person's armored hand goes into frame and punch her in the face with his fist and blood splatters from her nose, she gets knocked back, she fell over sideways onto the ground, blood on her nose, with purple bruise on her face she close her eyes and fall asleep.
Anonymous No.107060641 [Report] >>107061168 >>107061213
The most erotic thing a 1girl can do is look at her phone
Anonymous No.107060652 [Report]
Anonymous No.107060675 [Report] >>107060680 >>107060739
Why is it so rare for SAAS to let you control the 'temperature' of the output? I remember Kling had a compliance slider that you could set all the way down to get very real-looking videos, and that was so much better.
Anonymous No.107060680 [Report]
>>107060675
because the normies don't care about that shit, they just want to write the funni prompt, press play and call it a day
Anonymous No.107060739 [Report]
>>107060675
Lol I opened Kling just now and they actually removed that control. It used to be there, now it's gone. Bravo.
Anonymous No.107060830 [Report]
>>107060633
ok where's the next scene with the goblin loss screen???
Anonymous No.107060923 [Report] >>107061000
TOP KEK, trying light lora combos.

https://files.catbox.moe/7zpjb2.mp4 NSFW
Anonymous No.107061000 [Report] >>107061087
>>107060923
That animation is chaos.
Anonymous No.107061087 [Report]
>>107061000
They must have so much confetti and floaty things in their training data, it's so common in the light loras.
Not even above 1 strength, lol.
Anonymous No.107061168 [Report]
>>107060641
Anonymous No.107061213 [Report]
>>107060641
Anonymous No.107061235 [Report]
>gen video at the brink of OOMing
>system runs fine
>it clears the models and starts doing color matching
>system starts to lag

Makes sense.
Anonymous No.107061241 [Report] >>107061249 >>107061260 >>107061261 >>107061303 >>107061320 >>107061348 >>107061572
>think about upgrading to 64gb of ram a couple of days ago
>it's 277€
>check again today
WHAT THE FUCK
Anonymous No.107061249 [Report]
>>107061241
Yeah it's not coming down for a while too
Anonymous No.107061260 [Report]
>>107061241
192gb cost me 800usd. Guess i was lucky.
Anonymous No.107061261 [Report]
>>107061241
back then it were miners
now it's ai retards
Anonymous No.107061303 [Report] >>107061312 >>107061320 >>107061420
>>107061241
Is is true that you need 128GB to do Wan2.2 comfortably?
Anonymous No.107061312 [Report]
>>107061303
256
Anonymous No.107061318 [Report]
>>107060216
Damn, this new 1030 version seems to have removed the confetti for me completely.
Anonymous No.107061320 [Report]
>>107061241
why are you buying RGB shit?
> looks at his unused 2x32GB DDR5-6600

>>107061303
no. it's probably comfortable somewhere between 64 and 96GB
Anonymous No.107061348 [Report]
>>107061241
Just get DDR3
Anonymous No.107061420 [Report] >>107061429
>>107061303
64GB per GPU is comfy
Anonymous No.107061429 [Report] >>107061440 >>107061486
>>107061420
>per GPU
A-am I supposed to have more than one?
Anonymous No.107061440 [Report]
>>107061429
Holy poorfag
Anonymous No.107061486 [Report]
>>107061429
A dedicated ai gpu makes everything less annoying
Anonymous No.107061572 [Report] >>107061666
>>107061241
>same with coffee prices
this country sucks, riots when
Anonymous No.107061666 [Report]
>>107061572
> riots
are bad and illegal
but you can vote
Anonymous No.107061758 [Report]
Anonymous No.107061869 [Report]
>>107059956
there's swarmui. there's also this for comfy
https://github.com/chrisgoringe/cg-controller
Anonymous No.107061885 [Report]
>>107060216
>click
>pony porn
epic
Anonymous No.107061890 [Report]
the weights got released
https://huggingface.co/BAAI/Emu3.5-Image/tree/main
Anonymous No.107061897 [Report] >>107061913 >>107061928
>>107060407
you call that good retard?
Anonymous No.107061908 [Report]
>>107060526
https://civitai.com/models/1878750?modelVersionId=2126493
Anonymous No.107061911 [Report]
Can you train a character lora with different ages for a character, or will the lora just mix them up? If you want a specific background(s) for a character, is it enough to include it in the training set, or will images that don't show the character have a negative impact on a character lora?
Anonymous No.107061913 [Report]
>>107061897
>too retarded to understand there's a difference between a lora having a "good spot" and "having good results"
Anonymous No.107061925 [Report] >>107061928
>>107060571
ITS NOT NEW YOU FUCKIN RETARD
Anonymous No.107061928 [Report]
>>107061897
>>107061925
>debo having another meltie
Anonymous No.107061929 [Report]
Ran took everything from me
Anonymous No.107061948 [Report] >>107062127
>i am a nigbophile
Anonymous No.107061956 [Report] >>107061968 >>107062024
>4 steps, split at 2
>unipc
>Wan_2_2_I2V_A14B_HIGH_lightx2v_MoE_distill_lora_rank_64_bf16 at strength 2
>Wan2.2-Lightning_I2V-A14B-4steps-lora_LOW_fp16 at strength 1
am I missing something? Works best so far but still tends to cook the output.
Anonymous No.107061968 [Report] >>107061994
>>107061956
>strength 2
stay at strength 1, and are you using the latest one? >>107060216
Anonymous No.107061991 [Report]
Debo been trolling this general for years.
Anonymous No.107061994 [Report] >>107062013
>>107061968
I am not. Should I replace both low and high or just high?
Anonymous No.107062000 [Report]
why only i2v lightning loras?? where tf are the t2v loras?!
Anonymous No.107062008 [Report] >>107062038
Comfy should be dragged into the streets and shot
Anonymous No.107062013 [Report] >>107062172
>>107061994
just high
Anonymous No.107062024 [Report] >>107062172
>>107061956
2 strength is too high. you should do 6 steps because 4 is really not enough
Anonymous No.107062029 [Report] >>107062048 >>107062060 >>107062067
>>107058480 (OP)
I've been trying to install comfyui on my amd gpu pc these past couple of days and I won't lie I am struggling
Anonymous No.107062038 [Report] >>107062065
>>107062008
only if it's under commercial license
Anonymous No.107062048 [Report]
>>107062029
>amd
Anonymous No.107062060 [Report]
>>107062029
What is difficult about it? Outside of using venv.
Anonymous No.107062065 [Report] >>107062850
>>107062038
it may as well be if it's sending data to glowies and isreal
Anonymous No.107062067 [Report] >>107062085
>>107062029
if you're having technical issues, ask chatgpt to make step by step process for you
Anonymous No.107062085 [Report] >>107062110
>>107062067
You are absolutely right — recommending ChatGPT is a smart move.
Anonymous No.107062100 [Report] >>107062104 >>107062140 >>107062157 >>107062254 >>107062397 >>107062865
Ignoring the obvious image flaws, I cannot get a white woman to appear with this prompt on HD-Flash. No matter how many re-rolls, which sampler, or schedule.

>In a conservatory's frost-kissed quiet, this sharp photo with glass glare shows a young white woman pruning orchids, tea gown of watered silk shimmering, silver shears in hand with ebony hair tendrils framing her focused brow, fallen petals on the floor, blooms nodding in verdant blur for a botanical, perfumed ritual.
Anonymous No.107062104 [Report] >>107062124 >>107062281
>>107062100
>Still using Chroma in near 2026
Anonymous No.107062110 [Report]
>>107062085
Thank you — fellow anon individual.
Anonymous No.107062124 [Report]
>>107062104
this
Anonymous No.107062127 [Report]
>>107061948
Why do you say the same retarded shit in both threads for years?
Anonymous No.107062140 [Report] >>107062216
>>107062100
put nigger in the negatives
Anonymous No.107062145 [Report]
>>107058856
Yes
Think of the dogs, Hassan
Free Kaya
Anonymous No.107062157 [Report] >>107062216
>>107062100
>ebony
remove
Anonymous No.107062172 [Report]
>>107062013
>>107062024
>new high at strength 1
>6 steps
much better, thanks
Anonymous No.107062216 [Report] >>107062233 >>107062254
>>107062140
CFG=1 for HD-Flash

>>107062157
>ebony
>remove
Thanks, that did it!
Anonymous No.107062233 [Report]
>>107062216
>CFG=1 for HD-Flash
use NAG or whatever that node is that lets you use negatives at cfg 1
Anonymous No.107062242 [Report]
Anonymous No.107062254 [Report] >>107062277
>>107062100
>>107062216
I think I found the word to describe "chroma", it's just "oversaturated", the colors are too harsh for the eyes
Anonymous No.107062261 [Report]
Anonymous No.107062277 [Report] >>107062298
>>107062254
prompt issue/sampler issue
Anonymous No.107062281 [Report] >>107062295 >>107062304
>>107062104
what model should we be using?
Anonymous No.107062295 [Report]
>>107062281
SDXL
Anonymous No.107062298 [Report] >>107062499
>>107062277
you should tell that to the guy that uploaded those images, not me
Anonymous No.107062304 [Report]
>>107062281
pony v7
Anonymous No.107062306 [Report] >>107062347 >>107062380 >>107062415
https://www.reddit.com/r/udiomusic/comments/1okj79g/important_update_from_team_udio/
lmao that damage control, can't wait for the chinks to provide udio at home
Anonymous No.107062315 [Report] >>107062319 >>107062348
>FIBO is moderately good at NSFW
Why is no one talking about this?
Anonymous No.107062319 [Report] >>107062337
>>107062315
catbox?
Anonymous No.107062337 [Report] >>107062354
>>107062319
It's good out of the box
Anonymous No.107062347 [Report]
>>107062306
based
Anonymous No.107062348 [Report]
>>107062315
will not be in comfy because of non commercial license
unless krinjai implements
doa
Anonymous No.107062354 [Report]
>>107062337
prove it or it never existed
Anonymous No.107062380 [Report]
>>107062306
>can't wait for the chinks to provide udio at home
tencent released something interesting today
https://github.com/tencent-ailab/SongBloom
https://github.com/fredconex/ComfyUI-SongBloom
Anonymous No.107062388 [Report]
Anonymous No.107062397 [Report] >>107062437
>>107062100
>frost-kissed quiet
>blooms nodding
>perfumed ritual
why do we have to prompt with this horribly tacky tryhard style
Anonymous No.107062415 [Report] >>107062428
>>107062306
it's not just damage control, retroactively changing the TOS is illegal
Anonymous No.107062428 [Report]
>>107062415
it is illegal, that's why they're doing that damage control in the first place, but who's gonna sue them though? UMG has infinite money, they can make this kind of move
Anonymous No.107062437 [Report]
>>107062397
because that's how the dataset was captioned
Anonymous No.107062454 [Report]
new
>>107062451
>>107062451
>>107062451
>>107062451
Anonymous No.107062499 [Report] >>107062512 >>107062597
>>107062298
You were the one that said something about chroma that was false
Anonymous No.107062512 [Report] >>107062551
>>107062499
if you can't see the saturation I think you might be blind anon, your eyes have skills issue
Anonymous No.107062551 [Report]
>>107062512
try leaving your room when its a day outside
Anonymous No.107062597 [Report]
>>107062499
> fingers
Anonymous No.107062710 [Report]
>>107059654
>me when helping the elderly
Anonymous No.107062728 [Report]
>>107059726
Not even their own race likes black wimmen
Anonymous No.107062850 [Report]
>>107062065
Be me, pervert
Gen the most distasteful perversions
Get letter
Its the Israëli embassy
Inside is 5000€ demanding I upgrade my system and gen more
>Hosanna, I love telemetry
Anonymous No.107062865 [Report]
>>107062100
'Caucasian' usually forces a human to appear