
Thread 106991205

315 posts 176 images /g/
Anonymous No.106991205 [Report] >>106991226
/ldg/ - Local Diffusion General
Even Comfy Himself Edition

Discussion of Free and Open Source Text-to-Image/Video Models

Prev: >>106988458

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/tdrussell/diffusion-pipe

>WanX
https://comfyanonymous.github.io/ComfyUI_examples/wan22/
https://github.com/Wan-Video

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Neta Lumina
https://civitai.com/models/1790792?modelVersionId=2298660
https://gumgum10.github.io/gumgum.github.io/
https://huggingface.co/neta-art/Neta-Lumina

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
Anonymous No.106991224 [Report] >>106991350 >>106991681
THREE MORE YEARS OF SDXL
Anonymous No.106991226 [Report]
>>106991205 (OP)
>https://gumgum10.github.io/gumgum.github.io/https://huggingface.co/neta-art/Neta-Lumina
fix this link and separate them
Anonymous No.106991228 [Report] >>106991232 >>106991304
Anonymous No.106991232 [Report] >>106991233
>>106991228
How did you achieve that crappy camcorder look? Is it a lora or just some prompting?
Anonymous No.106991233 [Report] >>106991250
>>106991232
it's this
https://civitai.com/models/1134895/2000s-analog-core
Anonymous No.106991238 [Report] >>106991245
how do I create hyperslop
Anonymous No.106991245 [Report]
>>106991238
use a mix or merge created in the last six to nine or so months
Anonymous No.106991246 [Report]
Anonymous No.106991250 [Report]
>>106991233
oh yes I fucking love grainy analog y2k look, slop me more bra
Anonymous No.106991252 [Report] >>106991261
>>106990996
that's just for part of it in the most recent version though, it's probably not a big deal. Mixed NLP / tag captions are generally what you want for this kind of model anyways.

>>106991062
that's not gonna happen lmao, it would take an enormous amount of degradation given the text encoder itself is far superior to CLIP
Anonymous No.106991254 [Report]
Anonymous No.106991261 [Report] >>106991266
>>106991252
>that's not gonna happen lmao, it would take an enormously huge amount of degradation given the text encoder itself is far superior to CLIP
tbqh i think it's saying something considering the model still retains a lot of its original knowledge even after extensive training on anime
Anonymous No.106991264 [Report] >>106991600
Anonymous No.106991265 [Report]
I wish Qwen was nearly 1/5th as good as Chroma
Anonymous No.106991266 [Report]
>>106991261
yeah the realism that must be from base Lumina isn't that degraded at all, you can bring it back pretty easily with boomer prompts
Anonymous No.106991268 [Report] >>106991280
I wish Chroma wasn’t 1/5 the resolution of Qwen
Anonymous No.106991270 [Report]
Anonymous No.106991280 [Report]
>>106991268
KEEEEEEK chroma really was trained at 512x512 in 2025. embarrassing!
Anonymous No.106991282 [Report] >>106991289 >>106991297 >>106991302
Does anyone else notice Yume suffers from duplications at resolutions higher than ~1400px? Need controlnets ASAP.
Anonymous No.106991289 [Report]
>>106991282
Or just a non shit model
Anonymous No.106991297 [Report] >>106991309
>>106991282
Well yeah, clearly it’s not trained above that resolution. Happens with SD1.5 above 768 and SDXL above 1200
Anonymous No.106991302 [Report] >>106991315
>>106991282
not really, I gen at 1536x1536 with it all the time. Even higher every now and then. Could depend on your artist tags though possibly
Anonymous No.106991304 [Report]
>>106991228
Anonymous No.106991306 [Report] >>106993523
Anonymous No.106991307 [Report] >>106991412
i'm using a bunch of different face detailers in my workflow, but i think i would be getting way better results if i took the detected area, resized it, inpainted it at a higher resolution, and then downscaled it. is there a clean way to do this? would simply resizing the whole image before and after work well?
Anonymous No.106991309 [Report]
>>106991297
IDK about Neta Lumina 1.0 but Yume is supposed to have been multi-res trained at between 768 and 1536.
Anonymous No.106991311 [Report]
>try out the double loras for an old gen

Jesus christ, why is it so bad?
Anonymous No.106991315 [Report] >>106991346
>>106991302
>Could depend on your artist tags though possibly
Without a doubt, now that I think about it. Still I would love cnets so I can at least gen at a lower res in order to choose which I throw through a second pass. Like other high(er than XL) res models, it's a cool feature but I much prefer "highres fix"ing instead.
Anonymous No.106991342 [Report] >>106991350 >>106991355 >>106991595
How much longer until local reaches midjourney levels?
Anonymous No.106991345 [Report] >>106991354 >>106991474
>>106991191
Chroma really being carried by prompt engineering here. Can't do this at all if I describe
>Lifeless body of a man
And descriptions as such just gives me body horror.

But then changing that to
>sleeping man, who lays with his arms spread, eyes closed

Is much closer to what I want even if not perfect (I wanted her to hold axe, but then it can't depict it unless I have her standing there alone).

Chroma truly is all about prompt engineering and that's why Plebbitors are sleeping on it.

>>106991193
It's down, but prompt was
>Amateur flash photograph capturing a striking and adventurous beautiful young Japanese idol woman, embodying a mix of fierce determination and ethereal beauty, squatting low in a shadowy woodland clearing at night beside the sleeping man, who lays with his arms spread, eyes closed, extended across the leaf-strewn ground with a faint glimmer of crimson catching the camera's harsh light. She grips a katana, its sharp blade prominently displayed, with dried blood on it, and held in a manner both triumphant and solemn, as if to mark a rite of passage in this rugged outdoor expedition. She has long, dark hair with heavy bangs covering her forehead, and seems to be wearing makeup that creates a tired or distressed look, with smudged eyes and possibly pale skin. She gazes directly into the lens with wide, intense eyes enhanced by subtle makeup—her expression a magnetic blend of quiet pride, melancholy, and idol-like poise—while her chilled cheeks flush with the effort. Her attire is as dark as her: A maid dress, with ripped stockings. The backdrop fades into a veil of dense trees and tangled undergrowth barely touched by the abrupt, brilliant flare of the flash, suggesting the vast obscurity of the woods beyond. The overall scene radiates themes of survival instinct, primal empowerment, and the uncanny allure of an idol transformed into a huntress under the stark, unflinching glow of a nighttime capture.
Anonymous No.106991346 [Report] >>106991370
>>106991315
have you tried just using two KSamplers where everything is exactly the same except for the denoise strength, with an upscale model in the middle? Should work fine at like 0.3 - 0.4 strength, that's how I upscale with Neta sometimes. Unless your reason for using controlnet tile was solely to save memory
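For reference, the reason a low-denoise second pass only refines rather than re-composes: the sampler skips the early high-noise steps and runs just the tail of the schedule. A toy sketch of that relationship (not ComfyUI's actual sigma slicing, just the intuition):

```python
def effective_steps(total_steps, denoise):
    """At partial denoise, a second-pass KSampler skips the early
    high-noise steps and only executes the final fraction."""
    return round(total_steps * denoise)

# a 0.3 - 0.4 denoise pass over 20 scheduled steps only touches
# the last handful of steps, so composition stays locked
print(effective_steps(20, 0.35))  # 7
```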
Anonymous No.106991350 [Report]
>>106991342
>>106991224
Anonymous No.106991354 [Report] >>106991369
>>106991345
You can thank T5 for that. The thing wants extremely literal prompts or else it'll misinterpret it.
Anonymous No.106991355 [Report] >>106991360 >>106991364 >>106991374 >>106991595
>>106991342
It has, MidJourney isn't even close to the top of any benchmark chart that exists anywhere
Anonymous No.106991360 [Report] >>106991368
>>106991355
i think that's the point he's making. local is so shit for the past few years and the next several years while API is constantly raising the bar every week
Anonymous No.106991364 [Report]
>>106991355
There are no valid charts for image models
Anonymous No.106991365 [Report]
Anonymous No.106991368 [Report]
>>106991360
SDXL came out in late June 2023, Flux came out in August 2024.
Anonymous No.106991369 [Report]
>>106991354
Just requires intuition about what works and what doesn't. It can depict two people together perfectly, I have confirmed that thanks to the POV experiments, so after that you just guess what words it wants and where it wants them.
Anonymous No.106991370 [Report] >>106991388 >>106991428
>>106991346
I'm sure it would, but I prefer using latent upscale which requires a high denoise and thus cnets. Pixelspace upscaling is often not terrible, but latent is superior hands down.
>Unless your reason for using controlnet tile was solely to save memory
No, just because I think latent is much better.
Anonymous No.106991374 [Report]
>>106991355
> top of any benchmark chart
like hynuan 3.0?
Anonymous No.106991375 [Report]
all midjourney gens rook same same though. sometimes its okay but often it ruins it.
Anonymous No.106991388 [Report] >>106991397 >>106991484
>>106991370
latent upscale is just using traditional dumb algos to increase the size before you move into the next KSampler, it's not superior in any way to using a purpose trained ESRGAN / DAT / etc model to do the exact same thing, really it's worse by all accounts. I frankly don't understand what you mean.
Anonymous No.106991397 [Report] >>106991402 >>106991423 >>106991428 >>106991549
>>106991388
For one, needing to translate to and from pixel space isn't lossy... so just based on that it's better. Also the use of cnets allows the user more control over how the second pass holds to or departs from the original image. Subjectively, any denoise lower than 0.4 is pointless anyway.
>DAT
Desu the best out of the bunch but still not as good as latent when doing comparisons.
Anonymous No.106991402 [Report]
>>106991397
>isn't lossy.
*isn't lossless
Anonymous No.106991407 [Report]
Give me one good reason why training with synthetic content is a bad thing
Anonymous No.106991412 [Report] >>106991478
>>106991307
Are you using comfy or some variation of forge/webui/etc? What you described is the inpaint behavior in webui if you have "masked area only" set. It scales the area to whatever resolution you have specified, and you can specify a padding to bring in more context around the inpainted region
Anonymous No.106991423 [Report] >>106991469 >>106991484
>>106991397
>needing to translate to and from pixel space isn't lossy
nta but the absolute best upscaling workflow imo would be training DAT but exclusively on VAE degradation. traditional latent upscaling methods aren't great because they're, like the other anon said, using dumb algos like bicubic, nearest, etc. with the very low resolution of the actual latents, this often hurts details more than it helps. the true endgame would be using a model similar to DAT but in latent space, but this would require a much much more powerful arch due to the very low resolution of the latents.
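That pair-generation idea can be sketched like this; the round trip here is a toy quantizer standing in for a frozen VAE encode/decode, and all names are hypothetical:

```python
import numpy as np

def make_degradation_pair(clean, encode, decode):
    """Training pair for a restoration model aimed purely at VAE
    round-trip loss: input is the degraded reconstruction, target
    is the original image."""
    return decode(encode(clean)), clean

# toy stand-ins: coarse quantization as a proxy for VAE information loss
encode = lambda x: np.round(x * 16) / 16
decode = lambda z: z

img = np.random.rand(8, 8, 3).astype(np.float32)
degraded, target = make_degradation_pair(img, encode, decode)
print(float(np.abs(degraded - target).max()) <= 1 / 32 + 1e-6)  # True
```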
Anonymous No.106991428 [Report] >>106991469 >>106991469
>>106991370
>>106991397
You're relying on intuition. Empirically it's better to upscale the raw image, not the latents. Yes it requires one extra pass through the vae, but that isn't as lossy as you think
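One way to see why naive interpolation hurts more in latent space: there is simply far less signal for the algo to work with. A rough count, assuming an SDXL-style VAE with 8x spatial downsampling and 4 latent channels:

```python
# Samples available to the interpolation when 2x-upscaling a 1024x1024 gen.
def sample_count(h, w, channels):
    return h * w * channels

pixel_samples = sample_count(1024, 1024, 3)             # decoded RGB image
latent_samples = sample_count(1024 // 8, 1024 // 8, 4)  # 128x128x4 latent
print(pixel_samples // latent_samples)  # 48
```

This is only a sample-count argument, not a proof of quality either way, but it is why bicubic on a 128x128 latent grid smears detail that the same algo on the decoded pixels would preserve.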
Anonymous No.106991469 [Report] >>106991477 >>106991489
>>106991423
>>106991428
Perhaps. I have done direct 1:1 tests (on XL to be clear) and pixel space has always fucked the outputs. Again, sure it's often not terrible, but latent's superiority virtually jumps out of the screen at me.
>dumb algos like bicubic, nearest, etc.
I wish Comfy had that aliased latent upscale that whatever Forge fork has.
>but that isn't as lossy as you think
This is especially true with *Lumina models but, again, I have done tests, and the benefits of latent far surpass that of pixelspace.
With Chroma I was surprised at how well it holds an image when doing pixelspace upscale second pass, but even that still falls apart when you push the denoise up to anything close to .7.

>>106991428
What is the downside to cnet support, regardless of mine and your points? I don't see a reason to NOT have them desu.
Anonymous No.106991474 [Report]
>>106991345
Hunyuan 3. Yeah, about those benchmark rankings...
Anonymous No.106991477 [Report] >>106991484
>>106991469
>I wish Comfy had that aliased latent upscale that whatever Forge fork has.
this?
Anonymous No.106991478 [Report]
>>106991412
im using comfy right now. resizing the whole image beforehand works, however it seems to mess with bbox detection.
Anonymous No.106991484 [Report] >>106991508
>>106991477
Yes, and that bicubic antialiased.

>>106991388
>>106991423
>traditional dumb algos
Is there a problem with Bislerp? I only use that.
Anonymous No.106991489 [Report] >>106991497
>>106991469
>pixel space has always fucked the outputs
hasn't been the case in my experience. if anything latent upscale often introduced more artifacts for me.
Anonymous No.106991495 [Report] >>106991546 >>106995611
Name one thing chroma does better than other models
Anonymous No.106991497 [Report]
>>106991489
I think often the problem lies in ones cnet settings and prompt. It's a bitch to dial in (especially with some models) but once one does, it's like magic.
Anonymous No.106991508 [Report] >>106991549
>>106991484
Use case for latent upscaling?
(I jumped into this conversation just to help jog your memory i have no idea whats going on im running on 2% brainpower but i wanna see where this goes)

i ran some upscales with latent bicubic antialiased and it looks really good
Anonymous No.106991546 [Report]
>>106991495
It's the only model that has a built-in noise filter
Anonymous No.106991549 [Report]
>>106991508
>>106991397
Even if the loss is minimal, the logical approach is to minimize it as much as possible, as in not translating at all. It is admittedly less now with models like Flux, Lumina, and other modern archs compared to the shit that is XL's. But still, it's there.
For past models, it was most apparent in the colors and high-noise details. Even with a souped-up external VAE.
Anonymous No.106991556 [Report] >>106991568 >>106991572 >>106991577 >>106991583 >>106991903
sell me on using Qwen
Anonymous No.106991566 [Report] >>106991813
Anonymous No.106991568 [Report]
>>106991556
You can use the analog lora and pretend its chroma to trick anons into thinking chroma is actually good
Anonymous No.106991572 [Report] >>106991579
>>106991556
it's like chroma but worse in every way
Anonymous No.106991577 [Report] >>106991591
>>106991556
highest param open image model ever released
Anonymous No.106991579 [Report] >>106991588
>>106991572
Post a chroma guitar with 6 strings and 6 pegs
Anonymous No.106991583 [Report]
>>106991556
it's like chroma but better in every way*
*very bad seed variety and no nsfw
Anonymous No.106991588 [Report]
>>106991579
best i can do is a 1girl
Anonymous No.106991591 [Report]
>>106991577
broski, your hunyuan 3 80B?
Anonymous No.106991594 [Report]
Anonymous No.106991595 [Report]
>>106991355
>>106991342
Local caught up around Flux. That's when its LoRAs were really up there.

For realism, MJ is currently not that good. Pic rel are four MJ gens made not too long ago. SDXL tier crap (though you could argue it's better than SDXL all you want, it's still not Flux tier).
Anonymous No.106991600 [Report] >>106991601
>>106991264
finally... untooned if it was good
Anonymous No.106991601 [Report] >>106991607 >>106992607
>>106991600
do peter griffin
Anonymous No.106991607 [Report] >>106991615
>>106991601
>>>/r/
Anonymous No.106991615 [Report]
>>106991607
it wasn't a request
Anonymous No.106991681 [Report]
>>106991224
SDXL be like
Anonymous No.106991694 [Report] >>106991712 >>106993062
Trying the latent upscale from an anon from before. It won't work when using the same image as the last frame. I guess this is because the low noise now has the upscaled resolution, but am I not feeding it the upscaled resolution?
Anonymous No.106991704 [Report] >>106991711 >>106991863 >>106993234 >>106993262
https://noamissachar.github.io/DyPE/
slop in 4k let's goo!
Anonymous No.106991711 [Report]
>>106991704
chromaxysters... we won...
Anonymous No.106991712 [Report] >>106991742
>>106991694
Oh, I was talking only about images. No idea for videos.
Anonymous No.106991736 [Report] >>106992494 >>106992753 >>106993220 >>106993259
https://xcancel.com/bdsqlsz/status/1981610051422040067#m
new cope soon(TM)
Anonymous No.106991738 [Report] >>106991740 >>106991824 >>106993066
>Keeps models loaded onto vram even after closing
>Logs prompts and send them for """telemetry"'" purposes
>Will soon be closed source
Tell me again why Comfy is good?
Anonymous No.106991740 [Report]
>>106991738
>Logs prompts and send them
me when I lie
Anonymous No.106991742 [Report] >>106991953
>>106991712
Shit. Well I got it to not error by doing pic related. But the genned result is just a static image.
Anonymous No.106991746 [Report] >>106991749
ComfyUI Hijacks your phone and sends your dick pic to Comfyanon himself
Anonymous No.106991749 [Report]
>>106991746
but i already do that myself
Anonymous No.106991753 [Report] >>106991813
Anonymous No.106991813 [Report] >>106991825 >>106991845
>>106991753
>>106991566
interested in recipe
Anonymous No.106991824 [Report]
>>106991738
Why is Comfy such a promptlet that he needs to steal other people's prompts?
Anonymous No.106991825 [Report]
>>106991813
i'll try, but share places are getting retarded...
Anonymous No.106991845 [Report] >>106992002 >>106992858
>>106991813
gettem while they're hot
https://litter.catbox.moe/9alwt7ad0aziq1r9.png
https://litter.catbox.moe/fhji583fokthr8h8.png
Anonymous No.106991863 [Report]
>>106991704
Looks insane
Anonymous No.106991871 [Report] >>106991930 >>106991945 >>106992871 >>106992977
Wansisters, long vid 2.2 is here

>State-of-the-art text-to-video models excel at generating isolated clips but fall short of creating coherent, multi-shot narratives—the essence of storytelling. We bridge this "narrative gap" with HoloCine, a framework that generates entire scenes holistically to ensure global consistency from the first shot to the last. Our architecture achieves precise directorial control through a Window Cross-Attention mechanism that localizes text prompts to specific shots, while a Sparse Inter-Shot Self-Attention pattern—dense within shots but sparse between them—ensures the efficiency required for minute-scale generation. Beyond setting a new state-of-the-art in narrative coherence, HoloCine develops remarkable emergent abilities: a persistent memory for characters and scenes, and an intuitive grasp of cinematic techniques. Our work marks a pivotal shift from clip synthesis towards automated cinematic storytelling.

https://github.com/yihao-meng/HoloCine
https://huggingface.co/hlwang06/HoloCine/tree/main/HoloCine_dit/full
https://holo-cine.github.io/
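The "dense within shots, sparse between them" attention pattern from the abstract can be illustrated with a toy block mask; this only shows the dense-within-shot part (the real pattern also keeps sparse cross-shot links, and none of these names come from the repo):

```python
import numpy as np

def within_shot_mask(shot_lengths):
    """Boolean attention mask that is dense inside each shot and
    blocked between shots (toy version, no sparse cross-shot links)."""
    n = sum(shot_lengths)
    mask = np.zeros((n, n), dtype=bool)
    start = 0
    for length in shot_lengths:
        mask[start:start + length, start:start + length] = True
        start += length
    return mask

# two shots of 3 and 2 tokens -> 5x5 block-diagonal mask
m = within_shot_mask([3, 2])
print(m.astype(int))
```

Blocking most cross-shot pairs is what keeps attention cost manageable at minute-scale lengths, since the quadratic term only applies within each shot.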
Anonymous No.106991877 [Report]
Anonymous No.106991883 [Report] >>106992294 >>106992472
Reminder the next release of Wan is already showing to be better than Sora2
Anonymous No.106991903 [Report]
>>106991556
qwen image is boring and has terrible seed rng variety.
Anonymous No.106991911 [Report]
>order 96gb of ram because it's the only thing in stock and other stores have no date for restock
>meant to be delivered yesterday
>got delayed till monday
>get an email now that it's out of stock

But searching again led me to a 192gb pack that's completely in stock and arrives monday.

What a blessing.
Anonymous No.106991921 [Report] >>106992043
What is the Jeets recommendation for a good image model?
Anonymous No.106991930 [Report] >>106991950
>>106991871
It needs the new code to work properly, kijai is supposedly working on it at the moment.
Finally 5 sec slop will stop, been waiting for this for like a year.
Anonymous No.106991945 [Report] >>106991954
>>106991871
>no audio+video combined generation.
fuck off with this bullshit.
>57.2gb
dead in the water, even ovi would have better potential community support if properly integrated with comfy, wan2gp and neoforge.
Anonymous No.106991950 [Report]
>>106991930
>gen 10 minute video
>prompt completely fails 6 minutes in
5 seconds will always be superior
Anonymous No.106991953 [Report] >>106991976
>>106991742
I can't get around this it seems.
I guess using a last frame doesn't work when low noise is starting from step 0 with less than 1 on denoise?
Anonymous No.106991954 [Report]
>>106991945
Can't tell if trolling or legitimately braindead.
Anonymous No.106991957 [Report]
Anonymous No.106991971 [Report]
https://youtu.be/9HwCNiUtYv4
gn
Anonymous No.106991976 [Report] >>106992008
>>106991953
Compared to just using first frame. Stuff actually happens and the latent upscale is working.
Anonymous No.106992002 [Report] >>106992858
>>106991845
got, tyvm
Anonymous No.106992008 [Report]
>>106991976
I went back to the original workflow and hooked up one single thing and it just works..
I shouldn't be doing these things after waking up and desperately needing to take a shit.
Anonymous No.106992040 [Report]
Anonymous No.106992043 [Report]
>>106991921
Are you asking in order to avoid it?
Anonymous No.106992093 [Report] >>106992166
Why are there so many gay loras for Chroma?
Anonymous No.106992102 [Report]
Anonymous No.106992166 [Report]
>>106992093
because the chroma creator is a gay furry (I'm not joking)
Anonymous No.106992175 [Report]
Anonymous No.106992190 [Report] >>106992214 >>106992229 >>106992235 >>106992725 >>106992966
can you do this shit with 16gb vram? i stopped paying attention to new models after flux because i was already pushing the limits of my card
Anonymous No.106992209 [Report] >>106992256 >>106992383
Anonymous No.106992214 [Report] >>106992235
>>106992190
You can do that on 3GB of vram
Anonymous No.106992229 [Report] >>106992725
>>106992190
That example looks untrustworthy. Qwen-E is good at preserving text style and combining images but a restoration like that seems out of its reach.
Anonymous No.106992235 [Report] >>106992256 >>106992633
>>106992190
>>106992214
Can I use it to make nudes (of adults)?
Anonymous No.106992256 [Report]
>>106992235
Ask >>106992209
Anonymous No.106992271 [Report] >>106992492
Anonymous No.106992294 [Report] >>106992381
>>106991883
Of course it is, they upgraded to SaaS for Wan2.5 which is why they were able to compete
Anonymous No.106992381 [Report]
>>106992294
retard
Anonymous No.106992383 [Report]
>>106992209
i love me some plastic
Anonymous No.106992386 [Report] >>106992615
wan2.5 will be local just like mogao, only two more weeks of waiting!
Anonymous No.106992404 [Report]
Anonymous No.106992439 [Report]
>saastech so powerful it let Wan skip over 2.3 and 2.4
It’s no surprise local is so far behind, SaaS must be literal magic
Anonymous No.106992472 [Report]
>>106991883
>the next release of Wan is already showing to be better than Sora2
you mean wan 3.0?
Anonymous No.106992492 [Report] >>106992642
>>106992271
I tried qwen image edit but the workflow says it needs more than 16gb vram and indeed it did not work
Anonymous No.106992494 [Report]
>>106991736
let's hope it won't be another slopped shit this time, wake the fuck up chinks and stop training your models with synthetic data
Anonymous No.106992519 [Report]
Anonymous No.106992607 [Report]
>>106991601
the homer one isnt ai its just an old thing someone by the name of pixeloo made, they called it untoons
Anonymous No.106992615 [Report] >>106992625 >>106992638
>>106992386
ltx 2 seems way better anyway and that's confirmed to be open source in november and running on consumer gpus, alibaba can suck it.
Anonymous No.106992625 [Report]
>>106992615
the ltx guys always give the distilled shit model though no?
Anonymous No.106992633 [Report]
>>106992235
Yes, with the clothes remover lora.
Go back a few threads for a link.
Hopefully it still works.
Anonymous No.106992638 [Report] >>106992649
>>106992615
It’s also a western model, and western models are better quality than chinese slop. It’s just we rarely get weights without bullshit attached
Anonymous No.106992642 [Report]
>>106992492
currently running a qwen image edit on my 16gb card, amd at that
so you're on several layers of skill issues here
Anonymous No.106992649 [Report] >>106992655
>>106992638
>It’s just we rarely get weights without bullshit attached
when was the last time we got a non distilled western model? lool
Anonymous No.106992653 [Report] >>106992660
Why is shitjai still paying attention to those absolute svi dogshit loras?
The new holo finetune seems orders of magnitude better; guess he's too dumb to work with different and complex code when he vibecodes with claude.
Anonymous No.106992655 [Report]
>>106992649
was sd3 distilled? sd3 also had bullshit attached with the license though. maybe sd cascade or sdxl
Anonymous No.106992660 [Report] >>106992666
>>106992653
You should do it since you have everything figured out
Anonymous No.106992666 [Report]
>>106992660
Don't need to till so much autistic finngolian
Anonymous No.106992691 [Report]
comfyui and forge should switch names
Anonymous No.106992725 [Report] >>106992731 >>106992924
>>106992190
>>106992229
I just ran it through qwen edit no loras because you had me curious.
Prompt:
adjust the color of the image to a realistic photo
Anonymous No.106992731 [Report]
>>106992725
input
Anonymous No.106992753 [Report]
>>106991736
he didn't say when it will be released?
Anonymous No.106992777 [Report]
Anonymous No.106992803 [Report]
man I love the pornmix plastic sloppa
Anonymous No.106992855 [Report] >>106992924
not too bad honestly
a bit slopped but eh
Anonymous No.106992858 [Report]
>>106991845
>>106992002
Fuck, I missed it at work
Reup please?
Anonymous No.106992861 [Report]
when will local reach this level of kino? >>>/wsg/6008898
Anonymous No.106992868 [Report]
Anonymous No.106992871 [Report]
>>106991871
>HoloCine
16 seconds is hype. I'm staying optimistic until I run this myself
I'm already bored of video without audio now though
Anonymous No.106992924 [Report]
>>106992855
>>106992725
Anonymous No.106992956 [Report]
Anonymous No.106992966 [Report]
>>106992190

manual edits with 2GB ram
Anonymous No.106992968 [Report]
Anonymous No.106992977 [Report] >>106992993
>>106991871
looks kino desu
cumfart when????
Anonymous No.106992993 [Report] >>106993965 >>106994089
>>106992977
Kijai seems to be struggling with the implementation at the moment
Anonymous No.106993008 [Report]
Anonymous No.106993015 [Report]
Anonymous No.106993062 [Report] >>106993079
>>106991694
dude don't bother. the output is slopped and the background is grainy. anon must've been trolling
Anonymous No.106993066 [Report]
>>106991738
none of those are true, julien
Anonymous No.106993079 [Report] >>106993118
>>106993062
No it's working for me. The quality is equivalent to going 720p on low noise, but the motion is enhanced.
Anonymous No.106993104 [Report]
Anonymous No.106993118 [Report] >>106993132
>>106993079
I'm looking at your workflow and both samplers are genning at 704p, no?
Anonymous No.106993132 [Report]
>>106993118
Oh ignore that one, I wrote in a later post that I went back to the original one and it's working, but it's not quite as good for last frame, weird things happening.
Anonymous No.106993141 [Report]
Anonymous No.106993220 [Report]
>>106991736
Oh god, if it's good enough to kill chroma I'm all for it.
Anonymous No.106993234 [Report]
>>106991704
LET'S GET DYPED UP DYPER BROTHERS
Anonymous No.106993259 [Report] >>106993282
>>106991736
>trusting this faggot when he said the same about the new wan model
LOL
Anonymous No.106993262 [Report]
>>106991704
uhmmm sisters??? this doesnt look right
Anonymous No.106993282 [Report] >>106993290
>>106993259
>he got wrong one time out of 1000 therefore we shouldn't trust him anymore
meh, he still has a great ratio though
Anonymous No.106993290 [Report] >>106993299
>>106993282
the wan ragpull left a deep scar man
Anonymous No.106993299 [Report] >>106993302 >>106993325 >>106993331
>>106993290
When ltx 2 gets out wan is basically dead anyway.
Anonymous No.106993301 [Report] >>106994131
So I take it neta lumina isn't good with text
Anonymous No.106993302 [Report]
>>106993299
meh, I saw some ltx 2 videos, the sound is atrocious
Anonymous No.106993325 [Report]
>>106993299
doesnt really look better visually, we'll see how it trains and how it holds coherence, but wan 2.2 is already pretty good for physics and for cartoony art styles

the 4k 50fps long generations are good on paper but mean little if the videos genned ultimately look like 720p "upscales"

although it's obvious ltx was trained very heavily on veo 3, given it copies its voice styles very closely, so at least we will have veo 3 mini at home for ok audio and video gen

and we will also see about speed compared to wan
Anonymous No.106993331 [Report]
>>106993299
from the few clips I've seen it looks really slopped, maybe for I2V it will be good though, I want my own I2V grok meme generator at home >>>/wsg/6009078
Anonymous No.106993371 [Report] >>106993379 >>106993443 >>106993475 >>106993529 >>106995603 >>106995726
The based chinks are waiting for someone to btfo them hard before they finally have to pull out the trump card of just saying fuck ip "rights" and training on the entirety of youtube (which they must have been scraping all this time, like sora 2) plus every movie and cartoon ever made, to finally get a huge boost in model quality and knowledge.

They don't want to do it too soon because if they put out a great model that knows all popular media:
1. IP "rights" holder companies will put large pressure on China to shut it down.
2. They will have no more trump cards until they can make their own gpus which wont be for a couple more years and everyone else will be able to train on their models while adding their own advancements, leaving China to follow behind

So by always having this extra aspect of being able to train on copyrighted media, they have a reasonably big leeway to do whatever and always be able to add the extra high quality copyrighted dataset spice to get juust near the top of the list of good gen ai models
Anonymous No.106993376 [Report]
When will the based chinkoids finally release a vram monster
I'm rooting for the insects
Anonymous No.106993379 [Report] >>106993389
>>106993371
>The based chinks
I'll call them based the day they'll really do train their model on the entirety of youtube like OpenAI did
Anonymous No.106993389 [Report] >>106993397
>>106993379
What matters is releasing good models, and wan 2.1 was a huge jump that they didn't need to release and that won't be matched any time soon; there was no pressure in that space from anyone else, hunyuan was okish but still very much a toy
Anonymous No.106993396 [Report] >>106993539
So how does bucketing and batch size work together?
I am training with a batch size of 2. I have some buckets with odd number of images.
I have 65 images, no repeats and 10 epochs. This should give me 325 steps, based on batch size 2.
Judging by the fact that I have 360 total steps, I am guessing the training script is doing some steps with batch size 1 to compensate for the odd numbered buckets.
The question is, does this have an adverse effect on the training quality? Should I manually resize or use higher bucket steps like 128?
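If it helps, this is the step math I'm assuming (the bucket split below is hypothetical, just to illustrate, not your actual dataset): each bucket is batched separately, so every odd-sized bucket produces one short batch (size 1) per epoch, and total steps come out above len(images) * epochs / batch_size.

```python
import math

def steps_per_epoch(bucket_sizes, batch_size):
    # Each bucket is batched independently; an odd-sized bucket
    # yields one final short batch instead of borrowing from another bucket.
    return sum(math.ceil(n / batch_size) for n in bucket_sizes)

# Hypothetical split of 65 images, 7 of the buckets odd-sized:
buckets = [9, 9, 9, 9, 9, 9, 9, 2]  # sums to 65
print(steps_per_epoch(buckets, 2) * 10)  # → 360 over 10 epochs, vs 325 if 65/2 divided evenly
```

With a split like that, 360 total steps falls out exactly, so the extra 35 steps would just be the short batches, one per odd bucket per epoch.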
Anonymous No.106993397 [Report] >>106993412
>>106993389
>hunyuan was okish but still very much a toy
Tencent has the balls to put nudity on their models, but Wan has more competent engineers, unfortunately :(
Anonymous No.106993412 [Report] >>106993415 >>106993750 >>106995479
>>106993397
Wasn't the Wan 2.1 chinese project page memed for having coombait women in their literal cherry picked examples at the beginning?

Also it's very good at everything NSFW with any NSFW lora.
Anonymous No.106993415 [Report] >>106993422
>>106993412
>Wasn't the Wan 2.1 chinese project page memed for having coombait women in their literal cherry picked examples at the beginning?
I remember that, it was a fake website unfortunately kek
Anonymous No.106993422 [Report] >>106993443
>>106993415
I was here the entire time and don't remember it being exposed as a fake website. I don't feel like going through the archives, and my second point still stands to disprove the censorship part
Anonymous No.106993443 [Report] >>106993498
>>106993422
>my second point still stands to disprove the censorship part
your second point destroys your initial argument though >>106993371

you said "you want NSFW, just do a lora bro" but at the same time you want IP shit on the base model, why can't we respond to that "you want IP characters on the model? just do a lora bro"
Anonymous No.106993467 [Report] >>106993476 >>106995525
https://xcancel.com/maxescu/status/1981416100303950309#m
it's so fucking plastic, why do they all train their model on synthetic shit, I'm going craaazzyyy, only OpenAI doesn't do that
Anonymous No.106993475 [Report] >>106993481
>>106993371
>based chinks
first, they're not based at all. second, they can't beat sora 2. or maybe in 5 years kek
Anonymous No.106993476 [Report]
>>106993467
>https://xcancel.com/maxescu/status/1981416100303950309#m
>flux chin
DOA
Anonymous No.106993481 [Report]
>>106993475
>they can't beat sora 2. or maybe in 5 years kek
they'll never beat sora 2 if they keep training their model on synthetic data, one day they must learn that they can't cheap out on the dataset, it'll always be the most important thing in deep learning, period
Anonymous No.106993498 [Report] >>106993514
>>106993443
When mentioning the censorship I meant it's not censored against nsfw, given that it so easily learns any nsfw concept in lora training.

With IP characters, it doesn't learn them as fast, and they're not the same type of data to expect the model to generate as nsfw. There's a difference between a company training heavily on youtube and, when asked about IP rights, saying "oh well we trained on everything like sora 2 did", versus them training on a huge dataset of literal porn.

And I'm not saying a model should be limited in the data it's trained on, even when it comes to porn, given the anatomy benefits. It's just that training on porn is not something we can almost ever really expect a model company to do. So when it comes to the discussion of censorship, what that really means is: as long as the model isn't specifically trained against genning nsfw, or lobotomized to the tier of sd3 so it can't even generate women, that's a good enough sign that the model's core wasn't "censored"/lobotomized.
Anonymous No.106993514 [Report] >>106993529
>>106993498
>there is a difference for a company to train heavily on youtube and when asked about IP rights say "oh well we trained on everything like sora 2 did" versus them training on a huge dataset of literal porn.
it's way more dangerous to train on IP, you can piss off anime artists, celebrities... OpenAI is getting some heat recently because of that, copyright is something serious, really serious
Anonymous No.106993523 [Report] >>106996101
>>106991306
Not bad. Can you do NTSC artifacts like cross-color, dot-crawl, dot-hang?
Anonymous No.106993529 [Report] >>106993571 >>106993681
>>106993514
Right, that's what I said at 1. >>106993371
And that's why sora 2 limited their gens quickly after, to show what was possible, try to push the overton window on what should be acceptable, to gain popularity and funding, but without stirring the industry too much. Although it's all inevitable, thankfully.
Anonymous No.106993532 [Report]
Divine axioms of diffusion:
1: SaaS is years ahead of local
2: China mogs the west
Therefore it’s easy to understand why Wan stopped releasing local models.
Anonymous No.106993539 [Report]
>>106993396
>So how does bucketing and batch size work together?
Some say it has an effect, but gradient checkpointing should negate it, and that's always on for me, so I haven't even thought about it. Might be worth testing out.
Anonymous No.106993549 [Report] >>106993648 >>106993674 >>106993796 >>106995536
why is comfy ignoring the openpose controlnet?
Anonymous No.106993571 [Report]
>>106993529
OpenAI did push the overton window, but I don't believe this was their intention, they just wanted hype by showing their model could do Will Smith playing ping pong against 2pac, ff7 style and shit. they know what people like, so they bait by letting them do copyright shit for like one week and then switch to stay safe. they did this on 4o and dalle3 as well, I'm NOOOTICING the pattern at this point

but hey, everything that pushes the overton window in the right direction is welcome, even if it's not being done intentionally
Anonymous No.106993610 [Report]
Anonymous No.106993648 [Report]
>>106993549
>trannymai
good
Anonymous No.106993674 [Report]
>>106993549
you need to go back >>106970615
Anonymous No.106993681 [Report]
>>106993529
>Although it's all inevitable, thankfully.
there will be a long fight before it being normalized though, I don't believe copyright companies will give up that easily
Anonymous No.106993750 [Report]
>>106993412
>Also its very good at everything NSFW with any NSFW lora
lol no. it's ok at best if you stack half a dozen loras and fuck around with strengths
Anonymous No.106993796 [Report]
>>106993549
cute
Anonymous No.106993798 [Report] >>106993816 >>106993853 >>106994036 >>106994575
https://github.com/bytedance-fanqie-ai/MoGA
Make OpenSource Great Again!
Anonymous No.106993816 [Report] >>106993837
>>106993798
Either jump on the API train or get run over by it, API is the future.
Anonymous No.106993825 [Report] >>106993831 >>106993837
Does Chroma and Qwen share workflows? Or would I need to set up different nodes for each one? Do they work similarly to Flux Kontext?
Anonymous No.106993831 [Report]
>>106993825
just check the default templates, retard.
do you know how to breathe?
Anonymous No.106993837 [Report] >>106994005
>>106993825
>Does Chroma and Qwen share workflows
no, i mean you wouldn't use different nodes but you would use different settings
>>106993816
this is the local diffusion general. fuck off
Anonymous No.106993853 [Report] >>106993869
>>106993798
>more buttdance scraps
There is not a single thing they released that is actually useful. Bytedance literally only releases garbage
Anonymous No.106993869 [Report]
>>106993853
>Bytedance literally only releases garbage
to be fair, they seem to only have made failures, Seedream 4.0 is the only successful model they have lol
Anonymous No.106993965 [Report] >>106994089
>>106992993
Where can i read "Mien Comfyui" ?
Anonymous No.106994005 [Report] >>106994054
>>106993837
you can load apis on applications like comfy retard.
Anonymous No.106994023 [Report] >>106994090
Anonymous No.106994036 [Report]
>>106993798
>Make OpenSource Great Again!
not thanks to free poop models of jewdance
Anonymous No.106994054 [Report]
>>106994005

I can't generate that video. Try describing another idea. You can also get tips for how to write prompts and review our video policy guidelines.
Anonymous No.106994089 [Report] >>106994198
>>106992993
>>106993965
Nevermind, I forgot the mongol is only interested in i2v and control shit; he's still wasting time with the useless svi loras.
Probably won't even work on the holocine implementation, which is insane considering long gens are what everyone has been waiting for forever.
Anonymous No.106994090 [Report] >>106994253 >>106994382
>>106994023
Man I am not even trying to be a "hater" but can you look at your gens for longer than 2 seconds before posting them here?
She has like 8 fingers in her right hand.
Anonymous No.106994131 [Report] >>106994378
>>106993301
Use NetaYume, not the original Neta Lumina, if you aren't already. They can do it decently enough, DPM++ 2S Ancestral Linear Quadratic seems to give the most consistently good results for it. Particularly long text support definitely isn't as strong as in e.g. Flux or Qwen though.
Anonymous No.106994198 [Report]
>>106994089
i2v does a lot of the heavy lifting for getting a satisfactory gen, so it's understandable, though not desirable.
Anonymous No.106994253 [Report] >>106994951
>>106994090
>he doesn't have 8 (6) fingers on his right hand.
Anonymous No.106994264 [Report] >>106994275
anyone know how i can have an image to 3d set up?
Anonymous No.106994275 [Report] >>106994284
The new lightx2 loras from a couple days ago (yesterday?) seem quite good. Just running them at 1 strength. I guess there's still some slowmo.

>>106994264
What do you mean by 3d? Do you want to make a 3d model or do you want to make a 3d video that rotates around the subject?
Anonymous No.106994284 [Report] >>106994295
>>106994275
yes i want a 3d model. i use sparc 3d now, but it takes forever to get a turn
https://huggingface.co/spaces/ilcve21/Sparc3D
Anonymous No.106994295 [Report] >>106994320
>>106994284
Idk about that model specifically but if you have a decent GPU you can just try cloning their repo and running it locally. Lots of those example apps on huggingface can just be cloned and run locally.
Anonymous No.106994311 [Report] >>106994339
Anonymous No.106994320 [Report]
>>106994295
i have no idea how to set this up. i just set up a text-to-image generator once using A1111
Anonymous No.106994339 [Report] >>106994429
>>106994311
Flux sometimes just kills ittttttt
Anonymous No.106994375 [Report]
any work on low step Lumina models? 30-50 steps is too much
Anonymous No.106994378 [Report] >>106995133
>>106994131
Using Yume with comfy's default workflow, consistently fucks up on a short phrase
Anonymous No.106994382 [Report]
>>106994090
on sdxl, hands are very difficult to get right, especially when prompting for complex poses with foreshortening and combat involved. adetailer can't fix all the aspects of bad hands and fingers.
Anonymous No.106994390 [Report]
i regret testing sora 2. it's hard to go back to mute videos now. and it isn't like we have the best mute video models anyway
Anonymous No.106994394 [Report] >>106994410
Any tips for prompt adherence for WAN2.2 not to zoom in randomly? I feel I hit this more than slowmo nowadays
Anonymous No.106994410 [Report] >>106994445
>>106994394
prompt in chinese. works 110%
Anonymous No.106994429 [Report]
>>106994339
Anonymous No.106994443 [Report] >>106994467
Anonymous No.106994445 [Report] >>106994452
>>106994410
Don't know if you're trolling me or not but will try it out lol
Anonymous No.106994452 [Report] >>106994639 >>106995518
>>106994445
not even kidding. give it a whirl
Anonymous No.106994467 [Report] >>106994552
>>106994443
realism >>>>>>>
Anonymous No.106994512 [Report] >>106994696
>>106994229
is this a good plan for an application?
Anonymous No.106994552 [Report] >>106994569
>>106994467
thats called uncanny valley desu
Anonymous No.106994569 [Report]
>>106994552
not saying its bad. jus pref.
Anonymous No.106994575 [Report]
>>106993798
Hopefully it's not another dead project that'll never release their model. Speaking of released models, I wonder if Kijai or anyone knows that Rolling Forcing is already out https://huggingface.co/TencentARC/RollingForcing/tree/main/checkpoints
Anonymous No.106994639 [Report]
>>106994452
Did not work, trying another gen without any loras to see if there's any weird interaction fucking up.
Or maybe I suck at prompting
Anonymous No.106994655 [Report] >>106994667 >>106994773 >>106994823
Anonymous No.106994667 [Report]
>>106994655
Anonymous No.106994696 [Report]
>>106994512
better than cumfart at least
Anonymous No.106994734 [Report] >>106994826
Anonymous No.106994773 [Report] >>106994817
>>106994655
based ani
Anonymous No.106994817 [Report]
>>106994773
wtf i love julien now
Anonymous No.106994823 [Report] >>106994842
>>106994655
What is this platform? Just a slop character interaction ui? Can the avatars change?
Anonymous No.106994826 [Report] >>106994829
>>106994734
>>106985727
Anonymous No.106994829 [Report] >>106994839
>>106994826
there it is ;)
Anonymous No.106994839 [Report]
>>106994829
glad i could help o/
Anonymous No.106994842 [Report]
>>106994823
civitai pony v7 comment section replies
Anonymous No.106994880 [Report]
Trying the 2.2 distilled loras, not too shabby.
Anonymous No.106994951 [Report] >>106995173
>>106994253
yes I love 1-2-3girls laughing at me. Wheres the laughing at me gens?????????
Anonymous No.106995080 [Report] >>106995092
I prefer "girls zapping me with magic lightning bolts" personally although that's more of a video prompt.
Anonymous No.106995092 [Report] >>106995120
>>106995080
thanks for the idea kind anon, ill make some kino zapping 1girls!
Anonymous No.106995120 [Report]
>>106995092
Looking forward to it
Anonymous No.106995123 [Report] >>106995138
why is chroma full of shitty gay loras, it's sad
Anonymous No.106995125 [Report] >>106995223 >>106995420
What the fuck, I can't open workflows in comfyui anymore, but opened the last one 5 minutes ago. Did not update, just restarted.
Did this shit update by itself silently or what?
> [DEPRECATION WARNING] Detected import of deprecated legacy API
Anonymous No.106995133 [Report] >>106995145
>>106994378
Try the specific sampler / scheduler combo I mentioned
Anonymous No.106995138 [Report]
>>106995123
create good straight loras. you can train, right?
Anonymous No.106995145 [Report]
>>106995133
(samefag) woops I should have mentioned also, around CFG 4.5 to 5.5 is best.
Anonymous No.106995157 [Report]
Anonymous No.106995165 [Report]
Anonymous No.106995173 [Report] >>106995177
>>106994951
Anonymous No.106995177 [Report]
>>106995173
can you make it of anime girls
Anonymous No.106995223 [Report] >>106995312
>>106995125
>deprecated legacy API
>comfy deprecating all API nodes
based
Anonymous No.106995229 [Report] >>106995261 >>106995279
is 5090 worth it
Anonymous No.106995251 [Report]
Giving video gen a shot, I downloaded wan2GP, 32gb system ram + 16gb 5070.
Are 5s/step on the 1.3B t2v model expected or am I doing something wrong?
Anonymous No.106995261 [Report]
>>106995229
if you are thinking of a 5090 in terms of "value" then no. Like all high-end hardware (speakers/cameras/headphones/whatever) it isn't about value, it is about how much you enjoy owning high-end shit and seeing the marginal advantages.
Anonymous No.106995276 [Report] >>106995280
Kijai-Sama, please, I can only test so many models

>holocine

https://huggingface.co/Kijai/WanVideo_comfy_fp8_scaled/tree/main/T2V/HoloCine
Anonymous No.106995279 [Report]
>>106995229
not really. the VRAM helps fit models but the speed isn't much better than a 4090
Anonymous No.106995280 [Report]
>>106995276
>t2v
but I want i2v
Anonymous No.106995285 [Report]
Anonymous No.106995312 [Report]
>>106995223
it's comfyui manager and rgthree that give these warnings
Anonymous No.106995348 [Report]
what is the prompt to consistently remove all people in the scene while keeping the viewpoint unchanged in wan i2v?
Anonymous No.106995386 [Report] >>106995410 >>106995417 >>106995421 >>106995438
Can an anon catbox a decent NetaYume workflow?
Anonymous No.106995407 [Report] >>106995474 >>106995517
qwen image, analogcore 2000s lora
Anonymous No.106995410 [Report]
>>106995386
I could
Anonymous No.106995417 [Report]
>>106995386
he did in previous
Anonymous No.106995420 [Report]
>>106995125
Try --disable-api-nodes
Anonymous No.106995421 [Report]
>>106995386
here, I dedicate to you my first gen of the day
Anonymous No.106995438 [Report] >>106995444
>>106995386
There is nothing special with Yume workflows desu.
Anonymous No.106995441 [Report]
Anonymous No.106995444 [Report]
>>106995438
there is nothing special with yume
Anonymous No.106995456 [Report]
>see a cool lora on civitai
>early access and you need to pay for it to download it
Anonymous No.106995474 [Report]
>>106995407
>long dick general
Anonymous No.106995479 [Report] >>106995484
>>106993412
> Also its very good at everything NSFW with any NSFW lora.
as long as nsfw is not genitals or sexual acts
Anonymous No.106995484 [Report]
>>106995479
?
https://civitai.com/user/LocalOptima/models
Anonymous No.106995507 [Report] >>106995537
>genning funny reaction images
>results are complete shit with cartoon stuff
>find funny baby
>have it act like a footballer witnessing a goal
>turns out great
>continue with other images
>start contemplating in the back of my mind
>realize what people can do with photos of kids
>truly realize
>am aware of the realization

Local needs to be banned.
Anonymous No.106995517 [Report] >>106995530
>>106995407
oh yeah broi gimme the grain and analog oh yeah I love shitty photos that remind me of crappy cameras ohb yeah bro i can feel the soul bro ycamcorder bro yeah bro give it to me bro
Anonymous No.106995518 [Report]
>>106994452
apparently it's either the light or the fusion loras that add the movement, without them camera stays static, but quality becomes ASS
Anonymous No.106995525 [Report]
>>106993467
The model is probably cucked too, but at least we got a hypothetical Flux video.
Anonymous No.106995530 [Report] >>106995542 >>106995660
>>106995517
the problem bro is that you're non-white bro
Anonymous No.106995535 [Report]
tfw my wife will never launch lighting bolts at me
Anonymous No.106995536 [Report]
>>106993549
rgb -> bgr
Anonymous No.106995537 [Report]
>>106995507
And that's why I dont put personal stuff online anymore.
Bringing back printed family albums
Anonymous No.106995542 [Report]
>>106995530
and that's a good thing, i'd hate to be a minority like a nigger
Anonymous No.106995554 [Report]
I am not even gonna give a (You) to that fucking redditor
Anonymous No.106995603 [Report]
>>106993371
They don't have leeway to do anything. US/ClosedAI could do it, but not them, plus copyright holders would attempt to charge them double the tax.
Anonymous No.106995607 [Report]
Anonymous No.106995611 [Report] >>106995659
>>106991495
Porn
and easier lora training
Anonymous No.106995659 [Report]
>>106995611
He can't train unfortunately
Anonymous No.106995660 [Report] >>106995668
>>106995530
whiter than you post hand
Anonymous No.106995668 [Report]
>>106995660
kek
Anonymous No.106995680 [Report]
>>106995676
>>106995676
>>106995676
>>106995676
Anonymous No.106995726 [Report]
>>106993371
Anon, stop being delusional, we don't even have open-source T2I models with dalle3's level of pop culture knowledge, so for video it's a given to be "never ever" in that regard.
Chinks don't care about having a video model with trillions of parameters that "knows everything", they just want a model that performs well enough on benchmarks while being small enough to run on their gpu-embargoed datacenters
Anonymous No.106996101 [Report]
>>106993523
Unsure if specifics like that were trained into the lora, I did attempt to prompt for extra haloing / rainbowing but it didn't do anything.