Newer Stable Diffusion Edition
Discussion of Free and Open Source Text-to-Image/Video Models
Prev: >>105882061
https://rentry.org/ldg-lazy-getting-started-guide
>UI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassic
SD.Next: https://github.com/vladmandic/sdnext
ComfyUI: https://github.com/comfyanonymous/ComfyUI
Wan2GP: https://github.com/deepbeepmeep/Wan2GP
>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com
https://tensor.art
https://openmodeldb.info
https://openart.ai/workflows/home
>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/tdrussell/diffusion-pipe
>WanX (video)
Guide: https://rentry.org/wan21kjguide
https://github.com/Wan-Video/Wan2.1
>Chroma
Training: https://rentry.org/mvu52t46
>Illustrious
1girl and beyond: https://rentry.org/comfyui_guide_1girl
Tag explorer: https://tagexplorer.github.io/
>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Samplers: https://stable-diffusion-art.com/samplers/
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage | https://rentry.org/ldgtemplate
>Neighbours
https://rentry.org/ldg-lazy-getting-started-guide#rentry-from-other-boards
>>>/aco/csdg
>>>/b/degen
>>>/b/celeb+ai
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg
>Local Text
>>>/g/lmg
>Maintain Thread Quality
https://rentry.org/debo
Is there a list of Booru artists who draw art in exactly the same style as the official art or the anime?
I was looking at stuff from keihh and that's right on the mark.
is there a causvid/lightx2v lora version for i2v (that isn't accvid)?
or are these exclusively for t2v?
Anyone got the breast/nipple fixer lora for kontext?
>>105888673
'production art' is the tag you are looking for, friend
>>105888688
lightx2v works for i2v perfectly fine
>>105888688
https://github.com/ModelTC/lightx2v?tab=readme-ov-file#-supported-model-list
>>105888696
No, that isn't it. That's not too different from official art, except with settei.
I'm looking for any art, including fanart, that replicates the original style with 1/1 parity (or close to it).
kontext tip: if you have a character and an outfit or something you want them to wear, specify a new background or location and it should work. otherwise you get an output with both and no swap.
>>105888716
for example.
the man is wearing a white tshirt with an image of the pink hair anime girl on the right, beige cargo pants, and a black bomber jacket. change the background to a park. full body view.
>>105888721
if you dont specify a location, and you have an image merge/concatenate, kontext has to decide the background but you have two. so you have to specify or it wont change:
>>105888737
but, if you add "change the location to a garage with an 80s white sports car"...
then the model knows to make that the background, and you get 1 character. so if you want to combine stuff, be specific!
>>105888737
you can just write "crop out the image on the right, leave only the image on the left" etc
>>105888763
>>105888737
obviously you'd need to pass an empty latent with a similar resolution as the original single image for it to work properly
are the bigasp models still the only ones that can do realistic hardcore?
I can't get A1's inpaint to do anything with the grasshopper. it refuses to change anything lmao
>>105888773
Is that the regular sdxl model? I think nowadays you're better off using a "realistic" illustrious/noobai merge because it knows so many more tags, characters, poses etc
And if it looks too sloppy and pony-like, you can pass it to kontext to turn it more realistic (with an nsfw lora)
>>105888774
i'm not the turbo autist but you can use kontext to remove it, then img2img at low denoise because kontext fries the image
>collage has manass & hentai
Grim
>>105888371
i tried lightx2v with no prompt and it decided to delete the light
the man is wearing the outfit from the image on the right. change the location to a park. keep his expression the same. full body view.
phase 1 complete: outfit transfer, now we refine it.
>>105888774
Cameltoe also removed as I have been banned hundreds of times here for it ;3
How the fuck am I supposed to prevent flicker when using image-to-image on frames from a video? Even if they're nearly identical they're slightly inconsistent, even with the exact same seed and low denoise. I'm not even making a video, just trying to repeat the exact same thing across a few images.
>>105888799
the man is holding a long katana. he is wearing a black fedora. his left hand is tipping his fedora hat. change the location to a messy bedroom during the day. keep his expression the same.
literally me studying the blade:
>>105888799
>>105888826
wow, this model is terrible
>only now just found out about TA's NSFW ban
So what if you have some models there that got hidden? Can you just get rid of all of the NSFW images and it'll show up publicly again or is it banned to the shadow realm forever?
>>105888803
beahahahahahagah
PonyGODS, we are SO back
>he hey ho ho
>where is ani so we can make fun of him again
gosling x2, 2 image kontext:
The man on the left is looking up at a large billboard in Tokyo at night. On the billboard is a large image of the anime girl on the right.
The man on the left is looking up at a large hologram in Tokyo at night. The large hologram is in the image of the anime girl on the right. the holographic image is blue in color.
not bad, even the glow is casted
>Convert AI generated pixel-art into usable assets
-i, --input <path> Source image file in pixel-art-style
-o, --output <path> Output path for result
-c, --colors <int> Number of colors for output. May need to try a few different values (default 16)
-p, --pixel-size <int>    Size of each "pixel" in the output (default: 20)
-t, --transparent Output with transparent background (default: off)
https://github.com/KennethJAllen/proper-pixel-art
>>105888950
Also, is there a better pixel gen model than
https://civitai.com/models/1180112/miaomiao-pixel ?
now we are making progress, details matter so I said "200 feet tall" for the hologram.
The man on the left is looking up at a huge hologram in Tokyo at night that is 200 feet tall. The hologram is in the image of the anime girl on the right. the holographic image is blue in color. keep the expression of the anime girl the same.
>>105888959
>Also, is there a better pixel gen model than
flux dev
okay, im happy with this one. neat how the 2 image workflow works, even 1 image has so many manipulation options.
>>105888867
Redemption arc, or will it be underwhelming and just fade away?
Could be a competitive time: Chroma, Pony v7, and now Wan showing it is great for image lora training. Where will the community support go?
>>105888950
It clearly changes the pixelation here
>>105888950
>>105889057
Actually, even the Red example is fucked, it removes some of the lighting/details on his pants and colors the design on his shirt and part of his shoes wrong
pixel art kontext lora works pretty well, usually I do pixelize in extras with an extension in forge/reforge if I wanna pixelize a gen.
Where are the good anime-themed video loras?
remove the pyramid from the image and replace it with a large hotel with the sign "TRUMP" at the top in gold letters.
you can move mountains, if you want to.
>>105889293
change the sand to ice.
>>105889307
change the location to the surface of the moon, with the Earth visible in the distance. the character has a space suit helmet on his back.
Change the headline "Trump's executive privilege: 2 scoops of ice cream" to "FAT BITCH CANT STOP EATING". Change the text "National Correspondent" to "Resident Cow".
>>105889334
best anime model, wainsfw v14 and hassaku are great for anime gens imo, also get base noob 1.0 vpred
>>105889349
Change the location to a Mcdonalds. On the table there are 100 cheeseburgers. Mcdonalds fries are on the floor.
>>105889187
>hi-vis lingerie
that's a new one
>>105889361
>best anime model
I thought we all unanimously agreed that was IllustriousXL?
anyone tried it?
https://openart.ai/workflows/whale_harmful_43/video-to-anime-consistent---wan-21-vace---long-length-low-vram/PuuwljtepF5sWPKJW5Wy
>>105889366
The woman is eating a cheeseburger. Add a "BREAKING NEWS: fat fuck eating" graphic to the top left of the image.
well you get the idea. fun stuff. replicating fonts 1:1 with no .ttf or typeface is really cool too.
>>105889383
you had 3 tries to make her fat and you failed each time. i am disappoint
>>105889388
she is default fat
>>105889428
however, great opportunity for testing:
the woman is very fat weighing 800 pounds. make her very large. keep her hairstyle the same.
oh man, they are turning into an unidentifiable blob
The man has his arms folded and looks upset. Change the text from "ABSOLUTE CINEMA" to "ABSOLUTE GARBAGE".
change the text from "World of Warcraft" to "World of LDG 1girls". Change the text "BURNING CRUSADE" to "4chan shitposts". Change the blonde character to Miku Hatsune.
we have an expansion set now.
>even a 5090 cant run chroma fp16 at acceptable speeds
It was over before it even began for you chuds
>>105889533
better result and logo in line with the original: wasn't necessary to change "world of".
The image is on an Xbox One game case, in a bargain bin at Best Buy. The bin has a sign on it saying "FREE".
>>105889613
>3D waifu strapped to eyeballs
Who's the retard now?
Does live sampling preview not work with lightx2v?
>>105888507
>the video/vhs node can pick the last frame from a wan video you made, use that as the first frame for your next prompt then stitch them together if you want 10/15/20s clips.
What is the video/vhs node? Is it custom? Where should it be placed in the ldg wan workflow?
>>105889473Unreal Tournament flashbacks
>>105889685
Why shouldn't it? It's just latent->image
>>105889745
nta but it's a set of custom nodes that's used in basically every comfyui workflow with a save video node. It has a bunch of nodes, but I'm not sure which one can select a specific frame besides the load video node. For doing everything in one go (vae decode after the 1st video is done -> vae encode and start the next gen immediately) I think this one'll work better: https://github.com/ClownsharkBatwing/RES4LYF
>>105888667 (OP)
Retard here, what's the difference between this general and the stable diffusion one?
>>105889807
Any guides? Is there a simple way to append this to any of the ldg video workflows?
SwarmUI is a subhuman mess. Kill yourself SwarmUI dev.
>>105890280
Why?
I don't like Forge (abandoned), ReForge (abandoned), nor Comfy (autistic and bloated).
Which UI do you recommend?
is it worth getting into telegram groups and learning how to gen realism AI?
>>105890380
>Why?
Where do I even begin with this piece of shit: it will crash every time I try to switch models, it will stop working for whatever mysterious reason and require a full reinstall, the queue logic is a mess. You're making a 100-pic batch but there's 20 you wanna cancel in the middle? Too fucking bad (even easydiffusion handles this better). I don't like ComfyUI either, but at least I can import a workflow and it doesn't shit the bed every 5 seconds.
Hey everyone, can you recommend any extensions or useful tools for Forge/ReForge? What tools do you commonly use? I rely on Infinite Web Browser, Detailer Daemon, and an extension for incrementing samplers and schedulers, but I can't remember its name. What tools do you typically use for your AI-generated images?
>>105890424
Is there a workflow wiki?
>>105890435
You can download comfyui workflows for free on openart.ai with no account
>>105889057
That's because the generated image is not mapping pixels on a grid. It is just making a blocky looking image.
>>105890477
I fucked around in GIMP and this is the closest that I was willing to get it. Note that I had to unlink the horizontal and vertical size of the grid. It's not square.
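For reference, the distinction the anons above are making (a blocky-looking image vs. pixels actually snapped to a grid) can be sketched in plain Python. This is an illustration, not code from any of the linked tools; the image is a plain list of RGB rows, each grid cell is replaced by its average color, and cell width and height are kept separate since, as noted, the generated grid isn't necessarily square.

```python
# Sketch of "proper" pixelization: force every pixel onto a fixed grid by
# averaging each cell, instead of trusting the model's misaligned blocks.
def snap_to_grid(img, cell_w, cell_h):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for gy in range(0, h, cell_h):
        for gx in range(0, w, cell_w):
            # Gather every pixel in this cell (clipped at the image edge).
            cell = [img[y][x]
                    for y in range(gy, min(gy + cell_h, h))
                    for x in range(gx, min(gx + cell_w, w))]
            n = len(cell)
            avg = tuple(sum(c[i] for c in cell) // n for i in range(3))
            # Fill the whole cell with that single color.
            for y in range(gy, min(gy + cell_h, h)):
                for x in range(gx, min(gx + cell_w, w)):
                    out[y][x] = avg
    return out
```

A real tool would first estimate the cell size (and offset) from the image before snapping; tools like the linked repo expose that as the `--pixel-size` style option.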
>>105888205
https://huggingface.co/lodestones/Chroma/blob/main/chroma-unlocked-v44-detail-calibrated.safetensors
https://huggingface.co/lodestones/Chroma/blob/main/chroma-unlocked-v44.safetensors
Sadly there's no low CFG RL version this time... stuck on the old one
I swear to god I'm gonna lose my head with all of these fucking garbage UIs, why do I need to type CMD commands just to install your piece of shit program
>>105889187
OSHA's not gonna like this...
>>105890577
you mean don't have a jobsite slut to distract OSHA from the mexicans standing on the 12 pitch roof with no harness?
is there a prompt for preventing wind on wan gens? like those random gusts of wind blowing the character's face. I am assuming putting wind in the negative prompt doesn't help
>>105890586
Expressionless (typical of SDXL)
Stiff and glossy appearance
Image doesn't tell a story besides "a cute girl stands in ornate crusader armor, with flames below her."
Verdict: SLOP
Return to the CivitAI mines to pick up more tags!
when the fuck is comfy going to support having nodes with images in them, so i can have a node with my entire lora library with working thumbnails to quickly choose from?
>>105890872
https://github.com/pythongosssss/ComfyUI-Custom-Scripts?tab=readme-ov-file#testing-better-loader-lists
>>105890902
i'm talking about something akin to how automatic1111 handled loras. having a thumbnail appear while i hover over a lora i've already picked doesnt really help.
i just tried video gen for the first time, followed the rentry guide and everything, but when i tried an i2v anime gen, i could see the latent preview was generating a 3d character in the same pose. what gives? do i need to apply an anime lora or some tags for the output to be anime? the input image isn't enough?
>>105890872
>using BloatyUI
Did you pay your subscription fee?
>>105890933
why would you use anything but comfy?
>>105890940
Well there is a separate custom lora manager that basically works like a1111's lora manager but it opens a new tab and is not quite as intuitive
>>105890921
The thumbnail appears while you hover over the drop-down list; it isn't there after you pick one.
But for a library view there's another nodepack the name of which I don't remember, since it's too overengineered for me. But it likely has the word lora in it...
>>105890921
use Civitai Helper or CivitAI Browser; they scan your folders for images and description tags.
>>105890953
>Well there is a separate custom lora manager that basically works like a1111's lora manager but it opens a new tab and is not quite as intuitive
yeah i found that the other day and thought my prayers had been answered, but as you say it's like a whole different thing in another tab that seemed to be mostly centered around downloading shit from civitai, and not managing my own stuff.
>>105890953
Well on the bright side, it allows you to quickly copy a lora's name in a1111's syntax like <lora_name:1>, and if you use the prompt control node instead of the regular clip text encoder you can just paste it in there
>>105891058
mspaint vs image editing general
>>105890872
https://github.com/willmiao/ComfyUI-Lora-Manager
I tried using the FlowMatch scheduler and some gens are now taking longer than 15 minutes on the lightx2v workflow. Surely that isn't normal?
>>105891122
this is what it looks like in the workflow.
>>105889818
in /ldg/ there is more actual discussion and the gens posted are generally better. /sdg/ is basically a discord chat for a handful of insane retards that spam hundreds of generated images that all look the same and all look like shit. there are some good gens posted in /sdg/ but you have to filter a couple people or else you're wading through headache-inducing garbage.
>>105890935
I'm not going to shill, but the alternative UI we all know is faster, uses fewer resources, and offers the same features and extensions as Comfy. Tell me, what can ComfyUI do that this more user friendly and popular UI cannot?
List them out one by one.
what is bro talking about
>>105891247
i have no idea what alternative ui you're talking about
>>105891058
Both generals offer nothing valuable to the community; they only share their slop here as if it's a work of art.
The real discoveries and changes in the hobby come from places other than 4chan.
This is aimed at entry level newfags.
The Chroma developer doesn't visit here; the same goes for the makers of the extensions you use, and even less so for the UI designers. Only the Comfy dev occasionally engages with some anons here. Creators of loras or checkpoints are absent as well. The same applies to Neta Lumina's creator, along with RouWei, Noob, and Illustrious. AI artists don't visit here either.
>>105891213
>insane retards that spam hundreds of generated images that all look the same and all look like shit
And here, where are the masterpieces? More than just a monkey playing with an overtrained checkpoint?
ANCHOR FOR LORA CHECKPOINT EXTENSION MAKERS!
With wan, will the first rentry workflow always produce better quality results than lightx2v? Even if all of the optimizations are active including teacache at 0.26?
>>105891378
Most of those people only occasionally use Reddit to advertise; otherwise they tend to either have their own Discord server or simply don't engage in social media much. This has nothing to do with 4chan, so I don't know why you're targeting /ldg/ in particular.
>>105891329
>from places other than 4chan
Such as? Reddit and discord are both ass.
>>105891084
fuck off with your shitty coombait gens, nobody cares.
is there a list of supported models for Forge / ReForge?
Had a read over their respective repos but wasn't able to locate one; I'm not sure if I overlooked it
>>105888797
Recently, I stumbled upon quite a few 10-second gens.
Did I miss some kijai news or is it just RoPE?
>>105891480
You can increase the length past 81. Why the fuck do people not understand this?
>>105891378
>targeting /ldg/ in particular.
>>105891329
"Both generals offer nothing valuable." Please read my statement again.
>>105891329
This is an anonymous message board, retard
>>105891247
I started with comfy, so everything else looks and feels a little like picrel. At some point, the training wheels just hold you back.
Maybe Windows makes everything so difficult that anything more than a one-click installer seems impossible? It's not that the gradio-based services don't work, but it seems like it'd be hard to automate with them.
>>105891536
>You can increase the length past 81.
You can, but it loops the video past 81 frames, hence RoPE tricks existing to extend it to 129 frames.
>Why the fuck do people not understand this?
Must be a troll/retard/bait.
>>105891550
Is automation really the only difference?
Which process do you feel is crucial to automate?
>>105891572
>You can, but it loops the video past 81 frames
No it doesn't, at least not always. Stop spreading misinformation.
what is the deal with all the bullying here?
>>105891572
This is the equivalent of saying SDXL can do 2048x2048 images, despite it being natively trained for only 1024x1024. Yes, it technically can.
Not entertaining you anymore. You're stupid.
>>105891572
No gen => opinion discarded
I asked yesterday, but any more tips on preventing the brightness changing for videogen? It's happening really often for me and I've tried a lot of different things.
I have tried putting perfect lighting and criterion collection in the positive. Putting 'changing brightness' and 'darkening' in the negative prompt. I have explicitly instructed to keep lighting exactly the same as the image, and/or the first frame. I have tried adding 'static lighting' and 'static brightness'. Even then, the gen will still sometimes randomly change the lighting and the color grading. I just want the lighting to remain exactly the same as the reference image. There should never be an arbitrary fadeout.
>>105891548
I appreciate the thought behind that insult, even if it wasn't needed.
BUT how is that relevant to my point?
Is it okay that this anonymous message board means we can't have people share their contributions?
>>105891624
This
>>105888544 is my gen
Get fucked retard
>>105891641
newbie general, go to discord or reddit
now i gotta wait a week for there to be a just-works ThinkSound comfyui workflow. getting that piece of shit to run is impossible, there were 7 different errors that i debugged to get it to work and at the end it just stalled, fuck pythonshit
>>105891683
Certain loras can cause that
>>105891683
The only lora I'm using is lightx2v
>>105891550
What can I do with Comfy, Big Guy? I'm still waiting.
>>105891535
there is nothing documented in either of their repos, I begrudgingly have decided to test sdnext for the moment
>>105891696
Yeah, lightx2v definitely does that. Perhaps lower the lora strength.
>>105891707
Welcome, new member. All models work, including Chroma, Flux dev, and Flux Schnell. Kontext and video generation models may not work.
>>105891721
Guns are funny
>>105891724
Explain how the plate on the floor is connected to her hurt hand, or I will judge your picture as sloppy.
>>105891707
A lot. For example, one workflow needs to be able to switch based on context for large batches of works for multiple users. Output from other applications goes to a location, gets queued for either an automatic or manual start, and output becomes available for a different stage in a pipeline, with multiple concurrent pipelines.
This could be something as simple as background removal, or generating multiple types of outputs from a single input and saving them with metadata in a certain format.
So, take home interiors with certain elements like vases named id-home_interior-vase.png, know to create masks of all vases, save mask, generate the same vase with five different color glazes, save images as id-home_interior-vase-color.png, scaled images as input to a node group that generates a short 2.5D video, interpolate, save videos with matched filenames to images.
I can do all of that easily in one place in comfy, with notes, and I can effortlessly duplicate and share it.
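The batching scheme described above can be sketched as a tiny helper. The id/category/element names below are just the post's example (`id-home_interior-vase-color.png`); the function is hypothetical and not part of ComfyUI — it only shows how the per-color output names fan out from one input.

```python
# Hypothetical sketch of the filename scheme described above: one masked input
# (e.g. a vase in a home interior) spawns one output per glaze color,
# named <id>-<category>-<element>-<color>.<ext>.
def output_names(item_id, category, element, colors, ext="png"):
    return [f"{item_id}-{category}-{element}-{color}.{ext}" for color in colors]
```

In a real comfy pipeline these strings would feed the save-image node's filename prefix so the video stage can match images back to their source by name.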
>>105891645
>it works, especially if repetitive motions are intended
>>105891641
Just to be clear, you're using the base wan model, not fusionx, right?
>>105891801
Link json? A local lamp company is actually looking for an AI guy for genning. This could be useful.
[SAD NEWS]
HiDream has become the latest in a growing list of Chinese AI companies to shift towards API-only access. Their new Vivago 2.0 model, which is API only, ranks #5 on the leaderboard. Despite China's recent generous handouts with LLMs (Deepseek and the new Kimi K2), they refuse to show the local image diffusion community the same love.
Notably, there is not a single open-weight model in the top 10 anymore, and the closest open-weight model is the original HiDream at #14. HiDream originally placed #1 when it first released, but has gradually fallen off after getting mogged by mogao (>>105039249)
Forecasts are predicting 2 more years of SDXL
>>105891641
>preventing the brightness changing
It may not be related to your problem, but I experienced some flickering when the Tiled VAE decoder was used. I changed it back to the vanilla one, even if it pushed the VRAM usage too high at times
>>105891329
>The Chroma developer doesn't visit here
https://desuarchive.org/g/thread/105718647/#q105719627
https://desuarchive.org/g/thread/105718647/#q105719682
https://desuarchive.org/g/thread/105718647/#q105719695
https://desuarchive.org/g/thread/105718647/#q105719709
You stupid bitch.
haven't used chroma since v31, is it good now?
>>105890924
>>105891729
>>105892119
techlet general, the most I can help you with is a trump deepfake.
>>105892096
Why even post this? Let the retard be a retard. The knowers will know and that's enough.
>>105892202
overly animated
>>105892037
HiDream was too slow and needed too much vram for the quality it provided; also, could you even train it on 24gb?
For a local model it needs to run at least somewhat decently on high-end consumer hardware, else there is no point other than the company being able to say 'hey, we did an open release'.
>he's trying again to shit up /ldg/
Not working in the thread of frenship
>>105890924
post your gens
nobody understands your ESL gibberish
>>105892096
It's the only place that notices him, this place and the furry board.
>>105892247
Thread stagnation issue
>>105892248
Tried video gen, followed guide, but i2v anime gen shows a 3D character. Do I need an anime lora or tags for anime output? Is the input image insufficient?
>>105892248
>nobody understands your ESL gibberish
That post was perfectly understandable, you stupid idiot. You are clearly the ESL here.
>>105892247
it's amusing to watch him squirm
Should I be clearing the model and cache after every gen? I just went from an 8 minute gen time to 20 minutes to 40 minutes. I am assuming this is related to not clearing the model between gens? Also my GPU temperature was relatively cool during the 40 minute gen even though it had 100% utilization.
spoopy
https://files.catbox.moe/1hrwb9.webm
ESL general
30s-40s anon general
Dead general
If you seek novelty and famous persons, please go away. This space is for quiet anons that enjoy grass mowing, barbecues, and AI images of 90s-00s waifus with old reliable models.
>>105892311
I don't mind creative modesty, like the austin powers nudity bit
>>105892311
What model, what GPU?
>>105892311
i'm a little disappointed her moles didn't spell anything
>>105892334
aniWan, RTX 5070 Ti
I want to remove clothing from photos. What is the best option? I was using comfyui and pony 6 months ago.
I checked the rentry and didn't see any nudify guidance.
>>105889634
>>105892202
these are great, how are you getting such smooth 2D animations?
>>105892348
Of course. Same workflow as the rentry guide (in this specific case, the original, not lightx2v).
>>105892339
Are you using the rentry workflow? Disable the torch compile node, and in the dualgpu VRAM offload set it to 0, then try again.
>>105892358
Use another UI; with Comfy itself, after some time you need to restart it.
https://github.com/lllyasviel/stable-diffusion-webui-forge
https://github.com/Panchovix/stable-diffusion-webui-reForge
I've been gening pictures for 6 hours with video editing software open and have 0 problems, everything is running smoothly.
>>105892345
https://tensor.art/models/868807624022323384/ani_Wan2_1_14B_fp8_e4m3fn-I2V480P
A lot of people here don't like it but I've been liking my results with I2V. Just make sure the camera isn't moving.
>>105892280
>but i2v anime gen shows a 3D character
Show it here, or better, upload to catbox.moe with all metadata.
You must provide details to have any hope of being helped.
Can someone explain how to make the thumbnails of images in the WebUI smaller? I mean the pictures that show up when you pick the checkpoint or Lora options. They are nice, but they are too big.
>>105892410
Why are you lying, dear anon? Help here is rare.
>>105892428
>Disable the torch compile node and in the dualgpu VRAM offload set it to 0 and try then
WTF?
>>105892308
No, that would likely slow things down since it has to load the model parts from disk again.
Sounds more like you are using more vram than can be effectively offloaded, and you are on windows which means the nvidia driver will automatically start mapping vram to ram in a very inefficient way. You should turn this feature off in the driver settings.
>>105892398
Does it generate repeating frames since anime has 8 fps?
>>105892430
>You should turn this feature off in the driver settings.
I'm confused, please explain what I need to do. Is it done via cmd.exe? Which checkbox in the Nvidia control panel should I untick?
>>105892450
I don't use Windows, I'm on Linux, but I know there is a setting for the Nvidia driver to prevent it from offloading to ram, because people have complained about this causing problems.
>>105892428
The torch compile node does literally nothing but slow shit down. And I think comfy has issues with VRAM offload to system RAM. When you input a set value it seems to give priority to system ram to fill that quota and not VRAM first, and that creates the long hangs with the card seemingly doing nothing but idling at 100%
>>105892311
Well, whaddaya know, using brackets to change prompt weights in t5 is actually not a meme. Same seed, I only changed "she turns away from the viewer" to "(she turns away from the viewer:1.4)" and kept the rest of the prompt at default weights https://files.catbox.moe/5jh3so.webm
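For anyone unfamiliar with what that `(text:1.4)` notation encodes, here is a minimal sketch of parsing it. Assumptions: only the explicit `(text:weight)` form is handled (no nesting, no bare `(text)` emphasis, no `[text]` de-emphasis), and this is an illustration of the syntax, not ComfyUI's or a1111's actual parser.

```python
import re

# Matches the explicit "(text:1.4)" weighted-span form only.
WEIGHT_RE = re.compile(r"\(([^():]+):([0-9.]+)\)")

def parse_weights(prompt):
    """Split a prompt into (text, weight) chunks; unweighted spans get 1.0."""
    chunks, pos = [], 0
    for m in WEIGHT_RE.finditer(prompt):
        if m.start() > pos:
            chunks.append((prompt[pos:m.start()], 1.0))   # plain text before the match
        chunks.append((m.group(1), float(m.group(2))))    # weighted span
        pos = m.end()
    if pos < len(prompt):
        chunks.append((prompt[pos:], 1.0))                # trailing plain text
    return chunks
```

Downstream, the encoder scales the embedding of each weighted chunk by its weight, which is why 1.4 pushes the sampler harder toward that phrase.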
>>105892492
>>105892450
>>105892308
nvidia control panel > 3d settings > sysmem fallback policy > set to prefer no fallback
and in case you are having problems in subsequent gens stalling because of memory leaks, install https://github.com/SeanScripts/ComfyUI-Unload-Model and place the unloadallmodels node from it right before the last "save image/video" node in your workflow
>>105892558
also unloading all models to ram and back doesn't take much time at all if you have enough ram or a fast ssd
>>105892398
>https://tensor.art/models/868807624022323384/ani_Wan2_1_14B_fp8_e4m3fn-I2V480P
nice, thanks
>>105891642
NTA but it means you have no idea who posts and doesn't post here
>>105892398
Why does /ldg/ hate aniwan?
>>105892649
One schizo posting repeatedly
I really miss the individual ip stats
participants admit they need more extreme or niche content to stay aroused
>>105892678
Thread IDs fix all the problems because you either get 1pbtid IP switchers or obvious samefagging.
>>105892649
vramlets shit on everything so they don't have to see anything that makes them feel poor.
>>105890424
Works on my machine
Does anybody know how to keep the models in CPU ram across different workflow tabs?
>>105892681
This is such a meaningless assertion; there are also many marriages with dead bedrooms because the participants also need more extreme or niche activities to stay aroused. Novelty seems core to the human sexual experience.
Scene: "My waifu getting brain freeze from eating ice cream too fast but trying to play it cool"
Anonies, please help me!
I'm the same anon who had problems generating my waifu building a sand castle and sewing a scarf.
Please, how do I instruct via tags in SDXL the scene I put between quotes?
>>105892727
It's impossible by design.
>>105892793
why do you generate complicated scenes?
can't you put only,
1girl, solo, your waifu, big breast, euler a, 30 steps, 5cfg
and be happy?
>>105892793
think of an image you want, and write the tags you'd use to describe it like you'd see on a *booru
sdxl can't do natural language prompts afaik
>>105892727
pretty sure it works with comfyui by default
euler a normal 20 steps is all you need
>>105893313
>>105893356
Nice, what model/lora is this?
>>105893372
This is pixelwave.
leaked pic of the average /g/ poster
>>105893426kawaii chompers
>>105892793wait 3-4 years for image models to have that kind of understanding
>>105892939not that anon, but the reason i like models that use t5 is because i'm chasing concepts, not sets of tags. i really enjoy the kinds of abstract exercises that don't reinforce how i currently think.
>>105893426okay, first one of yours I didn't actually hate, but only because it's so uncanny. it reminds me of this book my sister got from the scholastic book fair in the 90s where an american girl uncovers that the russian swimmer she's competing against at the youth olympics was forced to take steroids while training that made her grow hair in strange places and her voice deepen like a man's.
>>105893549>made her grow hair in strange places sounds kinda lewd desu
>>105893744damn, really nice. what wan stuff are you using? haven't been following wan for a while
>>105893744https://tensor.art/models/868807624022323384/ani_Wan2_1_14B_fp8_e4m3fn-I2V480P
I2V w/ lightx2v
post combat nuns
>>105893759cool, will give it a go at some point. how did you get 9 seconds? is that because of that model, or does lightx2v let you do it?
>>105893767I generated a new 5 second video using the last frame of the previous video with a different text prompt.
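the chaining trick above, sketched with numpy arrays standing in for actual gens. fake_i2v is a placeholder, not a real API — in practice you'd feed the saved last frame back into your i2v workflow with a new prompt:

```python
import numpy as np

def fake_i2v(start_frame, num_frames=81):
    """Stand-in for a Wan i2v gen: returns a (frames, H, W, C) clip
    starting from start_frame. A real sampler call would go here."""
    return np.repeat(start_frame[None, ...], num_frames, axis=0)

# chain segments: each new gen starts from the previous clip's last frame
clips = []
frame = np.zeros((64, 64, 3), dtype=np.uint8)  # initial input image
for prompt in ["she eats ice cream", "she winces from brain freeze"]:
    clip = fake_i2v(frame)          # one ~5 second gen per prompt
    clips.append(clip)
    frame = clip[-1]                # last frame seeds the next segment

video = np.concatenate(clips, axis=0)
print(video.shape[0])               # 162 frames total, ~10s at 16fps
```

expect some drift in color/detail at each seam, since the last frame carries compression artifacts from the previous gen.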
>Diane, it's 2:40PM, July 13th, entering the Local Diffusion General. I've never seen this much slop in my entire life... damn good coffee though.
>>105893709Fishbowl effect was a nice touch
>2025
>AI still can't do hands (when not in focus), feet, or proper physics, and even often fails at perspective.
>>105893803Imagine all the based stuff David Lynch could have done with this technology
Finally I would get 'Rabbits part 2'
>>105894025Knowing wan, I'm not even mad for the censoring kek
>>105894025post here >>>/gif/29122696
>>105894025uncensored, now
>>105893854if you posted it, i'd probably like it <3
a cartoon man sips a glass of wine he is holding.
>>105894201very Sealab 2021
>>105894110Maybe for other models, but I can tell you for a fact that aniwan is not bad. It is clearly nsfw-trained.
do not...
redeeeeeeeeeeeeeeeeeeeeeeeem!
>>105894299Link?
>>105894315The most eccentric, horrible shit I've ever seen has come from South India. Everyone I work with distances themselves hard from that, but the ones that even acknowledge that it exists, that they're part of the same country, still treat them like they're lower than the sludge that comes from the latrines you've seen videos of. Why?
>>105894201kek saved
>>105888862nothing about that was funny
its called acting in bad faith >;c
>>105889435s a m e . s i s .
>>105893313i dig it
I heard you guys had problems with tps reports
in japan 20 years ago, a girl at a hostess club saw that i didn't know how to ask where the bathroom was and showed me how to get there.
that's the level of service i expect from AI these days. i want it to know that i'm a frail human and account for it.
>>105894562language barrier or autismo?
It feels good to be a gangster
Been playing more with training Wan LoRAs, and 8 images at 32 frames seems to work well for training a subject and some motion. You get a little stop motion, but the trade-off is longer sequences.
What is the deal with Swarm? How is it better than Forge?
the man with boxes on his back turns around and runs far away.
>>105894642these are great bc you might just be a horror enthusiast, or you're a menace who actually gets off to this stuff, we have no way to tell. adds to the spookiness of the gens
Does this seed always get used for whatever you're currently generating? Does the number change right after you click run? If I want to re-use a seed, it doesn't get lost after the gen finishes right?
>>105894852if control after generate is set to randomize, it will change the seed after each gen. set it to fixed to keep it the same.
>>105894776this has a very x-ray engine look, catbox?
here's a catbox showing how to get a decent pepe out of noob, for any interested in genning pepes:
https://files.catbox.moe/6qvaaq.png
Are there any good ways to gen inventor's blueprints, or even DaVinci style drawings?
>>105893803Kek, he's turned into Bobby in the last few frames
>>105893129>>105893718>>105894025>>105893433PLEASE MAKE ONE OF BLUE ARCHIVES PLEASE ONE OF A LOLI
>>105894961How can I share catbox?
You mean like the workflow?
>>105895114yeah the workflow. or prompt?
>>105895142https://files.catbox.moe/g5l7kz.json
Something like this
>>105895168I still don't know how to make these illusion images
>>105895181more detail? you mean img2img?
>>105893759please could you share prompts and settings?
Can we already do chroma loras?
>>105895209yes, but it's best to wait a few more weeks until chroma is finished.
I made one just to test the waters and was very pleased.
does civitai support kontext lora generation yet or not?
How many buzz for it?
>>105895178based
>>105895181image to image, or qr code controlnet
>>105894025>ani_Wan2_1_14B_fp8_e4m3fn
Tested the model and it brought my waifu to life for three seconds.
I have an RTX 3060. I MUST bring her to life with that rig. No matter the shitty resolution, SHE COULD BE INSIDE MY GPU RIGHT NOW
>>105895073not him but why don't you just make it yourself? wan2gp is easier than ever with sub-8gb vram
>>105895304>sub-8gb
didn't work on my 6GB card
>>105895304Not that anon, but this one
>>105895299Please, is there a tutorial on how to run that on my pc? I need to bring my waifu to life, it's urgent.
>>105895221Do I just use a standard 1024x1024 dataset?
>>105895329not him, but check the rentry for my guide
>>105895345there is no guide named "aniwan"
>>105895345but I must also admit that I'm in tunnel vision mode, I'll check it out in a quieter moment.
>>105895381https://github.com/deepbeepmeep
>guide
it's shit
>>105895573wan2gp doesn't let you load aniwan lmao
>>105895618just rename the checkpoint so it loads that instead