Search Results

Found 1 result for "ded9b4c3715fd62f446ac64794f97886" across all boards (MD5 search).

Anonymous /g/105998691#106002986
7/23/2025, 11:42:22 PM
>Neta Lumina UPDATE
– Fine-tuned, high-quality anime-style image-generation model (Diffusion Transformer) built on Lumina-Image-2.0
– Excels at illustration, posters, storyboards, character design, etc.
– Leverages Gemma text encoder for strong prompt understanding and multilingual support (EN/JP/ZH)

• Key Features
– Optimized for diverse styles: furry, Guofeng (traditional Chinese style), pets, and more
– Understands both natural language and Danbooru-style tags
– Supports complex, multilingual prompts (best in ZH/EN/JP)

• Model Variants
– Neta-lumina-v1.0 (official release, best overall)
– Neta-lumina-beta-0624 (α-test; 13 M images, 46k A100 hrs)
– Private alpha versions (apply on HF page)

• System Requirements & Runtime
– ComfyUI only (latest version)
– ≥ 8 GB VRAM

• Installation Options

Component release (three files; placement sketch below)
– UNet: neta-lumina-v1.0.safetensors → ComfyUI/models/unet/
– Text Encoder: gemma_2_2b_fp16.safetensors → ComfyUI/models/text_encoders/
– VAE (16-ch FLUX): ae.safetensors → ComfyUI/models/vae/
All-in-one checkpoint
– neta-lumina-v1.0-all-in-one.safetensors (md5: dca5 …)
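
File placement for the component release can be scripted. Below is a minimal sketch, assuming the three downloads sit next to the script and that ComfyUI lives at ./ComfyUI; the md5 helper is included so the all-in-one checkpoint (or any file) can be checked against the hash published on the model page, since the post only shows a truncated one.

import hashlib
import shutil
from pathlib import Path

# Assumption: adjust COMFY_ROOT to your actual ComfyUI install location.
COMFY_ROOT = Path("ComfyUI")

# Component file -> target subdirectory, exactly as listed above.
PLACEMENT = {
    "neta-lumina-v1.0.safetensors": "models/unet",
    "gemma_2_2b_fp16.safetensors": "models/text_encoders",
    "ae.safetensors": "models/vae",
}

def md5sum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hex md5 of a file, read in chunks so large checkpoints don't fill RAM."""
    digest = hashlib.md5()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

for filename, subdir in PLACEMENT.items():
    src = Path(filename)
    dst_dir = COMFY_ROOT / subdir
    dst_dir.mkdir(parents=True, exist_ok=True)
    print(f"{filename}: md5 {md5sum(src)}")  # compare with the hash on the model page
    shutil.copy2(src, dst_dir / filename)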
• Basic Workflow Nodes in ComfyUI
– UNETLoader, VAELoader, CLIPLoader, Text Encoder, Sampler (wired together in the sketch after the generation settings below)

• Recommended Generation Settings
– Sampler: res_multistep | Scheduler: linear_quadratic
– Steps: ~30 | CFG: 4 – 5.5
– Resolutions: 1024×1024, 768×1532, 968×1322, or other sizes of at least 1024 px
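
Those settings plug straight into ComfyUI's HTTP API. The sketch below wires the basic node chain from the post into a JSON graph and queues it against a local instance on the default port (8188). The node class names, input keys, the CLIPLoader type value ("lumina2"), and the 16-channel latent node (EmptySD3LatentImage) are assumptions about a recent ComfyUI build, and the prompt strings are placeholders; compare against a workflow exported from the UI before relying on it.

import json
import urllib.request

# Minimal text-to-image graph using the node chain from the post and the
# recommended settings (res_multistep / linear_quadratic, ~30 steps, CFG ~4.5).
graph = {
    "1": {"class_type": "UNETLoader",
          "inputs": {"unet_name": "neta-lumina-v1.0.safetensors",
                     "weight_dtype": "default"}},
    "2": {"class_type": "CLIPLoader",
          "inputs": {"clip_name": "gemma_2_2b_fp16.safetensors",
                     "type": "lumina2"}},       # assumption: value varies by ComfyUI version
    "3": {"class_type": "VAELoader",
          "inputs": {"vae_name": "ae.safetensors"}},
    "4": {"class_type": "CLIPTextEncode",        # positive prompt (placeholder text)
          "inputs": {"text": "1girl, masterpiece, best quality", "clip": ["2", 0]}},
    "5": {"class_type": "CLIPTextEncode",        # negative prompt (placeholder text)
          "inputs": {"text": "blurry, worst quality", "clip": ["2", 0]}},
    "6": {"class_type": "EmptySD3LatentImage",   # 16-channel latent; assumption
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "7": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["4", 0], "negative": ["5", 0],
                     "latent_image": ["6", 0], "seed": 0, "steps": 30, "cfg": 4.5,
                     "sampler_name": "res_multistep", "scheduler": "linear_quadratic",
                     "denoise": 1.0}},
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["7", 0], "vae": ["3", 0]}},
    "9": {"class_type": "SaveImage",
          "inputs": {"images": ["8", 0], "filename_prefix": "neta_lumina"}},
}

# Queue the graph on a locally running ComfyUI instance.
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": graph}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode("utf-8"))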

• Prompt Resources
– Prompt guide: https://civitai.com/articles/16274/neta-lumina-drawing-model-prompt-guide

• Roadmap Highlights
– Continual base-model training (reasoning, anatomy, background richness)
– Enhanced tagging tools & LoRA tutorials
– Advanced control/style-consistency features (e.g., Omni Control)

• Extra Resources
– TeaCache repo: https://github.com/spawner1145/CUI-Lumina2-TeaCache
– Sampler & TeaCache guide (Chinese): linked QQ doc


URL: https://civitai.com/models/1612109?modelVersionId=2036419