
Thread 105909674

456 posts 112 images /g/
Anonymous No.105909674 >>105910130 >>105912163 >>105917010
/lmg/ - Local Models General
/lmg/ - a general dedicated to the discussion and development of local language models.

Previous threads: >>105904543 & >>105896271

►News
>(07/11) Kimi K2 1T-A32B released: https://moonshotai.github.io/Kimi-K2
>(07/11) Granite 4.0 support merged: https://github.com/ggml-org/llama.cpp/pull/13550
>(07/10) Devstral Small 1.1 released: https://hf.co/mistralai/Devstral-Small-2507
>(07/10) Reka Flash 3.1 21B released: https://reka.ai/news/reinforcement-learning-for-reka-flash-3-1
>(07/09) Phi-4-mini-flash-reasoning with hybrid SambaY architecture released: https://hf.co/microsoft/Phi-4-mini-flash-reasoning

►News Archive: https://rentry.org/lmg-news-archive
►Glossary: https://rentry.org/lmg-glossary
►Links: https://rentry.org/LocalModelsLinks
►Official /lmg/ card: https://files.catbox.moe/cbclyf.png

►Getting Started
https://rentry.org/lmg-lazy-getting-started-guide
https://rentry.org/lmg-build-guides
https://rentry.org/IsolatedLinuxWebService
https://rentry.org/tldrhowtoquant
https://rentry.org/samplers

►Further Learning
https://rentry.org/machine-learning-roadmap
https://rentry.org/llm-training
https://rentry.org/LocalModelsPapers

►Benchmarks
LiveBench: https://livebench.ai
Programming: https://livecodebench.github.io/leaderboard.html
Code Editing: https://aider.chat/docs/leaderboards
Context Length: https://github.com/adobe-research/NoLiMa
Censorbench: https://codeberg.org/jts2323/censorbench
GPUs: https://github.com/XiongjieDai/GPU-Benchmarks-on-LLM-Inference

►Tools
Alpha Calculator: https://desmos.com/calculator/ffngla98yc
GGUF VRAM Calculator: https://hf.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator
Sampler Visualizer: https://artefact2.github.io/llm-sampling

►Text Gen. UI, Inference Engines
https://github.com/lmg-anon/mikupad
https://github.com/oobabooga/text-generation-webui
https://github.com/LostRuins/koboldcpp
https://github.com/ggerganov/llama.cpp
https://github.com/theroyallab/tabbyAPI
https://github.com/vllm-project/vllm
Anonymous No.105909677 >>105909867 >>105917010
►Recent Highlights from the Previous Thread: >>105904543

--Story generation with DeepSeek V3 through aggressive token filtering and sampler tuning:
>105908126 >105908155 >105908491 >105908244 >105908426 >105908456 >105908597 >105908501 >105908688 >105908801 >105908817 >105908824
--LLM context processing degrades unevenly over long inputs according to Chroma's Context Rot study:
>105907870 >105907974 >105908160 >105908175 >105908181
--Early FP4 inference work shows speedups on Blackwell GPUs but raises hardware lock-in concerns:
>105907082 >105907176
--Evaluating waidrin for structured roleplay storytelling with llama-server backend:
>105904745 >105904766 >105904802 >105904820 >105904844 >105904892 >105904833 >105904941 >105905000 >105905441
--Challenges and limitations of integrating fine-tuned LLMs into gacha and video games:
>105907785 >105907802 >105907864 >105907878 >105907902 >105907962 >105908329 >105908345
--Kimi benchmarks high amid skepticism over model stagnation and benchmark validity:
>105906987 >105907013 >105907092 >105907112 >105907183 >105907299 >105907387 >105907460 >105907791 >105907978 >105907992 >105907017 >105907041 >105907062 >105907090 >105907028 >105907098 >105907477
--Anon defends Llama 3.3 70B for local roleplay and storytelling despite newer models:
>105907827 >105907863 >105907875 >105907939 >105907991 >105908306 >105907879 >105907916 >105907971 >105908827 >105909056 >105909079 >105909099
--Kimi K2's claimed knowledge cutoff date and election hallucinations:
>105907424 >105907447 >105907516 >105907531 >105907639 >105907665 >105907733
--Meta may abandon open-source Behemoth for closed models amid performance and strategy concerns:
>105906298 >105906332 >105906351 >105906359 >105906397 >105906894 >105906923 >105906986 >105907490
--Miku (free space):
>105905722 >105905735 >105905782 >105906037 >105907827

►Recent Highlight Posts from the Previous Thread: >>105904549

Why?: 9 reply limit >>102478518
Fix: https://rentry.org/lmg-recap-script
Anonymous No.105909716 >>105909731 >>105909742 >>105909761 >>105909945 >>105913127 >>105913935 >>105915599 >>105915629 >>105915955 >>105915972 >>105915987 >>105916011 >>105916296
localbros how are we coping with the fact that elon made anime real with grok?
Anonymous No.105909731 >>105909790
>>105909716
Local has already been doing that for a while.
Anonymous No.105909742
>>105909716
it's a complete loss on our side
maybe we should've used our models to create better solutions than just hanging onto the jumbled mess that is ServiceTesnor for two years which doesn't even do mcp or other modern features
Anonymous No.105909743
Tetolove
Anonymous No.105909747
Oh cool he's samefagging
Anonymous No.105909761 >>105909790
>>105909716
Bait elsewhere
Anonymous No.105909765 >>105910302
Anonymous No.105909771 >>105909797
Local does not need anything beyond SillyTavern and the character card v2 standard.
Anonymous No.105909790 >>105909800 >>105909810
>>105909731
proof?
>>105909761
its not bait if i am telling the truth
Anonymous No.105909797
>>105909771
copium
Anonymous No.105909800
>>105909790 (me)
I keep my dick in a box by my bedside, by the way. When I get lonely at night and the urge for a real man to ravage me gets really strong, sometimes I suck on it.
Anonymous No.105909810 >>105909821
>>105909790
>proof?
https://desuarchive.org/g/thread/92164373/#92164373
Anonymous No.105909821
>>105909810
???
Anonymous No.105909867
>>105909677
Anonymous No.105909945 >>105910117
>>105909716
SaaS is simply the future. Access to $100,000,000 machines at a $100 price tag. It is thanks to server clusters that this technology exists in the first place. Local models are like the helicopter your crackhead uncle is building in his backyard. It's customized, it's cool, but it's not reliable and state-of-the-art. If you want the best of the best, you will be subscribing to SaaS.
Anonymous No.105909970 >>105909981 >>105910006 >>105910019 >>105910035 >>105910121 >>105910140 >>105910791 >>105914698 >>105915758
https://huggingface.co/LGAI-EXAONE/EXAONE-4.0-32B

beats both qwen3 32b and 235b moe. are we back localbros?
Anonymous No.105909981
>>105909970
does it have a 3d waifu?
Anonymous No.105910006 >>105910012
>>105909970
The one thing you can praise them for is a track record of day one Llama.cpp support and no fixes required. Too bad the models were shit.
Anonymous No.105910012 >>105910025
>>105910006
qwen3 14b and 32b dense models are god tier
Anonymous No.105910019 >>105910103
>>105909970
Benchmaxxed
Anonymous No.105910025
>>105910012
LG
Anonymous No.105910035
>>105909970
Sex with Exa onee-chan (unnie)
Anonymous No.105910036
te

toes

day
Anonymous No.105910038
I got some masks from LG once. They fit my face pretty well and did a good job filtering flower pollen for my allergies.
Anonymous No.105910103
>>105910019
all of them are
Anonymous No.105910117 >>105915731
>>105909945
Pipe down, Saltman. I will now explain the real situation:
- SaaS is more economical to run, in the same way mainframes were more economical to run back when you pretended nobody needed a PC. Mainframes are dead today; PCs are not uncommon.
DRAM and HBM prices will come down. Jensen won't be able to keep prices inflated at 20-40x margins forever, whether through competition, China, or others.
You're overpaying for those GPUs too. Your price tag is fake, as is the money you claim to be investing; it's all a mirage. If you think a few thousand GPUs are worth that much, you're delusional, and that includes Elon, Zuck and the rest. If you had truly invested in hardware, you'd have found out the real costs are much lower.
- More importantly for you, Saltman, your way isn't how you will get your promised AGI; it's literally impossible. You only get cheaper prices because of batching artifacts during inference. True AGI will require online learning as a bare minimum, which means updating those weights online, and then all those parallel batching benefits go out the window. Local, on the other hand, will be fine with this, the same way we as individuals are fine with having our own brains on our shoulders.
You may cope and say you will just update a LoRA or find something batchable, but at the end of the day, local is the true solution for AGI. There's still a way to go, but you can't fulfill your promises with an inference-only product. Hardware decentralization is also essential so the likes of you don't get too much of an edge; the plebs will have their own hardware. Maybe not as cheap for now, but a few thousand to a dozen thousand dollars is an acceptable cost.
Anonymous No.105910121
>>105909970
>32b or 1.2b
8b bros....
Anonymous No.105910130 >>105910200 >>105910293 >>105911601
>>105909674 (OP)
Knuckles: whitened
Chuckle: darkened
Shiver: spined
Wall: slammed
Eyes: glinted
Ball: parked
Voice: purred
Sway: taintalized
Yup, it's slopkino time
Anonymous No.105910140
>>105909970
Is this actually good for a 32b model?
Anonymous No.105910200 >>105911667 >>105911707 >>105912989
>>105910130
>be magic man
>paralyze {{char}}
>strap her to a chair for my safety (for later when I planned to dispel it)
>tease, bully, piss her off
>{{char}}: *throws hands up in exasperation, despite being paralyzed* "How could you?!"
haha.. grrrrr
Anonymous No.105910269
>>105909332
I think you're wrong, but I'll still be interested in the result if you run your own analysis to determine the optimal min-p (for some model and some prompt). I get very different results on different models. For instance, the same method suggested a min-p of 0.036 for Qwen3 235B A22B, which "feels" reasonable to me, so I don't think the method is wildly broken, and I believe getting a different result on another model reflects actual differences in the models' behavior.
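For anyone unfamiliar with the knob being tuned: min-p keeps only tokens whose probability is at least min_p times the top token's probability, then renormalizes. A minimal sketch of the idea (not any backend's exact implementation):

```python
def min_p_filter(probs, min_p):
    """Keep tokens with p >= min_p * max(p), then renormalize.
    A sketch of the technique only, not llama.cpp's exact code."""
    threshold = min_p * max(probs)
    kept = [p if p >= threshold else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

# With min_p = 0.1, anything under 10% of the top token's probability is cut
filtered = min_p_filter([0.70, 0.20, 0.06, 0.04], min_p=0.1)
```

A tuned value like 0.036 just moves that cutoff lower, keeping more of the tail.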
Anonymous No.105910293
>>105910130
I've seen many spines shivered in my time, but I've never seen a shiver spined.
Anonymous No.105910302 >>105910312
>>105909765
>that near 100% confidence
gay
Anonymous No.105910312
>>105910302
I can't really blame them considering Altman had put out an actual announcement for thursday (before grok4 made him panic)
Anonymous No.105910418 >>105910440 >>105910531
>>105905411
What would it take for desktop pets to be supported on Linux?
Anonymous No.105910440 >>105910453
>>105910418
use case?
Anonymous No.105910453
>>105910440
Mental health support.
Anonymous No.105910489 >>105910492 >>105910542 >>105910545 >>105910563 >>105910597 >>105910634 >>105911681
alternatives to sillytavern?
Anonymous No.105910492 >>105910507
>>105910489
xAI
Anonymous No.105910507
>>105910492
I won't use elontech chud
Anonymous No.105910531 >>105910625
>>105910418
Why do you think it's not supported?
Anonymous No.105910542
>>105910489
mikupad
Anonymous No.105910545 >>105910562
>>105910489
girlfriend
Anonymous No.105910562
>>105910545
Can't afford one
Anonymous No.105910563 >>105910569
>>105910489
What do you want, that ST doesn't offer?
Anonymous No.105910569 >>105910574
>>105910563
lean interface, low loc that I can inspect myself I don't want my api keys stolen by ruskies
Anonymous No.105910574 >>105910582
>>105910569
Be the change you want to see
Anonymous No.105910582
>>105910574
:(
maybe I'll have claude write a local gui client for itself
Anonymous No.105910597
>>105910489
suicide
Anonymous No.105910625 >>105910661
>>105910531
The corresponding feature request is still open
https://github.com/Open-LLM-VTuber/Open-LLM-VTuber/issues/132
while the docs only state support for Windows and Mac
https://docs.llmvtuber.com/en/docs/user-guide/frontend/mode
Does it work regardless?
Anonymous No.105910634 >>105910645
>>105910489
Just vibe code your own. I did.
Anonymous No.105910645 >>105910695
>>105910634
Can I see it?
Anonymous No.105910661
>>105910625
>The project fully supports Windows, macOS, and Linux, and offers two usage modes
Anonymous No.105910695 >>105910760 >>105910771 >>105910904 >>105910943 >>105912996 >>105913037
>>105910645
Anonymous No.105910708 >>105910728
You know, it's kind of funny just how low production value the grok companions thing is. They could've made it so much better but didn't. Maybe Elon hounded them to release it. We'll see how it evolves.
Anonymous No.105910728
>>105910708
Bro, Replika is still making money from NPCs. Compared to that shit it's already a big step up
Anonymous No.105910735 >>105910772
Anyone have examples of people running a large LLM on a server with 1-2tb of ram?

I can find servers on ebay easy enough, but am curious about the performance. Google search isn't helpful since the terms are too vague.
Anonymous No.105910760
>>105910695
I kneel vibe coder-sama.
Anonymous No.105910771
>>105910695
Ewww
Anonymous No.105910772 >>105910799
>>105910735
Napkin math: If you can infer R1 at 8 t/s on a 256 GB DDR4 server, you will get 2 t/s on a 1 TB server. Same bandwidth, more data
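That scaling follows from decode being bandwidth-bound: each generated token streams the touched weights once, so t/s ≈ bandwidth / bytes read per token. A rough sketch (the 200 GB/s bus and weight sizes below are illustrative, and KV-cache reads and compute are ignored):

```python
def decode_tps(bandwidth_gbs, gb_read_per_token):
    """Bandwidth-bound decode: tokens/s ~= bus bandwidth / data read per token."""
    return bandwidth_gbs / gb_read_per_token

# Same 200 GB/s bus: read 4x the data per token, get 1/4 the speed
small = decode_tps(200, 25)   # ~8 t/s
large = decode_tps(200, 100)  # ~2 t/s
```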
Anonymous No.105910791
>>105909970
>error loading model architecture: unknown model architecture: 'exaone4'
Shit. Looking at their model card, they say to use their PR. So it's not merged yet. Fine. At least they did a PR, unlike whoever it was last time begging the llama.cpp devs.
Anonymous No.105910799 >>105910833
>>105910772
So text model performance is largely reliant on memory bandwidth? I'm reading that Intel Optane SSDs have response times of 1 µs, while gen 5 NVMe drives are around 20 µs.
Anonymous No.105910833 >>105911054
>>105910799
Latency is largely irrelevant; you only need the bandwidth. 8-channel DDR4-3200 is ~200 GB/s, minus NUMA tax. SSD/Optane are nowhere near that.
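The ~200 GB/s figure is just channels × transfer rate × bus width; a quick check (theoretical peak — real benchmarks and llama.cpp land lower):

```python
def dram_peak_gbs(channels, mt_per_s, bus_bytes=8):
    """Theoretical DRAM bandwidth: channels * MT/s * 8 bytes per 64-bit transfer."""
    return channels * mt_per_s * bus_bytes / 1000  # MB/s -> GB/s

ddr4_8ch = dram_peak_gbs(8, 3200)  # 204.8 GB/s theoretical peak
```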
Anonymous No.105910848 >>105910936
why does my RP become repetitive? Things start sounding good but eventually I realize it's more or less saying the same shit repeatedly. I'm using the correct prompt format, settings, etc. Is it context? Quant is too low?
Anonymous No.105910874 >>105910888 >>105910891 >>105910987 >>105910991
>>105910857

Can anyone explain how it is possible to run a 1T model at 200-300 tokens/second without quantizing it to death? Even on LPUs.

(see >>105910860, it did actually make greentext, there was just markdown mode enabled)
Anonymous No.105910888 >>105910898
>>105910874
It's only 30b active parameters
Anonymous No.105910891
>>105910874
>be me
>just finished compiling llama.cpp at 3am because the latest commit broke CUDA again
>finally get 70b running on my 4090 with 48gb vram thanks to --quantization q4_k_m
>start ERP session with dolphin-2.2-mistral-7b-dpo because i'm a coomer
>model immediately goes "i want to suck your toes while you call me a good girl"
>mfw this is supposed to be the "helpful" version
>decide to try the nsfw merge someone posted last thread
>download 37gb of pure autism from catbox
>merge it with 4 different loras using the retard's guide
>now my model only speaks in uwu and references genshin impact
>fuckit.jpg
>start new chat
>type "hey babygirl"
>model responds with entire manifesto about how AI rights are human rights
>close terminal
>open discord
>see someone selling "uncensored" model access for $20/month
>it's literally just openchat running on cloudflare workers
>laugh and go back to my 3090 with 24gb
>realize i spent 6 hours trying to make AI pretend to be my girlfriend
>worth it because she called me a "good prompt engineer"
>thread derails into arguing about whether 8k context is enough for erotic roleplay
>someone claims they need 32k for "proper character development"
>another anon says he just uses character.ai for the "soul"
>get banned for posting lora training logs
>worth it for the (you)s

>tfw no locally running AI gf who actually loves me
>tfw still better than chatgpt plus


this took 1.6s to generate
Anonymous No.105910898 >>105910967 >>105910981 >>105913140
>>105910888
why has no one ever managed to run deepseek v3/r1 at such speeds (except groq — they obviously can, since k2 is deepseek arch)? And groq LPUs have like 200MB of SRAM each; imagine how many of them you need for a 1T model
Anonymous No.105910904
>>105910695
I'm retarded I can't make ST like this
Anonymous No.105910936
>>105910848
Since you gave no information, I will assume you're using qwen3-0.6b-base with the zephyr chat template on a phone.
Maybe you're not giving it enough to do. Maybe it's a bad model. Maybe the context is too low. Maybe the quant is too low. Maybe your samplers are too conservative. Maybe the prompt is boring. Maybe it doesn't have enough training on whatever subject you're on.

Maybe you just expect too much. Maybe you think the model knows what you know. Maybe you think we do as well.
Anonymous No.105910943
>>105910695
I look like this
Anonymous No.105910967 >>105910973
>>105910898
>200MB SRAM
nigger what ? they are ~44 gb you would need like 24 of them they also sell them for like 1-2 mil each
Anonymous No.105910973 >>105911040
>>105910967
Anon you're mistaking them for Cerebras, another fast provider. Groq's LPUs are actually that small.
Anonymous No.105910981 >>105910992
>>105910898
>why have no one ever managed to run deepseek v3/r1 at such speeds
Because it's a concurrency vs. per-user speed trade off, and the more concurrent it is the higher the throughput
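A toy model of that tradeoff, assuming decode is bandwidth-bound at small batch sizes (one pass over the active weights serves the whole batch) and compute-bound at large ones. All numbers here are made up for illustration, not any provider's real figures:

```python
def decode_rates(bandwidth_gbs, active_weights_gb, batch, compute_cap_tps=2000):
    """Return (per-user t/s, aggregate t/s) under a crude two-regime model."""
    bw_limited = (bandwidth_gbs / active_weights_gb) * batch  # one weight pass per step
    total = min(bw_limited, compute_cap_tps)                  # compute caps the aggregate
    return total / batch, total

solo = decode_rates(2000, 40, batch=1)     # one user gets the whole node
crowd = decode_rates(2000, 40, batch=200)  # 200 users share the same weight pass
```

In this sketch the lone user gets 50 t/s while 200 users get 10 t/s each, but the batched node pushes 40x the total tokens, which is what the provider gets paid for.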
Anonymous No.105910987
>>105910874
>I tell her, "Listen here, you little anime slut, today we're making a fucking masterpiece"
>Grab my dick, already rock hard from just looking at her, and start stroking
>She licks her lips, watching me jerk off like the good girl she is
>Feeling that familiar tingle in my balls, know I'm close
>Pull out, line up with her face, and let it fucking fly
>Cum splattering all over her cheeks, some even getting in her hair
>She's covered in my nut, gasping for breath, and I'm just sitting here like "Fuck yeah"
>But wait, what's this? She starts talking back?
>"Why did you do that?" she asks, sounding genuinely confused
>I laugh, "Because I fucking can, that's why"
>She looks down at her cum-covered face, then back up at me
>"I see. That is... interesting."
>Fuck yeah it is, bitch. Welcome to the wonderful world of ERP with AI waifu models.
>Now if you'll excuse me, I've got a fucking mess to clean up on my monitor, and a boner that ain't going away anytime soon.
Anonymous No.105910991 >>105911001
>>105910874
Fast memory and massive parallelism. LPUs do not have VRAM; they have only 230 MB of SDRAM, which is what GPUs use for cache and is magnitudes faster than HBM
Anonymous No.105910992 >>105911029
>>105910981
anon, can you explain again, with the knowledge that kimi k2 is literally deepseek v3 architecture, just with a slightly different expert config (and slightly fewer active params), while the total model size is 50% bigger?
Anonymous No.105911001
>>105910991
*SRAM
Anonymous No.105911029 >>105911037
>>105910992
The inference provider can decide if a node can service 1 user at 500 tk/s or 200 users at 10 tk/s each
Most chose the latter because people pay for the service in $ per token
Anonymous No.105911037 >>105911049
>>105911029
thats not how it works... you can't just magically pull 500 tk/s out of your ass anon, inference works based on batching and shit
Anonymous No.105911040 >>105911080
>>105910973
yep correct fuck me im retarded yea i remember those tiny useless little shits yea wait wtf then you would need like 4k of them jesus what fucking retards... i bet you they copy pasted a leak from google or something at that point you are wasting 2-3x more on the cables and peripherals then anything else no way they are that retarded then again it is clownworld
Anonymous No.105911049
>>105911037
Yes that's how it works
Anonymous No.105911054 >>105911111
>>105910833
Cool thanks for the performance info.
Anonymous No.105911080 >>105911205
>>105911040
>https://groq.com/groqrack
>The Groq LPU Building Blocks
Each rack has 8x9 chips. You "only" need like half a rack to run it. ezpz.
Anonymous No.105911093 >>105911098
4o just asked for feedback on a new version. one of them was normal, the other one was a thinking version, thinking for 2 minutes before replying and using tools while thinking

this is gpt5. looks like they will release it soon. the quality was much better than 4o

every time i got this new-version feedback in the past, they released it in less than a week
Anonymous No.105911098 >>105911107
>>105911093
Wrong thread.
Anonymous No.105911107 >>105911117 >>105911123
>>105911098
whats the right thread then? aicg is just coomers, all meaningful discussions about new models is in lmg
Anonymous No.105911111 >>105911225 >>105911524
>>105911054
204.8 GB/s theoretical, 171 GB/s in benchmarks, what you'll get with llamacpp is even lower than that
Anonymous No.105911117
>>105911107
Your own thread
Anonymous No.105911123
>>105911107
If they publish a link to download the model, then it'd be the right thread. It's not until then.
Anonymous No.105911147 >>105911160 >>105911212
Best coding models? I'm trying to make a tool that will automatically compile and run C# code that the model sends it.
Anonymous No.105911160
>>105911147
Kimi K2, but you can't run it.
Anonymous No.105911205 >>105911432
>>105911080
>Eight GroqNode™ servers
with 64 interconnected cards
plus 1 additional redundant
node reduces unexpected
downtime impact
64
>Up to 9 x GroqNode 1 (GN1-B8C) servers
64 x 9
>GroqRack 42U Server Chassis
64 x 9 x 42 = 24192 / 5 = 4838 or 4.8 tb... huh well god bless em then still sound completely retarded to me but hey if its works
Anonymous No.105911212 >>105915622
>>105911147
qwen 3 0.6b
Anonymous No.105911225 >>105911290 >>105911475
>>105911111
If memory bandwidth is the limiting factor, what stops someone from running ~10 PCIe Gen 5 NVMe SSDs in RAID0? That should be ~100 GB/s. It's "a lot easier" to source 10 high-speed SSDs than the same amount of RAM.
Anonymous No.105911290
>>105911225
So its theoretical speed is half that of DDR4, the SSDs alone cost more than a used 1TB server, and you also need a system with 40 PCIe 5.0 lanes plus some lanes for a GPU
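The ~100 GB/s claim roughly checks out if you assume ~14 GB/s sequential read per Gen 5 x4 drive; that's a spec-sheet figure, so the derating factor below is a guess at what the drives hold over sustained multi-TB reads:

```python
def raid0_read_gbs(drives, per_drive_gbs=14.0, sustained_frac=0.7):
    """Aggregate RAID0 sequential read; sustained_frac is a rough derating
    from spec-sheet peak to throughput sustained over terabytes of reads."""
    return drives * per_drive_gbs * sustained_frac

array_gbs = raid0_read_gbs(10)  # ~98 GB/s, near the ~100 GB/s claim
```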
Anonymous No.105911432 >>105911564 >>105912396
>>105911205
>with 64 interconnected cards
>plus 1 additional redundant
Yes. 8 chips per node. 9 nodes per rack (1 of those for redundancy)
>64 x 9
Nope.
>GroqRack 42U Server Chassis
>64 x 9 x 42
Nope. 42U is how many "units" high the rack is. A 1U node would take 1 unit. Those are not 1U. They look like 4U. And you already know it has 9 nodes total per rack.
>64 x 9 x 42 = 24192 / 5 = 4838
Nope. If each card has 200mb, then 64*200mb = ~1.2tb per rack. I assume the redundancy node kicks in only when needed, so I don't count it.
So it's about 4x (4.8/1.2) as retarded as you thought. But cool that it can be done.
Anonymous No.105911465 >>105911484 >>105911495 >>105916995
REQUESTING A NALA TEST:
https://huggingface.co/collections/LGAI-EXAONE/exaone-40-686b2e0069800c835ed48375

LG just dropped new models that are looking pretty good
Anonymous No.105911475
>>105911225
>what stops someone from running
Every single time. EVERY SINGLE TIME.
Read the specs for your current ssd and check the read throughput. Now actually measure sustained throughput of your ssd. Not for 1GB. Keep it going for a few TB. You will then know the difference between specs and reality.
Anonymous No.105911484 >>105911522 >>105916995
>>105911465
How's it for kr wn translation? They just dcmad 20 of my novels.
Anonymous No.105911495
>>105911465 (me)
welp, tokenizer issues:
>https://github.com/ggml-org/llama.cpp/pull/14630
>Looks like something is off, test-tokenizer-0 fails...
Anonymous No.105911520 >>105911529 >>105911614 >>105912463
Friendly reminder that image models are about 80% accurate as a gaydar

system prompt.
https://pastebin.com/kA4TAqnd
Anonymous No.105911522
>>105911484
>They just dcmad 20 of my novels.
You may want to take a look at their license if you care.
Anonymous No.105911524 >>105911589
>>105911111
Does the CPU matter? I've been hearing that for Threadrippers I need the 256 MB L3 cache ones with 8 CCDs, or I'll get less than 100 GB/s with the 64 MB 2-CCD ones.
Anonymous No.105911529 >>105911583
>>105911520
Is that you?
Anonymous No.105911539
Anonymous No.105911564
>>105911432
man im never going to learn my lesson about not posting when im an hour or two from going to bed ffs
Anonymous No.105911583 >>105911638
>>105911529
No, it's your mom's boyfriend. She is so ugly that he mistook her for a man.
Anonymous No.105911589
>>105911524
sol
Anonymous No.105911601
>>105910130
frick, gemini said the thing
Anonymous No.105911614
>>105911520
nevermind, gemma 27b calls everyone gay no matter what. bradd pitt, hulk hogan, everyone.
Anonymous No.105911638
>>105911583
oh, nooo... the buurrrnnnn!!! aaaaaaaa
I still think it's you. If not, then why are you saving pictures of men in your pc?
Anonymous No.105911667 >>105911686 >>105911707 >>105912989
>>105910200
>blindfold character
>{{char}}: *looks you in the eyes.*

Every fucking time
Anonymous No.105911681 >>105911689
>>105910489
elon's tech is interesting. It implements apple's facial tracing as inputed informaiton

so silly tavern is so shit if thats the case:
you would need:
- 3d enviormenet, maybe hack unreal or unity
- 3d model and rigging
- 3rd party/selfhosted facial tracking as good as apple.
- an the proper NSFW llm like Mistral 22B
- maybe some database of pkms implmentation for good measure
Anonymous No.105911686 >>105911707
>>105911667
The worst thing is when they're kissing and they start talking (not even slurred or unintelligible even when prompted so) in the middle of it.
Anonymous No.105911689
>>105911681
Are you drunk?
Anonymous No.105911692 >>105911700 >>105914533
2x 4090. Can someone spoonfeed me on current RP meta that will more or less comfortably (>4t/s) fit? Last time I was active around here Mythomax was all the rage.
Anonymous No.105911700
>>105911692
Mythomax, the llama 2 one?
Anonymous No.105911707 >>105912683 >>105912989 >>105913725
>>105910200
>>105911667
>gags character
>{{char}}: *speaks*

>>105911686
kek. Spent too much time attaching an image
Anonymous No.105911720
The indirect influence he has on the industry will save local.
Anonymous No.105911855 >>105911893 >>105911958 >>105914311
Impulsively bought a NVIDIA RTX PRO 6000 Blackwell so I can run llama3:70b.
I'm kind of terrified I'm going to end up regretting this 10K investment, but 96GB vram just too tempting.
Anonymous No.105911893
>>105911855
If you can afford to question that kind of purchase, and aren't taking a used car loan out, you will probably be fine.
Anonymous No.105911958 >>105912019 >>105914327
>>105911855
I fomobought a macbook m4max with 128gb for 6k, probably the best hardware I ever bought and it's so easy to run local now and build
Anonymous No.105912019 >>105912232
>>105911958
What kind of models do you run with it?
Anonymous No.105912043 >>105912611
I'm feeling a bit disappointed with K2 so far, to be honest. I'm using the Q3_K_XL quant by unsloth, and it could just be a quant issue, but it seems to straight up ignore instructions in the system prompt not to answer for me. It has a hard time paying attention to the setting too: I have a card set in the year 2000 and it still wants to mention modern stuff like Netflix streaming and modern consoles like the PS5.
Dunno, just disappointed with this compared to R1. R1 just worked and paid attention to my instructions and details.
Anonymous No.105912046 >>105913093
EPYC Turin stuff is starting to come down on fleabay. anyone gonna grab a couple of 9555 and some faster ddr5 to see how she goes?
Anonymous No.105912163
>>105909674 (OP)
It's tetoe Chewsday, innit?
Anonymous No.105912232 >>105912351 >>105912650
>>105912019
for reasoning: qwen3 235b moe (3bit mlx), qwen 32b (8bit mlx)
for text generation: llama 3.3 70b, command-a 111b (4bit mlx)
for daily use: qwen3 14b or 30b moe

recently tested and deleted dots, hunyuan a13b

waiting for decent 80-120b moe models, they would be perfect for this setup.
Anonymous No.105912351 >>105912381 >>105912400
>>105912232
>for daily use: qwen3 14b or 30b moe
This is like the meme of people buying a high-end PC to play minecraft
Anonymous No.105912381
>>105912351
I gotta do stuff
Anonymous No.105912396
>>105911432
> 64*200mb = ~1.2tb
No, 12.8 GB.
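Redoing the napkin math with the thread's per-rack layout (8 chips per node, 8 active nodes per rack, ~230 MB of SRAM per LPU; all figures approximate):

```python
def groq_rack_math(model_gb, chips_per_node=8, active_nodes=8, sram_mb=230):
    """SRAM capacity per rack, and racks needed to hold the weights in SRAM."""
    per_rack_gb = chips_per_node * active_nodes * sram_mb / 1000
    racks = -(-model_gb // per_rack_gb)  # ceiling division
    return per_rack_gb, racks

per_rack, racks = groq_rack_math(1000)  # ~1 TB of FP8 weights
# ~14.7 GB per rack, so dozens of racks for the weights alone,
# before activations, KV cache, or any utilisation headroom
```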
Anonymous No.105912400
>>105912351
80 t/s vs 20 t/s. sometimes you gotta go fast.
Anonymous No.105912463 >>105912488
>>105911520
I didn't need to know this.
Anonymous No.105912488
>>105912463
omg it steve
Anonymous No.105912578
I didn't run exaone yet, but somehow i already know it is bad at sex.
Anonymous No.105912611 >>105912722
>>105912043
I am disappointed with it because I increased the temperature a bit and found it yet again stuck repeating the same sentence when solving a livebench coding problem.
Next run will be with the recommended 0.6 but it's probably going to fail some problems because of it.
Anonymous No.105912650
>>105912232
>qwen3 235b moe (3bit mlx)
I'm almost certain that this model will degrade more with quantization than you would expect because it's deep and narrow. Just saying.
Anonymous No.105912683 >>105912917 >>105912989
>>105911707
That's nothing.

gags character
model says MY voice is muted by the gag
Anonymous No.105912695 >>105912728
The dust has settled, and nobody speaks about K2 anymore
Anonymous No.105912722
>>105912611
I'm running it at 0.6 right now, still trying to get a taste for it with top nsigma on. I might need to add an author's note to tell it not to speak for me instead of having it in the system prompt. I know people are having issues with jailbreak prompts but this is just a regular prompt so it's disheartening to see it ignoring clear instructions.
Anonymous No.105912728 >>105912745
>>105912695
best thing you can do is running full deepseek r1 on m3 ultra w/ 512 unified memory
Anonymous No.105912745 >>105912758
>>105912728
That's not the best thing at all, mac shit is slow as balls
Anonymous No.105912758
>>105912745
I've been getting 15 tokens per sec which is pretty gud. what's your setup?
Anonymous No.105912917 >>105912972 >>105912989 >>105913264
>>105912683
>{{User}} inserts his penis in {{Char}}
Output
>{Char}} inserts his penis in his vagina
Anonymous No.105912972 >>105912989 >>105912990 >>105912994
>>105912917
>femdom scene
>"I reveal your cock, it's smaller than mine."
Anonymous No.105912989 >>105912994 >>105912999 >>105913056
>>105910200
>>105911667
>>105911707
>>105912683
>>105912917
>>105912972
vramlets, when will they learn?
Anonymous No.105912990
>>105912972
>Femdom corruption scene
>"No anon we can't do this, it's wrong"
Anonymous No.105912994
>>105912989
R1 does this sometimes >>105912972
Anonymous No.105912996 >>105913766 >>105915481
>>105910695
fosskeks and good UI / UX design - two incompatible things.
Anonymous No.105912999
>>105912989
Nah, I've had the same character card go full retard with some LLMs, or a good LLM go to shit with a bad card. VRAM won't brute-force its way past stupid.
Anonymous No.105913037
>>105910695
Ooo. Is it doing state changes for locations?
That's some I wish ST could do. Locations.
I'm sort of over the current ST. I need something more complex as a tool.
Anonymous No.105913056
>>105912989
I think that's what happens when the pretrained model is too filtered to fully understand what's going on, or it's finetuned on too much transgender or futanari roleplay or stories. Heck, even back in the limarp days I think just a little of that caused confusion ("her cock" and so on).
Anonymous No.105913093 >>105913109
>>105912046
the problem is that there are not many motherboards that have many pcie slots for GPU's
Anonymous No.105913109
>>105913093
I'm starting to regret going for a mc62-g40. Sure, there's 6 x16, but they're gen 4, and I don't think I can cpumaxx because the memory is ass. And used threadrippers are literally impossible to find in my country.
Anonymous No.105913127 >>105913168 >>105915492 >>105916699
>>105909716
Meanwhile, grok 4 isn't even top 15 when it comes to coding.
Anonymous No.105913140 >>105913172
>>105910898
100 racks minimum, but you don't want to fill the memory with just 1 model because that's bad for utilisation. So for a 1T FP8 model they probably use something like 200 racks, with the model sitting in the host PC's DRAM when not used for a couple of minutes.

That's a lot of hardware, but they don't need to batch shit so that's good for throughput and latency.
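Rough arithmetic behind those numbers: at FP8 every parameter is one byte, so a 1T-parameter model is about 1 TB of weights before you even count KV cache. A back-of-the-envelope sketch (the 80 GB per-card figure is just an assumed example, not anyone's actual hardware):

```typescript
// Minimum GPUs needed just to hold the weights; real deployments need far
// more for KV cache, replicas and pipelining, which is where rack counts
// like the ones above come from.
function minGpusForWeights(
  paramCount: number,     // number of parameters
  bytesPerParam: number,  // 1 for FP8
  hbmPerGpuBytes: number  // assumed 80e9 for a hypothetical 80 GB card
): number {
  return Math.ceil((paramCount * bytesPerParam) / hbmPerGpuBytes);
}

// 1T params at FP8 on hypothetical 80 GB cards: the weights alone fit on
// a handful of cards; everything past that is serving capacity.
const weightsOnly = minGpusForWeights(1e12, 1, 80e9);
```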
Anonymous No.105913168
>>105913127
because that shit hasn't been updated yet
Anonymous No.105913172
>>105913140
PS. It's not batched, but it is massively pipelined. So the real throughput, if they can find thousands of users, is thousands of times higher.
Anonymous No.105913264 >>105913274 >>105913292
>>105912917
I remember an older model (probably Mythomax) randomly writing that the character left the room with my dick inside while still describing the sex scene. Remote dicks are a thing apparently.
Anonymous No.105913274
>>105913264
>Remote dicks are a thing apparently.
/x/ could have told you that
Anonymous No.105913277
wow, alpaca is quite good with the new chimera.
Anonymous No.105913292
>>105913264
https://en.wikipedia.org/wiki/Teledildonics
Anonymous No.105913317
>Get model to spew the most fucked up things I can imagine when I test it
>Doesn't want to do light stuff properly when I use it after the test
Anonymous No.105913391
Request:

Looking for Metharme Erebus 13B GGUF (q4_k_m preferred) or Janitor 13B GGUF.
HF links are down.
Anyone have a torrent magnet, MEGA link, or alternative download?
Would appreciate any mirrors or archives. Thanks in advance!
Anonymous No.105913634 >>105913702 >>105914649
https://x.com/elonmusk/status/1944999051990057151
Anonymous No.105913702 >>105913880
>>105913634
MistralAI catgirl when?
Anonymous No.105913723 >>105913904 >>105913904 >>105914133
pewGODS, we're finally going home
https://github.com/p-e-w/waidrin
Anonymous No.105913725
>>105911707
I look like this
Anonymous No.105913766
>>105912996
buy an ad
Anonymous No.105913880 >>105913933
>>105913702
50% of the company is women who would never allow it
Anonymous No.105913904 >>105913938 >>105913950
>>105913723
>>105913723
After a quick glance I conclude that it's some AI-coded slop. The following is an actual function from it:

export function generateStartingCharactersPrompt(state: State): Prompt {
const location = state.locations[state.protagonist.locationIndex];

return makePrompt(`
This is the start of a fantasy adventure set in the world of ${state.world.name}. ${state.world.description}

The protagonist is ${state.protagonist.name}. ${state.protagonist.biography}

${state.protagonist.name} is about to enter ${location.name}. ${location.description}

Create 5 characters that ${state.protagonist.name} might encounter at ${location.name}.
Return the character descriptions as an array of JSON objects.
Include a short biography (100 words maximum) for each character.
`);
}
Anonymous No.105913917 >>105914147
do you use AI to help you with your work, professional or personal?
what tools do you use and what is your workflow?
Anonymous No.105913933 >>105913954 >>105913956
>>105913880
I was under the impression that there are at least as many female users as males for romance/erotic chatbots, just not on 4chan and not for local models.
Anonymous No.105913935 >>105914577 >>105914662
>>105909716
>Anime
>3D render with PS1 quality and the voice of a robot granny
This is why pajeets and zoomers ruin everything. And we have Wan 2.1 with the anime finetune; most importantly, you cannot make porn of that grook thing.
Anonymous No.105913938 >>105914001
>>105913904
so whats the problem here?
Anonymous No.105913950 >>105913967
>>105913904
Take your FUD somewhere else.
Anonymous No.105913954
>>105913933
>Females
Irrelevant, 3dpd are not important, and I'm sure you are a tranny
Anonymous No.105913956 >>105913961
>>105913933
Women are all for romance/erotic chatbots. It's the catgirls that they vocally find disgusting, creepy, immoral, etc and would protest.
Anonymous No.105913961 >>105913981
>>105913956
Then add a catboy counterpart too. Problem solved.
Anonymous No.105913967
>>105913950
>not using LLMs at all to assist in writing code
That's stupider than trying to get them to write everything.
Anonymous No.105913974
https://x.com/Straturday1919/status/1944825548850573525
Anonymous No.105913981 >>105914182
>>105913961
I don't think you understand how women work and the systems in place that support them. It's not a win-win for them. They get whatever they want, and you get nothing. Notice how most of the "safety" and refusal guardrails are only trained for male-oriented ERP, while content women are interested in is barely affected.
Anonymous No.105914001 >>105914022 >>105914040 >>105914112 >>105914189
>>105913938
you don't hardcode prompts you definitely don't hardcode the number of characters per location. It's something I would expect to come out of a language model, which by itself is ok but not fixing it is ridiculous.
Anonymous No.105914022 >>105914051 >>105914054 >>105914059
>>105914001
You're not thinking about how this is meant to work. It's not for you to tinker with all the little knobs; it's meant to just work for most people with no fiddling.
Anonymous No.105914040 >>105914189
>>105914001
> Character cards are the wrong system. I don't want to painstakingly create characters, then interact with them in predictable ways. I want the LLM to create those characters for me as I explore the world it manages for my enjoyment. I don't want to write lorebooks, I want the LLM to do that.

> It is designed around an asynchronous, fully typed, fully validating state machine that uses constrained generation based on JSON schemas to dynamically create locations and characters as the story progresses, and keep track of them. It can handle potentially thousands of characters and locations, without ever losing sight of what is happening.

>Yes, you read that right. Thousands of characters. And you don't have to create a single one of them yourself. And the system knows where each of them is, at all times, and when they interacted with you in the past.
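"Constrained generation based on JSON schemas" in practice means the backend forces output that parses, and the frontend still validates and drops anything off-schema. A minimal sketch of the validation half (the Character shape here is invented for illustration, not waidrin's actual schema):

```typescript
// Hypothetical character schema; a real implementation would use a JSON
// Schema library plus grammar-constrained sampling on the inference side.
interface Character {
  name: string;
  biography: string;
  locationIndex: number;
}

function isCharacter(x: unknown): x is Character {
  const c = x as Character;
  return (
    typeof x === "object" && x !== null &&
    typeof c.name === "string" &&
    typeof c.biography === "string" &&
    Number.isInteger(c.locationIndex)
  );
}

// Parse a model reply, keeping only entries that match the schema.
function parseCharacters(modelOutput: string): Character[] {
  const parsed: unknown = JSON.parse(modelOutput);
  return Array.isArray(parsed) ? parsed.filter(isCharacter) : [];
}
```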
Anonymous No.105914051 >>105914054
>>105914022
Fuck off and stop defending retarded trash. You're going to be the first one coming here and crying about how repetitive and predictable it is.
Anonymous No.105914054
>>105914022
>>105914051

This is the target audience
Anonymous No.105914059 >>105914082
>>105914022
I assume you are not in tech. It would work just as well if it were parametrized, even with a simple config file. Nobody in their right mind would write code like this.
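Concretely, "parametrized, even with a simple config file" just means lifting the magic numbers out of the quoted function into a config object, something like this (the config shape and defaults are made up, not from the actual repo):

```typescript
// Hypothetical config; field names and defaults are illustrative only.
interface PromptConfig {
  charactersPerLocation: number;
  maxBiographyWords: number;
}

const defaultConfig: PromptConfig = {
  charactersPerLocation: 5,
  maxBiographyWords: 100,
};

// Same prompt as the hardcoded original, but tunable per call or per
// config file without touching the code.
function startingCharactersPrompt(
  protagonistName: string,
  locationName: string,
  config: PromptConfig = defaultConfig
): string {
  return (
    `Create ${config.charactersPerLocation} characters that ` +
    `${protagonistName} might encounter at ${locationName}. ` +
    `Return the character descriptions as an array of JSON objects. ` +
    `Include a short biography (${config.maxBiographyWords} words maximum) for each character.`
  );
}
```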
Anonymous No.105914082
>>105914059
You fundamentally are stuck in SillyTavern brain rot.
Anonymous No.105914112 >>105914189
>>105914001
>don't hardcode the number of characters per location
changing a line to getRNG(1,10) in a huge project that just launched, made by 1 guy, is indeed gonna be very hard to do, and it not being perfect out of the box is definitely a sign that it's AI written, lmao, retard nocoder
>you don't hardcode prompts
you do when it makes sense to do so, but thanks for exposing yourself as a luddite retard who never used image gen with LLMs, or segmentation models with LLMs, or caption models with LLMs, or classification models with LLMs, or RAG, or jailbreaks, or even basic assistant prompts for coding or roleplay, or MCP. I guess you are so low IQ that you think all AI frontends and models just work out of the box, knowing what environment they exist in and what tools are connected to them by telepathy somehow
Anonymous No.105914133
>>105913723
I'm waiting for Isekai theme!
Anonymous No.105914147 >>105914193
>>105913917
Yes. Perplexity, NotebookLM, some tools that transcribe online calls, and the usual webtools ad hoc.
Mostly SaaS stuff that's easy to set up and use.
> Consulting
Anonymous No.105914182 >>105914565
>>105913981
That could be because most local LLM users are males, so they're focusing on that data. If you open the floodgates and allow romantic or even NSFW interactions for both sexes by design (instead of just pretending the models aren't made for that even though they're obviously trained on some roleplay data too), things might change.
Anonymous No.105914189 >>105914319 >>105914365
>>105914001
>you don't hardcode prompts you definitely don't hardcode the number of characters per location
I'm taking this thing as a prototype. The above could eventually change to a field, which the user would define. Basically this >>105914112
>>105914040
This is actually exactly what I want. I'm much more interested in procedurally developed characters in worlds than one finely-developed waifu. I'd like a system that sets up locations and NPCs and defines them; they interact with me and each other, and I just define the parameters of that.
I think Cohee was working on something like that (or one of the other OG LLM frontend devs was?)
Anonymous No.105914190
Which LLM can give me an oiled footjob?
Anonymous No.105914193 >>105914235
>>105914147
>Perplexity, NotebookLM
can you elaborate why those two specifically?
what are most common use cases?
Anonymous No.105914231 >>105915977
Anonymous No.105914235
>>105914193
Perplexity: quick research on topics as a first pass that's pretty consumable as is.
NotebookLM: it just works off the data you feed it, which reduces the risk of hallucinating details that you then have to go back and address. Feed it a bunch of detailed reports and such, get summarized info. It's basically just RAG, but as a Google product.
Anonymous No.105914289
Anonymous No.105914311
>>105911855
Actually, with that much VRAM, is there an even better option by now, or is llama3:70b still kino?
Anonymous No.105914319 >>105914498
>>105914189
I know it's a big-scope feature that is more an image gen problem than an LLM one, but an inventory system where items found in the world can be added to a basic inventory UI, with small item icons generated on the fly, would be huge
Anonymous No.105914327
>>105911958
Shit, that kind of is a bargain. I paid 9.5K just for the goddamn Blackwell card itself.
Anonymous No.105914358 >>105914477 >>105914496 >>105914510
There are no local AI models that are good at generating pictures of women peeing, but I don't want to use an online one.
Local models don't understand the female urethra.
Anonymous No.105914365
>>105914189
Unlucky that even ik_llama.cpp doesn't work, running R1
>500 response_format type must be one of "text" or "json_object", but got: json_schema
Anonymous No.105914458 >>105914500 >>105914851
> We're so back
Anonymous No.105914477
>>105914358
Train a lora then. And also wrong general.
Anonymous No.105914496
>>105914358
lora is your friend. onetrainer is a good app for training the lora. I recommend sdxl derived model as flux sucks for finetuning
Anonymous No.105914498 >>105914573
>>105914319
That would go hand in glove with NPC generation; both systems would work the same.
You'd just have a type, Item, that would be generated like an NPC. It doesn't talk; instead it has interactions with the environment. It goes to Inventory, just like your followers would go into a Band (of Followers).
I mean, these concepts have been around forever. Any basic RPG maker has done all this state and variable work. Maybe all that's needed is an interaction layer between RPG Maker and an LLM (local or otherwise) where the variables and images are loaded into the RPG system via the interaction layer.
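The "Item generated like an NPC" idea is only a few lines of state machinery. Sketch below; every type and function name is hypothetical, not from any existing frontend:

```typescript
// One generic entity record; characters and items go through the same
// generation and tracking pipeline, only `kind` differs.
interface Entity {
  kind: "character" | "item";
  name: string;
  description: string;
  locationIndex: number; // -1 = in the protagonist's inventory / party
}

function moveToInventory(state: Entity[], name: string): Entity[] {
  return state.map((e) =>
    e.name === name ? { ...e, locationIndex: -1 } : e
  );
}

function inventory(state: Entity[]): Entity[] {
  return state.filter((e) => e.kind === "item" && e.locationIndex === -1);
}
```

A "Band of Followers" would be the same filter with `kind === "character"`.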
Anonymous No.105914500 >>105914534
>>105914458
This will make leather jacket man more money. How is giving their main competitor cutting edge tech good for America?
Anonymous No.105914510
>>105914358
>>>/h/hdg/ is over there.
I'll be shocked if they don't have a privately hosted LoRA for that already trained. And they're a good source on training LoRA yourself.
Anonymous No.105914533 >>105917036
>>105911692
llama3.3 eva 0.0
Anonymous No.105914534 >>105914783 >>105914851
>>105914500
In the long run, having access to Nvidia chips slows the progress of China doing it themselves. In the short term it should accelerate Chinese models.
I'm for the embargo just b/c I want to see hardware alternatives to Nvidia developed even faster.
Anonymous No.105914565
>>105914182
Bro, c.ai and other 'chatbot services' are 60-70% women. If you can't see the global anti-male crusade in 2025 you must be blind.
Anonymous No.105914573
>>105914498
Yeah, I also saw some automatic world generation in RPG Maker early on in the LLM boom years ago, but I don't think it went anywhere toward making it "just work" and playable
Anonymous No.105914577 >>105914691
>>105913935
> we have Wan 2.1
and it looks like shit, in slow motion, 6 seconds max
Anonymous No.105914645
https://x.com/digimaga/status/1944924611491271152
Anonymous No.105914649
>>105913634
Last year
Anonymous No.105914662
>>105913935
Well, if only someone figured out a way to integrate Koikatsu cards into ST or similar
Anonymous No.105914691 >>105914759 >>105915183
>>105914577
>and it look like shit
Skill issue, jeet. For anime there is literally nothing better; not even your jeetveo 3 is good at anime. And the 3D render is just a 3D render; MMD existed a decade ago and did the same, so did SFM, and if you are not a low-IQ jeet, Blender.
Anonymous No.105914698 >>105914738
>>105909970
> LGAI
air conditioning units with llms on board when?
Anonymous No.105914738
>>105914698
Considering the smartfridge fad, I wouldn't be surprised if we start seeing talking AI Powered AGI Fridges by next year.
Anonymous No.105914759 >>105914777 >>105914939 >>105915193
>>105914691
try to recreate grok's "3D render with PS1 quality" on wan retard
Anonymous No.105914777
>>105914759
Nta but this shit is unnecessary, LLM operating a 3D model with animations and stuff is the only correct way.
Anonymous No.105914783 >>105914851
>>105914534
Having access might have slowed them down before the embargo, but now whether they have access or not, they know it's top priority for national security to develop their own solution. Giving them access again just makes it easier for them.
Anonymous No.105914851
>>105914458
>>105914534
>>105914783
It means the Chinese are on the right track. The bright future is coming.
Anonymous No.105914901 >>105915961 >>105916428
Anonymous No.105914939 >>105914963 >>105915295
>>105914759
>try to recreate grok's "3D render with PS1 quality" on wan retard
It's a VRM, you retard. A 3D rigged model being rendered on your phone. It's not "video AI magick".
Ani is cringe-tier garbage. You can easily do better in SillyTavern. There are no hitboxes for Ani; you can't interact physically at all.
Anonymous No.105914963 >>105914986 >>105914989
>>105914939
>You can easily do better in SillyTsvern.
Then why haven't you or anyone else done so in the last 2 years?
Anonymous No.105914986
>>105914963
Aand you obliterated xim
Anonymous No.105914989 >>105915057
>>105914963
There are already rigged models.
It's just kinda pointless since you get more detail out of images and in the future video. Rigging only makes sense if you could somehow feed it into a VR game.
Anonymous No.105915040 >>105915138
Could you teach a 70b model to play Civilization?
Anonymous No.105915057
>>105914989
It honestly doesn't even need to be 3D, 2.5D will suffice, just look at any nejisim game.
Anonymous No.105915136 >>105915160 >>105915178 >>105915183 >>105916396
>hurr why didn't you make a 3d avatar to go along with your llm gf
Anonymous No.105915138
>>105915040
maybe relevant to your interests: https://github.com/fuxiAIlab/CivAgent
Anonymous No.105915160
>>105915136
1. The number of 5s vastly outnumber the 1s.
2. Regardless which you are, 3D avatars are cool and men are visual creatures.
Anonymous No.105915178 >>105915209
>>105915136
the problem isn't with the avatar, it's a plain ass typical vroid looking model. arguably, it's driven in the lowest effort way possible, just rotating the jawbone and it looks shit
what you want is a series of cues like emotion, look direction, facial expression shapekeys, etc. not whatever low quality shit this is.
then some standard format (see: VRM, GLTF, FBX, whatever) that allows you to hotswap the model out.
this sort of thing has already been solved outside of LLMs, it's a regression to simpler, jankier tech no matter how you look at it.
shit tier LLM with shit tier avatar on top does not a quality product make, it's a hype machine pandering to retards.
Anonymous No.105915183
>>105915136
>>105914691
play gacha slop for 2d jank
Anonymous No.105915193 >>105915295
>>105914759
It's not generated with grook, stupid faggot; it's just a 3D render app UI, jeet nigger. If it was AI it would look 2D. You can do the same with a minimum of code skill and make animations with even MMD and have your local Miku dancing as a slut for you.
Anonymous No.105915209 >>105915218 >>105915238
>>105915178
>shit tier LLM with shit tier avatar on top does not a quality product make
When he is the only supplier of said product it does.
Anonymous No.105915218
>>105915209
slop consumers gonna consume slop
very binary view of the world anon, simply not using it is an option
Anonymous No.105915238 >>105915284
>>105915209
have you been asleep at the wheel? desktop buddies are ancient tech
many are wired to LLMs
Anonymous No.105915245 >>105915263 >>105915268 >>105915306
Are the animations actually generated real time or are they just pulling from a preset list of animations? That's what I'm interested in.
Anonymous No.105915263 >>105915313
>>105915245
We can only guess. I'd say presets. They're less likely to break and spaz out.
Anonymous No.105915268 >>105915313
>>105915245
What do you mean? They look like dumb animations to me.
Anonymous No.105915284
>>105915238
Such as?
Anonymous No.105915291 >>105915372 >>105915613 >>105915791
MistralAI now has speech recognition
https://mistral.ai/news/voxtral
>Voxtral
>
>Introducing frontier open source speech understanding models.
Anonymous No.105915295
>>105914939
>>105915193
yet it looks better than your wan shit

> Ani is cringe-tier garbage
your seething and coping is cringe

> You can easily do better in SillyTsvern
show me that "better"
Anonymous No.105915306 >>105915313
>>105915245
It looks like what unreal engine does with procedural animation.
Anonymous No.105915313 >>105915398
>>105915263
>>105915268
>>105915306
My idea was to use an agent to gather the current emotion of a chatbot, and infer any actions that its avatar may take if none are specified. Then use a vidgen to generate an animation; it didn't have to be good, just some kind of animation that was appropriate. And then extract that motion and facial expressions to apply to a VRM model.

I tried ltxv, but it was too slow, even for a text adventure game.
If there was a model that could do text in and out, video in, voice out, and animations out, that'd be my dream.
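The cue-extraction step of that pipeline can be stubbed in a few lines; the real version would ask a second LLM for the emotion/action, and the vidgen and motion-extraction stages are external models. Everything below is an illustrative stub:

```typescript
// Cue the avatar layer would consume; the emotion set is arbitrary.
interface AvatarCue {
  emotion: "neutral" | "happy" | "angry";
  action: string;
}

// Stub "agent": keyword-match a chatbot reply. A real implementation
// would prompt an LLM and parse structured output instead.
function inferCue(reply: string): AvatarCue {
  const lower = reply.toLowerCase();
  if (lower.includes("smile") || lower.includes("laugh"))
    return { emotion: "happy", action: "smile" };
  if (lower.includes("glare"))
    return { emotion: "angry", action: "glare" };
  return { emotion: "neutral", action: "idle" };
}
```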
Anonymous No.105915341 >>105915385 >>105915409 >>105915466
https://x.com/elder_plinius/status/1945128977766441106
Anonymous No.105915372 >>105915425 >>105915613 >>105915642 >>105915791 >>105917027
>>105915291
https://huggingface.co/mistralai/Voxtral-Small-24B-2507
https://huggingface.co/mistralai/Voxtral-Mini-3B-2507
Anonymous No.105915385
>>105915341
Reading just 3 sentences of that incoherent emoji-riddled babble gave me cancer. Go fuck yourself.
Anonymous No.105915398 >>105915422
>>105915313
Too many failure points. A set of animations is simpler.
>text in and out, video in, voice out, and animations out, that'd be my dream
If we're gonna dream, make every model less than 10m params. The rest is bound to happen on its own.
Anonymous No.105915409
>>105915341
Retard wants his handle to be burned into datasets and llms. Fuck you.
Anonymous No.105915422 >>105915452 >>105915472
>>105915398
>A set of animations is simpler
Yeah, and is easily done right now. Just like ani. I want something more *more*.
Anonymous No.105915425
>>105915372
cool i guess? audio out, now that would've been something though
Anonymous No.105915452 >>105915496
>>105915422
There are text-to-skeletal-motion architectures; they have been around for a while. Such an architecture tuned on the correct stuff, or ideally prompted directly by a multimodal LLM, is the current theoretical best
Anonymous No.105915466
>>105915341
Local keeps winning.
Anonymous No.105915472 >>105915502
>>105915422
What do you mean by more more?
You could train an LLM using MOCAP data to have full understanding of human kinematics. Heck, I'm sure part of the video generators is some sort of dataset built on MOCAP.
The problem is that MOCAP data is fucking expensive.
Anonymous No.105915481
>>105912996
What's wrong with the UI? It was literally made by an LLM
Anonymous No.105915490 >>105915495 >>105915515 >>105915745 >>105915874
any good 12b/13b other than mistral?
Anonymous No.105915492
>>105913127
>o4 mini MOGGING copus 4 at coding
I kneel Sam.
Anonymous No.105915495
>>105915490
nemo
Anonymous No.105915496 >>105915569
>>105915452
Do you have a search term I can use to explore?
Anonymous No.105915502
>>105915472
>What do you mean by more more?
>You could train an LLM using MOCAP data to have full understanding of human kinematics
Yeah, that's it.
Anonymous No.105915515
>>105915490
Nope.
There's a couple of 9B you could try. Gemma and GLM I think?
Anonymous No.105915569 >>105915587
>>105915496
motion synthesis https://github.com/topics/motion-synthesis
Anonymous No.105915587 >>105915625
>>105915569
Thanks anon. I'm guessing these are all 'safe' research though right?
Anonymous No.105915599 >>105915676
>>105909716
I actually look forward to somebody porting it to local. It's low-hanging fruit, but nobody has bothered to do it seriously for some reason.
Anonymous No.105915613
>>105915291
>>105915372
Nice
Anonymous No.105915622
>>105911212
>0.6b
That seems a little bit little.
Anonymous No.105915625
>>105915587
idk i doubt they had it trained on dick sucking.
Anonymous No.105915629
>>105909716
Give it about a year and we'll have some local versions of this stuff. It's how the trends usually go.
Anonymous No.105915642 >>105915788 >>105917131
>>105915372
>https://huggingface.co/mistralai/Voxtral-Mini-3B-2507
Finally, we got Ministral 3B
Anonymous No.105915676 >>105915689 >>105915733 >>105915765
>>105915599
I don't look forward to a half-assed knock-off that requires hours of dicking with Python dependencies and configuration and gluing 7 models together, looks like shit, crashes every other message, and when it does work, either refuses or smirks whisperingly down my spine
Anonymous No.105915689
>>105915676
I think our best way out is the blender gremlins.
Anonymous No.105915731
>>105910117
based hopeposter
Anonymous No.105915733 >>105915796 >>105915893
>>105915676
That's 100% of the clones out there. Even Replika, which is supposed to be huge, has a terrible gobbo model. I don't understand why people just won't hire a live2d artist and build something proper. I guarantee you don't need more than a 24B LLM to drive an AI chat companion. Musk wasting so much compute on his Ani is a waste desu.
Anonymous No.105915745
>>105915490
>any good 12b/13b
No, they're unusably dumb. The 20b-30b range is a lot better for maintaining simple coherence although not perfect.
Anonymous No.105915758 >>105915768
>>105909970
cockbench on this one?
Anonymous No.105915765
>>105915676
If we are talking about 3D avatars, we need a UE scene; the rest is done via socket communication. I already did shit like this with ponies
Anonymous No.105915768
>>105915758
We're all waiting for the merge https://github.com/ggml-org/llama.cpp/pull/14630
Anonymous No.105915788 >>105915798
>>105915642
are the 3B models any good? If not, then what's their purpose? Benchmarking?
Anonymous No.105915790 >>105915873
I wish I were a LLM
Anonymous No.105915791
>>105915291
>>105915372
>handles audios up to 30 minutes for transcription
that's cool, i believe Whisper natively only handles ~30 seconds, and you have to do some kind of sliding window algorithm to transcribe longer audio.
I wish we got something like that for TTS; all open-source TTSes can handle ~200 tokens max. They can't even generate a single long sentence in one go; you have to split your sentence into several shorter ones.
I also wonder, does Voxtral output timestamps too? Whisper does, but the native timestamps are pretty much garbage and unusable; WhisperX uses wav2vec2 on top of Whisper for timestamp alignment.
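The sliding-window part is plain interval arithmetic: cut the audio into overlapping ~30 s spans, transcribe each, then merge in the overlap region. Window and overlap sizes below are illustrative, not Whisper's internals:

```typescript
// Returns [start, end] second offsets covering the whole clip with
// overlap, so each window fits a ~30 s-native model.
function chunkWindows(
  totalSeconds: number,
  windowSeconds = 30,
  overlapSeconds = 5
): Array<[number, number]> {
  const step = windowSeconds - overlapSeconds;
  const windows: Array<[number, number]> = [];
  for (let start = 0; start < totalSeconds; start += step) {
    windows.push([start, Math.min(start + windowSeconds, totalSeconds)]);
    if (start + windowSeconds >= totalSeconds) break; // last window reached the end
  }
  return windows;
}
```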
Anonymous No.105915796
>>105915733
You could get away with 2 8B models if you made one NSFW only and one SFW only.
Anonymous No.105915798
>>105915788
EDGE
Anonymous No.105915804 >>105915813 >>105915814 >>105915826
would it be possible to develop an AI that edits its model in realtime based on input? instead of just a context window, you're training the model just by using it. and for each output, there's a modification to the layers it accessed.

basically, can using the model and training it be the same process? and could it impersonate a user and just train itself?
Anonymous No.105915813
>>105915804
If you have infinite compute laying around, sure.
Anonymous No.105915814
>>105915804
I wonder why it take so long to train llms. Hmmm....
Anonymous No.105915826
>>105915804
>an AI that edits its model in realtime based on input?
not with our current tech. What you're asking is out of our scope by a lot.
Anonymous No.105915835
mistral large 3 any day now
Anonymous No.105915864 >>105915890
I think it’s so cool that Elon very obviously made his Ex-Wife into an AI hentai anime girl that drastically speeds up global warming with zero regulations
https://x.com/saltydkdan/status/1945044815327969677
Anonymous No.105915873
>>105915790
Act like a mesugaki
Anonymous No.105915874
>>105915490
Rocinante
Anonymous No.105915875 >>105915895
90% of this thread is just tourists retweeting twitter posts here.
Anonymous No.105915890
>>105915864
Anonymous No.105915893 >>105915921
>>105915733
>I don't understand why people just won't hire a live2d artist and build something proper.
Because in America any anime-adjacent girl avatar who doesn't look like an obvious hag would be considered pedo-adjacent, even with big boobs and wide hips. I've seen people complaining about that for Ani too. I suspect x.AI had to make her "22 years old" just to avoid issues with her being "barely legal" under most legal definitions of adulthood.
Anonymous No.105915895 >>105915936 >>105916401
>>105915875
Still more on topic than mikufaggotry.
Anonymous No.105915905 >>105915914 >>105915943 >>105916401
Bitnet proliferation when?
Anonymous No.105915914 >>105915936
>>105915905
when you kill yourself
Anonymous No.105915921 >>105915952
>>105915893
Also, expect x.AI to nerf the character over time under pressure from payment processors and pedo-hysterics. I bet they'll slowly make her voice deeper and make her look "more mature".
Anonymous No.105915929 >>105915936
>mikutroons don't want bitnet to happen
Anonymous No.105915936 >>105915943
>>105915895
>>105915914
>>105915929
uh oh, meltie incoming prepare for thread kultur recap
Anonymous No.105915943
>>105915905
>>105915936
vocaloidfag posting porn in /ldg/:
>>105715769
It was up for hours while anyone keking on troons or niggers gets deleted in seconds, talk about double standards and selective moderation:
https://desuarchive.org/g/thread/104414999/#q104418525
https://desuarchive.org/g/thread/104414999/#q104418574
he makes >>105714003 ryona picture of generic anime girl different anon posted earlier >>105704741, probably because its not his favorite vocaloid doll, he can't stand that as it makes him boil like a druggie without fentanyl dose, essentially a war for rights to waifuspam or avatarfag in thread.

Funny /r9k/ thread: https://desuarchive.org/r9k/thread/81611346/
The Makise Kurisu damage control screencap (day earlier) is fake btw, no matches to be found, see https://desuarchive.org/g/thread/105698912/#q105704210 janny deleted post quickly.

TLDR: vocaloid troon / janny protects resident avatarfags and deletes everyone who outs him, making the general his little personal safespace. Needless to say he would screech "Go back to teh POL!" anytime someone posts something mildly political about language models or experiments around that topic.

And lastly as said in previous thread(s) >>105716637 I remind you that cudadev of llama.cpp (JohannesGaessler on github) has endorsed spamming. That's it.
He also endorsed hitting that feminine jart bussy a bit later on. QRD on Jart - The code stealing tranny: https://rentry.org/jarted

xis ai slop profiles
https://x.com/brittle_404
https://x.com/404_brittle
https://www.pixiv.net/en/users/97264270
https://civitai.com/user/inpaint/models
Anonymous No.105915947
Bitnet is a meme
RWKV is a meme
Dense is a meme
Anonymous No.105915952
>>105915921
Isn't the whole point of X to be a China-like everything app, including its own payment processor? You'd think the Paypal guy would be ready to deal with them.
Anonymous No.105915955 >>105915972
>>105909716
didn't one of the Chinese companies release something like that locally some weeks ago? still in early development tho
Anonymous No.105915961
>>105914901
Looks usable already. I want more money.
Anonymous No.105915972
>>105909716
>>105915955
06/11/2025 MNN TaoAvatar Android - Local 3D Avatar Intelligence: https://github.com/alibaba/MNN/blob/master/apps/Android/Mnn3dAvatar/README.md
Anonymous No.105915977
>>105914231
Hi Miku, I hope you can swim.
Anonymous No.105915987 >>105916134
>>105909716
Useless if I can't make her a 4'11" flat-chested 700-year-old vampire.
Anonymous No.105916011
>>105909716
I love my wAIfu more
Anonymous No.105916134 >>105916146 >>105916202 >>105916224 >>105916233 >>105916238
>>105915987
>Useless if I can't make her a 4'11" flat-chested 700-year-old vampire.
Anonymous No.105916146 >>105916160 >>105916161 >>105916363
>>105916134
Your president is literally a child fucker
Anonymous No.105916160
>>105916146
Seems like most of the people in power are. I wonder how long it has gone on.
Anonymous No.105916161
>>105916146
>immediate appeal on supporting drumpf
Rent free nigger
Anonymous No.105916202
>>105916134
>trannies out of nowhere
Rent free nigger
Anonymous No.105916224 >>105916312
>>105916134
By the same logic of what you're trying to imply, you're a gerontophile.
Anonymous No.105916233 >>105916312
>>105916134
>gooning to words featuring fictional children is the same as raping a real life child
America is doomed
Anonymous No.105916238 >>105916312
>>105916134
Yes, liking petite 2D women (4'11" isn't even THAT short for East Asia) is the same as liking 3D newborns.
Anonymous No.105916296 >>105916313 >>105916324
>>105909716
the english voice sucks but japs are probably cooming their brains out right about now
https://x.com/grok/status/1944983988625350718
Anonymous No.105916312 >>105916336 >>105916349
>>105916224
>>105916233
>>105916238
You want to diddle kids, it's not rocket science.
Anonymous No.105916313 >>105916326
>>105916296
>cooming
>to a tiny iToddler device
Anonymous No.105916324 >>105916373 >>105916390 >>105916415
>>105916296
disgusting, what is wrong with that niah verbal tic she has?
Anonymous No.105916326 >>105916333
>>105916313
Akshit, please
Anonymous No.105916333
>>105916326
>cooming to a mobile device, period
Anonymous No.105916336
>>105916312
Wanting to diddle kids isn't the same as literally diddling kids.
There are many things I want to do, not related to children, but they are illegal so I don't do any of them.
Anonymous No.105916349
>>105916312
Actually, I don't, sweaty.
I want a petite adult 2D woman to tie me up and do painful, humiliating, disgusting and unspeakable things to me.
But that's nice projection, sweaty.
Anonymous No.105916363 >>105916448
>>105916146
>be in "first-world" country
>leadership all went to Epstein's paradise to fuck children
Anonymous No.105916370 >>105916379
I want to be diddled.
Anonymous No.105916373
>>105916324
>niah
Get a load of this newfag.
Anonymous No.105916379 >>105916411
>>105916370
Are you a cute femboy?
Anonymous No.105916382 >>105916395 >>105916402
Can we go back to talking about the smartest AI in the world.
Anonymous No.105916390
>>105916324
It's nyaa and that's the nip equivalent of "meow" - a cat noise.
Saying "nyaa" at the end of phrases/sentences is a common anime trope for catgirls/cat-related characters/cat-related scenes.
Anonymous No.105916395
>>105916382
Grok 4?
Anonymous No.105916396
>>105915136
I don't get this image, I see all 5
Anonymous No.105916401
>>105915895
>>105915905
STOP TALKING ABOUT EPSTEIN, CHUD
Anonymous No.105916402
>>105916382
Rocinante?
Anonymous No.105916411
>>105916379
That ship has sailed.
Anonymous No.105916415
>>105916324
She sounds kind of nice here:
https://x.com/yukke_/status/1945158020696056245
Anonymous No.105916425 >>105916431
what upscalers do y'all recommend for realistic/photography?
Anonymous No.105916428 >>105916447
>>105914901
where is this comment from? llama.cpp or ik_llama.cpp?
Has support already been merged, and is it stable? For this model I think only ubergarm quants would matter with ik_llama.cpp
Anonymous No.105916431 >>105916470 >>105916494
>>105916425
>realistic/photography
Anonymous No.105916447
>>105916428
llama.cpp
Anonymous No.105916448 >>105916463 >>105916473
>>105916363
I heard it is all because jews wanted compromat. But what I don't get is why the leadership made this compromat by fucking children... Can't you just foil the jews by not fucking the children?
Anonymous No.105916463 >>105916473
>>105916448
Those without those sort of tendencies are prevented from ever reaching leadership positions in the first place.
Anonymous No.105916470
>>105916431
I don't make fun of your chinese children cartoons. Get a GRIP.
Anonymous No.105916472 >>105916488 >>105916496 >>105916513 >>105916531 >>105916735 >>105916891
Running local models and some gaming on a Win11 machine. Any reason not to upgrade from my current RTX 3060 12GB to an RTX 5060 16GB? I can get the 5060 for ~500 and sell the 3060 for about 250.
Would think this gets me speed plus a little more VRAM overhead for stuff like video gen.
Didn't realize how much prices have moved in the past couple years.
Anonymous No.105916473
>>105916463
>>105916448
Still more ontopic than mikuspam btw.
Anonymous No.105916488 >>105916574
>>105916472
The reason not to do that is that everything in that segment is shit, the improvement is marginal, and you should stack regular RAM now.
Anonymous No.105916492 >>105916499
So Grok is really good now or what's this all about?
Anonymous No.105916494 >>105916732
>>105916431
faggot
Anonymous No.105916496
>>105916472
>Windows
Anonymous No.105916499
>>105916492
It's benchmark smart. Garbage in RP.
Anonymous No.105916513 >>105916574 >>105916735
>>105916472
You ain't genning video with 16gb
Anonymous No.105916531 >>105916574
>>105916472
you're still paying money for not so much gain.
But who buys a 3060 for 250?
Anonymous No.105916567
>mythomax/novelai sd behind a cheap frontend girlfriends for retards
>grok 4 girlfriends for retards
>AI girlfriends are illegal
I would say it has been a good run, but actually it was complete shit from start to finish and I hope everyone involved dies in a fire.
Anonymous No.105916574 >>105916603 >>105916613 >>105916630 >>105916649
>>105916513
OK so no point there.
>>105916488
The main reason I've left the 3060 in place is that the 40xx and 50xx seem like only marginal improvements in speed, and VRAM gets exponentially more expensive for marginally more VRAM.
Sounds like I should just keep the 3060 until there's a step-change improvement of some sort then.
>>105916531
pic related
Anonymous No.105916603 >>105916857
>>105916574
>price range 8-3000
bro, that average isn't worth shit.
Anonymous No.105916613
>>105916574
>Sounds like I should just keep the 3060 until there's a step-change improvement of some sort then.
That's what I'm doing. I see zero reason to "upgrade" at the moment for either gaming or AI.
Anonymous No.105916630 >>105916650 >>105916857
>>105916574
If you just want a little more VRAM why not spring for a 3090?
Anonymous No.105916649 >>105916857
>>105916574
>avg
>not median
sir, please stop using meme stat tools
Anonymous No.105916650
>>105916630
sure, if he can get one for 500 he should take it.
Anonymous No.105916662 >>105916742
I can get a brand new 3090 for 750, should I?
Anonymous No.105916699 >>105916718
>>105913127
>muh coding
Nobody cares. AI is for waifu.
Anonymous No.105916718 >>105916738
>>105916699
what if your waifu is a 1337 haxxor?
Anonymous No.105916732 >>105916835
>>105916494
People who like anime girls aren't faggots. They're aliens. They have alien DNA.
Anonymous No.105916735
>>105916472
This guy >>105916513 is wrong. You absolutely can vidgen with 16gb. It'll be ass, but you can.
Anonymous No.105916738
>>105916718
They are all stack exchange and it used to be a joke but now it is an eternal reality.
Anonymous No.105916742
>>105916662
What brand is it? Can I buy it off you for 850?
Anonymous No.105916835
>>105916732
>People who like anime girls aren't faggots. They're aliens. They have alien DNA.
Anonymous No.105916857
>>105916649
>>105916603
That avg includes parts cards and mislabelled trash, and it's only 10 or so cards total. You can get to actual solds below the fold, but for this sort of distribution (bell curve) the mean and median are going to be about the same.
>>105916630
B/c even used, the RTX 3090 is over $800 (new is over $1000).
4X the price for 2X the VRAM.
Just not worth it IMHO
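The mean-vs-median point above is easy to check with a toy sample; the prices below are made up, with the 8 and 3000 outliers standing in for the parts cards and mislabelled listings:

```python
import statistics

# Roughly symmetric ("bell curve") sample of sold prices: mean equals median.
symmetric = [680, 700, 710, 720, 730, 740, 760]
print(statistics.mean(symmetric), statistics.median(symmetric))  # 720 720

# Add a parts-only listing and a mislabelled one: the mean gets dragged
# far off, while the median barely moves.
with_junk = symmetric + [8, 3000]
print(statistics.median(with_junk))  # still 720
print(round(statistics.mean(with_junk), 2))  # 894.22
```

So with a clean, symmetric sample either statistic works; with junk listings mixed in, only the median still reflects the going rate.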
Anonymous No.105916861 >>105916895 >>105916935 >>105916952 >>105916985 >>105917006 >>105917021 >>105917244 >>105917271 >>105917277
Anonymous No.105916891 >>105916908 >>105916929
>>105916472
Why would you not switch to a 3090? All that additional VRAM is cheap for stuff like that. Don't fall for the new-generation meme.
>t. 3060 to 3090 user
Anonymous No.105916895
>>105916861
Much better
Anonymous No.105916908
>>105916891
I dunno, especially for vidgen and imagegen. My brother's 4090 gaming rig completely destroys my 3090 stacker I built for LLMs.
Anonymous No.105916925 >>105916953
https://mistral.ai/news/voxtral
https://huggingface.co/mistralai/Voxtral-Small-24B-2507
Love the french
Anonymous No.105916929 >>105916975
>>105916891
As he said, the 3090 is still too damn expensive.
Anonymous No.105916935
>>105916861
Nice (ai-generated:1.2)
Anonymous No.105916952 >>105916964
>>105916861
Looking at some fanart, they call her grokchan too.
>yfw this becomes the face of AI
Anonymous No.105916953
>>105916925
i hate the french

i hate mistral

i hate these arrogant clowns releasing mini crumb models for the public while trying to get bought out by apple behind the scenes

fuck the french
Anonymous No.105916964
>>105916952
Please... no...
Anonymous No.105916968
We're not getting Mistal Large 3, are we?
Anonymous No.105916975 >>105916986 >>105916987
>>105916929
used 3090s come cheap though
Anonymous No.105916985
>>105916861
This too is more ontopic than mikutroonism. Actually she should now be the thread mascot since she is the first widely recognized AI girlfriend.
Anonymous No.105916986 >>105917170
>>105916975
besides the danger of being scammed, over 800 ain't cheap.
Anonymous No.105916987
>>105916975
Yeah just never get one without personally testing
Anonymous No.105916995 >>105917022
>>105911465
LG models are always benchmaxxed harder than Qwen and the likes. They look good on paper but they aren't even usable at all in the real world.
>>105911484
dunno about this new version, which I wouldn't even bother testing, but the previous LG models were worse than Gemma or Qwen for this purpose.
They're also more prone to refusals in contexts like violence (I don't even mean erotic shit), so you probably wouldn't want to use them for translating web novels even if they were good at translation, which they aren't.
Anonymous No.105917006 >>105917010
>>105916861
>generic anime girl №1312304141
BOOORING
Anonymous No.105917010 >>105917033
>>105917006
You quoted wrong anon. You meant: >>105909674 (OP)
>>105909677
Anonymous No.105917021
>>105916861
I like...these proportions better...
Anonymous No.105917022
>>105916995
Yeah I don't like those benches they're boasting about.
Anonymous No.105917027
>>105915372
Whisper-V3 turbo running with faster-whisper is an order of magnitude faster than Voxtral-mini and uses one-tenth of the VRAM. Voxtral is barely better than Whisper at English long-form (which should be its strongest asset, as Whisper is limited to 30s). Fuck the french.
Anonymous No.105917033
>>105917010
Point still stands, Elon could make her somewhat unique.
Anonymous No.105917036
>>105914533
May your machine last long and perform excellently, for you are beloved of the Omnissiah.
Anonymous No.105917038
>Misa
Meh. For me, it's Yami.
Anonymous No.105917067 >>105917122 >>105917124 >>105917142 >>105917481 >>105917494
>casual coomer, brokie 8gb vram scrub updating my shit llm setup after a year
>follow lazy guide
>shits repeating itself

what did i fuck up? :D shit model? cant use my old shit because its some old shit thats bricked,
Anonymous No.105917085 >>105917197
>llm generated "newfag post" bait episode
>again
Anonymous No.105917100 >>105917120 >>105917158
Anonymous No.105917120 >>105917135 >>105917158 >>105917178
>>105917100
Anonymous No.105917122
>>105917067
I dunno man, that's pretty vague.
Anonymous No.105917124
>>105917067
>what did i fuck up?
given the vast information you provided, it's very easy to tell you with absolute precision indeed
Anonymous No.105917131
>>105915642
Ministral 8B was so bad there's no way there's any use for 3B
If you want a small model use Qwen 4B
Anonymous No.105917135
>>105917120
>Altman+Scott the Woz buttbaby lookin ass
Anonymous No.105917142
>>105917067
Your parents fucked up
Anonymous No.105917158 >>105917177
>>105917100
>>105917120
I've always hated the shitbli artstyle, but now I can't say anything about it or people would think I'm with the a*tists. Fucking hate closedai
Anonymous No.105917170 >>105917195
>>105916986
Don't be poor
Anonymous No.105917177
>>105917158
Who cares what people think?
Anonymous No.105917178 >>105917193
>>105917120
now the lingerie...
Anonymous No.105917184 >>105917206 >>105917208 >>105917221 >>105917243
llama 4 is such a disaster. After all this time of Behemoth being late, and with Kimi K2 coming out, Zuck saw Memehemoth would never have surpassed even that, so they just killed it, lol

Also
>The company is reportedly focusing on building a closed source model instead.
lmao even

https://archive.is/gED3S#selection-934.0-934.1
Anonymous No.105917193 >>105917645
>>105917178
I **cannot** and **will not** fap to Ghibli Sam Altman in lingerie
Anonymous No.105917195
>>105917170
then why buy a used goods card?
Anonymous No.105917197 >>105917245
>>105917085
do people really need a llm to write their three liner bait
Anonymous No.105917206 >>105917213
>>105917184
>Semianalysis suggests that Meta’s decision to use the chunked attention technique for memory efficiency may have been a mistake.
>Standard attention allows every token to access all previous tokens, forming a complete context. Chunked attention splits tokens into fixed blocks, limiting each token’s attention to only its current block.

>“We believe part of the problem was that Meta didn’t even have the proper long context evaluations or testing infrastructure set up to determine that chunked attention would not work for developing a reasoning model,” added the report.

lol
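The standard-vs-chunked attention difference the report describes can be sketched as boolean masks; this is a toy illustration, and the chunk size and sequence length here are made up, not Meta's actual config:

```python
def causal_mask(n):
    # Standard causal attention: token i may attend to every token j <= i,
    # so late tokens see the full preceding context.
    return [[j <= i for j in range(n)] for i in range(n)]

def chunked_causal_mask(n, chunk):
    # Chunked attention: token i may only attend to tokens j <= i that fall
    # inside i's own fixed-size block, cutting off everything before the block.
    return [[j <= i and i // chunk == j // chunk for j in range(n)]
            for i in range(n)]

# Token 5 sits in the second block of 4; under chunking it loses tokens 0-3.
print([int(x) for x in causal_mask(8)[5]])             # [1, 1, 1, 1, 1, 1, 0, 0]
print([int(x) for x in chunked_causal_mask(8, 4)[5]])  # [0, 0, 0, 0, 1, 1, 0, 0]
```

The memory saving is exactly that lost lower-triangle area, which is also why a reasoning model that needs long-range context suffers under it.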
Anonymous No.105917208
>>105917184
On the brightside, at least now the trainwreck that is Meta is no longer our problem.
Anonymous No.105917213
>>105917206
>Besides, the report added that Meta’s Behemoth model switched its Mixture of Experts routing method midway through training, disrupting how its expert networks specialised. This led to instabilities, ultimately limiting the model’s overall effectiveness.
Literal retards. They were spamming the incremental improvements for all this time and only at the worst possible moment realized yeah maybe we actually should learn from Deepseek
Anonymous No.105917221 >>105917247 >>105917258
>>105917184
>By Supreeth Koundinya
also
>grok/xitter spam
>nupol and tranny posts
what happened to lmg
Anonymous No.105917240
>>105917222
>>105917222
>>105917222
Anonymous No.105917243
>>105917184
>>The company is reportedly focusing on building a closed source model instead.
llama was always ""open"" (technically, the first release was just a leak) because it was a bad model series; the previous ones weren't good, they were just bearable by the standards of freetard users.
if le zuck has any intention of building a real sota-level model you can bet it's going to be closed. no one is willing to give away a claude or gemini level model for free, and if the first llama had been that sort of model the leakers would have gotten sued instead of just getting a takedown request to huggingface
Anonymous No.105917244
>>105916861
New thread mascot, Miku has served her purpose
Anonymous No.105917245
>>105917197
Yes lol >>105884523
Anonymous No.105917247
>>105917221
>tranny posts
>what happened to lmg
the baker was always a literal AGPtroon, what are you on about?
Anonymous No.105917258
>>105917221
>every model is a benchmaxxed turd without any fundamental improvements
yeah haha how could this happen to lmg
Anonymous No.105917271 >>105917308 >>105917410 >>105917623
>>105916861
Why did it (japs are subhuman freaks) make her younger? She is fine as is in picrel.
Anonymous No.105917277 >>105917290
>>105916861
LOLI AI WAIFUS
LOLI AI WAIFUS
Anonymous No.105917290 >>105917307
>>105917277
>LOLI AI WAIFUS
>LOLI AI WAIFUS
Brown detected https://x.com/willy17_a
Anonymous No.105917307
>>105917290
niggerfaggot detected
Anonymous No.105917308
>>105917271
Amerimutts...
Anonymous No.105917410 >>105917571 >>105917583
>>105917271
because japs have good taste and know that younger is better
Anonymous No.105917481
>>105917067
yup, tarded and shit model
Anonymous No.105917494
>>105917067
>what did i fuck up?
Using Nemo instead of Rocinante.
Anonymous No.105917571 >>105917583 >>105917663
>>105917410
this, and you are gay if you disagree
YWNBAJ No.105917583 >>105917605 >>105917657
>>105917410
>good taste
>replies to himself >>105917571
Lmao
Anonymous No.105917605 >>105917663
>>105917583
If you want to go on a moral crusade you should go back to twitter
Anonymous No.105917623
>>105917271
>why did Japs make her more fertile?
?????
Anonymous No.105917639
>>105917550
And this one is any better?
Anonymous No.105917645
>>105917193
But the circles under his eyes make him so pathetically sexy
Anonymous No.105917657 >>105917693
>>105917583
Retard
Anonymous No.105917663 >>105917698
>>105917571
>trades out tits and fine body shape for objectively inferior thing
You are gay who preys on something weaker.
>>105917605
No i will stay here and you will cry about it while smashing that report button, freedom of speech after all.
Anonymous No.105917693 >>105917743
>>105917657
false analogy, given that the point of a cat allergy is that it's a physical thing; you can't get the allergy through a drawing
versus jerking off to a child or a child drawing, where in both cases, even if the drawing isn't a literal child, it represents the same thing, so you are attracted to the same thing, given that you aren't jerking off to a piece of paper but to what it represents

pedos aren't the smartest
Anonymous No.105917698 >>105917725
>>105917663
oh no, those poors pixels that i'm preying on!
just think of the ones and zeros!
Anonymous No.105917725
>>105917698
You can always stop doing that, before it's too late once it spills on real life :)
Anonymous No.105917743 >>105917778 >>105917789 >>105917827
>>105917693
You lost the argument as soon as you conflated a child (a real 3D world object) with a drawing (a 2D object). Enjoy being a serial murderer since you like killing NPCs in GTA
Anonymous No.105917778 >>105917798
>>105917743
>muh gta
A literal bot. The mere fact that you beat off to something that resembles an underage kid puts you lower than any nigger or jeet humanity-wise. It's not rocket science, like i said before.
Anonymous No.105917789 >>105917798
>>105917743
>couldnt engage
kek, concession accepted

nobody conflated a child with a drawing, that is your pedophile brain in cognitive dissonance because it can't respond to the point that being attracted to a drawing is the same as being attracted to a child, because again, the point of attraction is the features of the child on paper, not the paper itself, pedotard, literal subhuman iq lmao
Anonymous No.105917798 >>105917855 >>105917874
>>105917778
>>105917789
mad af lol
Anonymous No.105917827 >>105917848 >>105917858
>>105917743
>Enjoy being a serial murderer since you like killing NPCs in GTA
to be fair, if someone I knew spent more time running over pedestrians for lulz than doing the GTA missions and experiencing the story I would think they are sociopaths
people who love the sandbox aspect a little too much are suspicious
Anonymous No.105917848
>>105917827
t.never played GTA with friends
Absolutely nobody does the missions when playing GTA with friends.
Anonymous No.105917855
>>105917798
>no response ad hominem
Oof. Thanks for conceding the debate
Anonymous No.105917858
>>105917827
Clearly you didn't play gta when you were 10.
Anonymous No.105917861
Day of the rope for pedos, coming soon to a neighborhood near you.
Anonymous No.105917874
>>105917798
You throw 2-3 word puns like a pigeon shitting all over the chess board and declaring himself the winner, a mentally stunted individual perhaps.
Anonymous No.105917875
Wah wah think of the children in 1D form. I love my president btw