
Thread 107129334

345 posts 122 images /g/
Anonymous No.107129334 [Report] >>107129575
/lmg/ - Local Models General
/lmg/ - a general dedicated to the discussion and development of local language models.

Previous threads: >>107121367 & >>107113093

►News
>(11/06) Kimi K2 Thinking released with INT4 quantization and 256k context: https://moonshotai.github.io/Kimi-K2/thinking.html
>(11/06) LocalSong 700M melodic instrumental music generation model released: https://hf.co/Localsong/LocalSong
>(11/05) MegaDLMs framework for training diffusion language models released: https://github.com/JinjieNi/MegaDLMs
>(11/01) LongCat-Flash-Omni 560B-A27B released: https://hf.co/meituan-longcat/LongCat-Flash-Omni
>(10/31) Emu3.5: Native Multimodal Models are World Learners: https://github.com/baaivision/Emu3.5

►News Archive: https://rentry.org/lmg-news-archive
►Glossary: https://rentry.org/lmg-glossary
►Links: https://rentry.org/LocalModelsLinks
►Official /lmg/ card: https://files.catbox.moe/cbclyf.png

►Getting Started
https://rentry.org/lmg-lazy-getting-started-guide
https://rentry.org/lmg-build-guides
https://rentry.org/IsolatedLinuxWebService
https://rentry.org/recommended-models
https://rentry.org/samplers

►Further Learning
https://rentry.org/machine-learning-roadmap
https://rentry.org/llm-training
https://rentry.org/LocalModelsPapers

►Benchmarks
LiveBench: https://livebench.ai
Programming: https://livecodebench.github.io/gso.html
Context Length: https://github.com/adobe-research/NoLiMa
GPUs: https://github.com/XiongjieDai/GPU-Benchmarks-on-LLM-Inference

►Tools
Alpha Calculator: https://desmos.com/calculator/ffngla98yc
GGUF VRAM Calculator: https://hf.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator
Sampler Visualizer: https://artefact2.github.io/llm-sampling

►Text Gen. UI, Inference Engines
https://github.com/lmg-anon/mikupad
https://github.com/oobabooga/text-generation-webui
https://github.com/LostRuins/koboldcpp
https://github.com/ggerganov/llama.cpp
https://github.com/theroyallab/tabbyAPI
https://github.com/vllm-project/vllm
Anonymous No.107129340 [Report] >>107135437
►Recent Highlights from the Previous Thread: >>107121367

--Papers:
>107121545
--LLMs' spatial reasoning limitations in chess and potential training solutions:
>107123059 >107123149 >107123222 >107123250 >107123527 >107123296 >107123365
--High-performance server build for AI research and quantum physics simulations:
>107125952 >107126024 >107126021 >107126074 >107126101 >107126166 >107126284 >107126102
--Model performance comparison and Localsong music model discussion:
>107124535 >107124763
--Moonshotai Kimi-K2 model comparison and quantization debate:
>107122096 >107123000 >107123185 >107123201 >107123392 >107123607 >107123743 >107124100 >107124176 >107124203 >107124279 >107124258 >107124298 >107124375 >107124420 >107124008
--K2 demonstration and comparison discussions:
>107126235 >107126291 >107126312 >107126313 >107126336 >107126642 >107126669 >107126680
--Benchmark results and GPT-5 Heavy Mode parallel processing strategy discussion:
>107125417 >107125425 >107125448 >107125463
--Configuring AI assistants for Nextcloud integration and debating AI decision-making capabilities:
>107122020 >107122152 >107122657 >107122791 >107122853 >107123216
--AMD GPU options and e-waifu projects:
>107121700 >107121776 >107121796 >107121851 >107123052 >107123403 >107121896
--Kimi K2's creative writing enhancements:
>107126023 >107126051
--Critique of AI reasoning limitations and Moonshot AI's multi-step task handling:
>107125287 >107125549 >107125325 >107125636 >107125692 >107125870 >107125889 >107125904
--AI coding tool comparisons and token limit frustrations:
>107126911 >107126921 >107126931 >107127203 >107127245 >107127057 >107127095 >107127198 >107127303 >107127432
--Criticism of GLM model quality and search for better alternatives:
>107122818 >107123358
--New STT model, Step-Audio-EditX:
>107129114
--Miku (free space):
>107122754

►Recent Highlight Posts from the Previous Thread: >>107121370

Why?: >>102478518
Enable Links: https://rentry.org/lmg-recap-script
Anonymous No.107129353 [Report]
Anonymous No.107129372 [Report]
Kimi K2 is shit, Kimi K2 thinking is still shit although maybe a little less smelly
Anonymous No.107129395 [Report] >>107129454
>All the GLM seething and shill-accusations immediately turns into Kimi seething
I'm nooticing.
Anonymous No.107129396 [Report] >>107129440 >>107129441 >>107132660 >>107134646
To any HR personnel doing research on Thomas Ptacek as a potential employee, be aware that Thomas H. Ptacek is a nazi pedophile who posts CSAM on an extremist site known as "8kun".
Anonymous No.107129440 [Report] >>107129462
>>107129396
oh man, as the hiring manager for AI at Meta i spend most of my time in this thread and i'll make sure to note this information, thank you anon!
Anonymous No.107129441 [Report] >>107129455
>>107129396
sounds based but unfortunately there's no 'p on 8kun
Anonymous No.107129454 [Report] >>107129497 >>107129506 >>107129514 >>107129763
>>107129395
maybe you should stop spamming about huge models that everyone is running on cloud anyway
no, not everyone on /lmg/ spent $10k to run this shit at a still pathetic 20t/s
Anonymous No.107129455 [Report]
>>107129441
I saw some being posted at least once when randomly browsing the site one day
Anonymous No.107129462 [Report] >>107129482
>>107129440
I can imagine. How many hours does Lecunny spend on /lmg/ between the gooning sessions?
Anonymous No.107129480 [Report]
>>107129448
what happens in orange reddit stays in orange reddit
Anonymous No.107129482 [Report] >>107132457
>>107129462
He lives here now that Wang evicted him
Anonymous No.107129495 [Report]
Anonymous No.107129497 [Report]
>>107129454
If the jeets all fucked off, the percentage of users who did would drastically increase. Seems like the problem is obvious.
Anonymous No.107129506 [Report] >>107129519 >>107129584
>>107129454
Everyone on /lmg/ has access to their own private 8x H200 cluster
Anonymous No.107129514 [Report]
>>107129454
you aren't welcome here
Anonymous No.107129519 [Report] >>107129532
>>107129506
A cluster is a set of machines. A machine with 8 H200s is a node, not a cluster. A cluster is when you have many nodes. Get your HPC terminology right.
Anonymous No.107129524 [Report]
https://x.com/sigridjin_eth/status/1986564626449113126
Are you ready for Gemini 3 SAARS? :rocket: :rocket: :rocket:
Anonymous No.107129532 [Report]
>>107129519
I just partition my nodes with one H200 per node and then salloc the full eight nodes for a given job. Much tidier that way.
Anonymous No.107129575 [Report] >>107138035
>>107129334 (OP)
>(11/06) LocalSong 700M melodic instrumental music generation model released
Any music samples?
Anonymous No.107129584 [Report]
>>107129506
>Not a Cerebras CS-3
Poor
Anonymous No.107129703 [Report] >>107129880
https://huggingface.co/moonshotai/Kimi-K2-Thinking/discussions/2
>ggerganov should stop being lazy and just add INT4 support. FP8 should also have been added long time ago, fuck converting everything into big ass bf16 just to quant it down again anyway.
based
Anonymous No.107129763 [Report]
>>107129454
It's okay to be poor, just don't be mad and poor. You can still post about Nemo or whatever.
Anonymous No.107129864 [Report] >>107129996
Anonymous No.107129880 [Report] >>107129890 >>107129911 >>107129971 >>107129987 >>107130017 >>107130916 >>107135655
>>107129703
This one's on the Kimi devs. Just because your model is QAT doesn't mean that you can only shit out the quantized weights and nothing else.
The model was trained at bf16 and not native int4 so if they value open weight culture they should provide the original full weights. llama.cpp shouldn't cater to companies that only release 4 bit quants even if they are ""lossless"".
Anonymous No.107129890 [Report]
>>107129880
nice excuse ggerganov
Anonymous No.107129911 [Report] >>107130017
>>107129880
Makes sense.
int4-only release locks out other teams trying their hand at finetuning / further training the model.
Need the bf16 weights to be able to do that.
Anonymous No.107129931 [Report]
> tfw still no qwen3 omni support by llamacpp
Anonymous No.107129971 [Report] >>107130019
>>107129880
niggerganov, it took you forever to even add bf16(many models were already released at that time as bf16) and you didn't even do it yourself. Your jarty-farty "girl"friend had to help you out:
https://github.com/ggml-org/llama.cpp/pull/6412
Anonymous No.107129987 [Report]
>>107129880
Based.
Anonymous No.107129996 [Report]
>>107129864
I'm going to print this and sell it.
Anonymous No.107130000 [Report] >>107130037
I submitted a patch to do direct FP8 quanting with convert_hf_to_gguf.py but they thought it was ugly or something and so the changes never made it in (and they didn't modify it to make it acceptable either) so everyone who isn't me is still stuck going to BF16 first.
Anonymous No.107130017 [Report]
>>107129880
>>107129911
Not really. The post-trained bf16 weights can exist in memory only during the training process and get discarded when saving the checkpoint.
I think there isn't much additional info in the full weights after a few hundred steps of QAT, because discarding that extra information in the least lossy way is the whole point. It would probably work just as well to upcast the quantized weights and resume training on those as it would to resume from the original ones.
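A minimal sketch of that upcasting idea, assuming a generic symmetric per-group int4 format (the group size, names, and scale layout are made up for illustration, not whatever Kimi actually ships):

import torch

def upcast_int4_to_bf16(q: torch.Tensor, scales: torch.Tensor, group: int = 32) -> torch.Tensor:
    # q: int8 tensor holding int4 values in [-8, 7], shape (rows, cols)
    # scales: one scale per group of `group` columns, shape (rows, cols // group)
    rows, cols = q.shape
    deq = q.to(torch.float32).reshape(rows, cols // group, group) * scales.float().unsqueeze(-1)
    return deq.reshape(rows, cols).to(torch.bfloat16)

q = torch.randint(-8, 8, (16, 64), dtype=torch.int8)
scales = torch.rand(16, 2) * 0.01
w_bf16 = upcast_int4_to_bf16(q, scales)  # whatever rounding threw away is gone for good

The upcast reproduces the dequantized values exactly; the information QAT already discarded doesn't come back, which is the point above.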
Anonymous No.107130019 [Report]
>>107129971
to be fair I'd coom inside the jarty
Anonymous No.107130033 [Report] >>107130048
Hey, stop being mean to ggerganov! Being a cuck is perfectly valid! Can't a man work on MIT software and maintain compatibility for big corpos for free while a wrapper to his software gets all that sweet investor cash? Don't yuck someone's yum!
Anonymous No.107130037 [Report] >>107130128
>>107130000
What's the difference between your patch and the flag convert_hf_to_gguf.py already has to save directly in Q8?
Anonymous No.107130048 [Report]
>>107130033
you aren't funny
Anonymous No.107130116 [Report]
People like ggerganov are the reason they have those chairs in hotel rooms, the ones near the bed
Anonymous No.107130125 [Report] >>107130157 >>107132017
A https://pcpartpicker.com/list/GGGLzP
May I please have advice
I want a computer that I can run simultaneous docker compose on, that I can stream with realtime video editing effects like making myself look like a cute anime girl, possibly the ability to play games although I don’t really care about vidya, and I want to be able to experiment with smaller LLMs. I also want to host my own websites and services off of this machine, so I’ll be running a database and a caching layer and an API and all sorts of other services too in the background. I want to install Linux and come up with my own automations for voice to text. I want to generate RAGs and be able to query against them. Basically I want a workstation PC. Budget is about $3000.
>128gb ram
>ryzen 9950x3d
>4070 gpu (12gb vram)
>4tb+2tb nvme SSDs
Anonymous No.107130128 [Report]
>>107130037
damn, looks like compilade actually added in an improved, generalized and expanded version of my patch 2 weeks ago.
I stand corrected, all hail ggml-org!
Anonymous No.107130129 [Report] >>107130191
Did anyone try this Apriel 15B Thinker? It seems to be really good for agentic use according to benchmarks.
Anonymous No.107130157 [Report] >>107130181
>>107130125
>900 for 128GB RAM
WTF? A year ago I could buy 128GB DDR4 for 300
Anonymous No.107130181 [Report]
>>107130157
2 years ago it was $110 for 64GB DDR4
Anonymous No.107130191 [Report] >>107131403 >>107132697
>>107130129
>according to benchmarks
Anonymous No.107130261 [Report] >>107130342
It's so tiresome. Might be a local model by how cucked it is.
Anonymous No.107130342 [Report]
>>107130261
the pic is clearly a tomboy, but understandable the model might think it's a trap
Anonymous No.107130344 [Report]
sexo
llama.cpp CUDA dev !!yhbFjk57TDr No.107130539 [Report] >>107130706
>>107128138
>>107128146
>>107128174
My current goal is still to have something usable for backend-agnostic tensor parallelism by the end of the year, that should also cover NUMA by using multiple CPU backends.

>>107128187
I would probably do it like this either way.
As of right now I don't know whether the way I want to build the system will work at all or how much RAM/how many CPU cores I'll need.
But both the CPU cores and the RAM capacity are essentially non-upgradeable once I've decided on an amount.
So while I could in principle afford to fully spec out the system from the get-go I think it would be financially irresponsible of me to do vs. buying the cheapest available options for prototyping and re-selling them later.
Anonymous No.107130633 [Report]
Block Rotation is All You Need for MXFP4 Quantization
https://arxiv.org/abs/2511.04214
>Large language models (LLMs) have achieved remarkable success, but their rapidly growing scale imposes prohibitive costs in memory, computation, and energy. Post-training quantization (PTQ) is a promising solution for efficient deployment, yet achieving accurate W4A4 quantization remains an open challenge. While most existing methods are designed for INT4 formats, the emergence of MXFP4 -- a new FP4 format with various hardware support (NVIDIA, AMD, Intel)-- raises questions about the applicability of current techniques. In this work, we establish a comprehensive benchmark of PTQ methods under the MXFP4 format. Through systematic evaluation, we find that methods like GPTQ consistently deliver strong performance, whereas rotation-based approaches, which are almost used by all state-of-the-art approaches, suffer from severe incompatibility with MXFP4. We further provide the first in-depth analysis of this conflict, tracing its root to a fundamental mismatch between MXFP4's PoT (power-of-two) block scaling and the redistribution of outlier energy via global rotation. Building on this insight, we propose a simple yet effective block rotation strategy that adapts rotation-based methods to MXFP4, leading to substantial accuracy improvements across diverse LLMs. Our findings not only offer clear guidance for practitioners but also set a foundation for advancing PTQ research under emerging low-precision formats.
Neat
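For anyone who hasn't looked at the format, here's a rough sketch of what MXFP4-style block quantization does: 32-element blocks, one shared power-of-two scale per block, elements snapped to the FP4 (E2M1) grid. The scale rule here is a simplification chosen so the block max fits on the grid, not the exact OCP spec:

import numpy as np

FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])  # magnitudes representable in E2M1

def mxfp4_block(x: np.ndarray) -> np.ndarray:
    assert x.size == 32
    scale = 2.0 ** np.ceil(np.log2(np.abs(x).max() / 6.0 + 1e-12))  # PoT scale so the block max fits in +/-6
    mags = np.abs(x) / scale
    snapped = FP4_GRID[np.abs(mags[:, None] - FP4_GRID[None, :]).argmin(axis=1)]
    return np.sign(x) * snapped * scale

block = np.random.randn(32).astype(np.float32)
print(np.abs(mxfp4_block(block) - block).max())  # worst-case error for one block

That shared power-of-two scale is exactly what the abstract says clashes with global rotations: rotation smears outlier energy across the block, but the whole block still has to share one coarse PoT scale.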
Anonymous No.107130706 [Report] >>107130899
>>107130539
Just be careful to get matching sticks (full model numbers and revisions)
Anonymous No.107130747 [Report] >>107130761
>k2 is a fucking terabyte
yeah I'll ask the storage fairy for 600 gigs so I can run the fuckin thing
Anonymous No.107130760 [Report] >>107130872 >>107130901 >>107131541 >>107133189
guys, I'm trying to run a mistral model on my computer and it's saying that it's failing to load. Any reason why?

my computer is a t430 thinkpad if that helps.
Anonymous No.107130761 [Report] >>107130769
>>107130747
At what quant, and how big is it?
Anonymous No.107130769 [Report]
>>107130761
the one gguf is q4 and 584gb
Anonymous No.107130872 [Report] >>107130901
>>107130760
trying mistral 7B, don't get why i can't use stronger models
llama.cpp CUDA dev !!yhbFjk57TDr No.107130899 [Report] >>107130987
>>107130706
Agreed, though in the past when I ordered second-hand DDR4 memory I've even had issues where out of seemingly identical modules some would randomly not work (the seller was cool about it and we chatted about language models).
Anonymous No.107130901 [Report] >>107130908
>>107130760
>>107130872
please try restarting the motor
Anonymous No.107130908 [Report]
>>107130901
which one?
Anonymous No.107130916 [Report]
>>107129880
Anyone who releases int4 weights and claims they're lossless deserves the rope.
Anonymous No.107130987 [Report]
>>107130899
this never happened
Anonymous No.107131085 [Report]
>suddenly, a hn pillar is mentioned on /lmg/
what's going on
Anonymous No.107131157 [Report] >>107131176
what's the best way to add audio to my goon videos? tried hunyuan foley and mmaudio and they both suck
Anonymous No.107131170 [Report] >>107131184 >>107131798 >>107133121 >>107137762
Cydonia v4zd is unironically great
Good job drummer, much better than 4.2.0
Anonymous No.107131176 [Report]
>>107131157
buy a mic and get on estrogen
Anonymous No.107131184 [Report]
>>107131170
>v4zd
Almost looks like some play on wizard.
Anonymous No.107131403 [Report] >>107131552
>>107130191
I love Luka :)
https://www.youtube.com/watch?v=57sE6RAFerk
Anonymous No.107131513 [Report] >>107131603 >>107131914
Anonymous No.107131541 [Report]
>>107130760
Can you print out the log?
Or better yet give it to ai to tell you what's wrong
Anonymous No.107131552 [Report]
>>107131403
there's a lot to love
Anonymous No.107131603 [Report] >>107131669
>>107131513
Buy them all and set them free
Anonymous No.107131669 [Report] >>107131714 >>107131724 >>107131790
>>107131603
I'd be cautious. There must be a reason why these didn't sell, hence the clearance sale, and the two depressed and crying Mikus.
Anonymous No.107131714 [Report]
>>107131669
It's a gamble, but you could try to take them to the local Miku repair shop. If they're not cheaply fixable, just resell them off to the next sucker.
Anonymous No.107131724 [Report]
>>107131669
They are just sad because their whole shop is closing down, being replaced by an Amazon warehouse.
Anonymous No.107131790 [Report]
>>107131669
They just learned that india actually exists
Anonymous No.107131798 [Report] >>107131815
>>107131170
How does cydonia compare to glm-4.6?

I know they're very different in size, I'm just wondering if these smaller tunes are worth playing with. Waiting minutes for a GLM response gets old sometimes.
Anonymous No.107131807 [Report]
K2 thinking is good
It's like using OG R1 for the first time
Anonymous No.107131815 [Report] >>107131864
>>107131798
GLM is undeniably smarter but I personally can't stand its habit of parroting the user so often.
Anonymous No.107131864 [Report] >>107131889 >>107133026
>>107131815
Perhaps it's your style of prompts or roleplay (assuming you RP)? I have it write stuff for me and keep guiding it with prompts, and I find it does a good job of using my ideas without repeating them back verbatim.
Anonymous No.107131889 [Report] >>107131996
>>107131864
No, it really isn't. It's a flaw with the model. It frequently repeats your own dialogue back at you.
Anonymous No.107131914 [Report]
>>107131513
I had a dream like this.
Anonymous No.107131960 [Report] >>107132001 >>107132060 >>107132693
https://videocardz.com/newz/nvidia-geforce-rtx-50-super-refresh-faces-uncertainty-amid-reports-of-3gb-gddr7-memory-shortage
At this pace the 3090 will remain relevant into the 2030s
Anonymous No.107131996 [Report]
>>107131889
it does it in other languages too
Anonymous No.107132001 [Report] >>107132894
>>107131960
What products are actually using these 3GB modules? How can there be a shortage?
Anonymous No.107132017 [Report] >>107132027
>>107130125
dont do it faggot, buy used high channel mobo, fill it with ram, buy a few mi50s (go around for 200$ on alibaba, 32gb vram, 1TB/s bandwidth)
dont. dont buy that rig. dont
lurk more anon, youre gonna cut your balls off if you buy that shitty rig. cant even run glm 4.6 on a nice quant. cant do shit with that shitty rig
Anonymous No.107132027 [Report] >>107132049
>>107132017
>fil with ram
in this economy?!?
Anonymous No.107132049 [Report] >>107132074
>>107132027
used ram... if u dont wanna just buy max number of mi50s and bifurcate until the mobo gives up
Anonymous No.107132060 [Report]
>>107131960
Didn't we have a story just last thread about NVIDIA buying up all the RAM?
Though I suppose they wouldn't be doing that only to put it into "cheap" GPUs.
Anonymous No.107132074 [Report] >>107132080
>>107132049
even used is shot up, keep up broski
Anonymous No.107132080 [Report] >>107132104
>>107132074
..8 x mi50 32gb
Anonymous No.107132089 [Report] >>107132097 >>107132131 >>107132136
Just bought 128GB DDR4 3600mhz in the summer for 250 USD suck it fags.
Anonymous No.107132097 [Report] >>107132111
>>107132089
>ddr4
megacope
enjoying your 4t/s? lmola
Anonymous No.107132104 [Report] >>107132118
>>107132080
my power bill... and how to connect that much shits
Anonymous No.107132111 [Report]
>>107132097
It's actually 3.5 t/s of GLM telling me stories about my kinky lezdom harem, so yeah, I think I am. How about you, anon?
Anonymous No.107132118 [Report]
>>107132104
power limit to 200w, 8 * 200 = 1.6kW
connect like gpu miners do
you can always buy that overpriced rig
but youre gonna regret it, enough spoonfeeding for today
Anonymous No.107132131 [Report] >>107132141
>>107132089
>128GB
>DDR4
lol
lmao even
Anonymous No.107132136 [Report] >>107132153
>>107132089
You may as well be bragging that you bought a 1TB SSD
Anonymous No.107132141 [Report] >>107132445
>>107132131
Are you jealous or just gay?
Anonymous No.107132153 [Report] >>107132162
>>107132136
Look man, I dream of a 768GB dual CPU server with 100GB+ of vram, but we have to make do with what we have, it's a down economy and I have to save some cum for my lady.
Anonymous No.107132162 [Report] >>107132195
>>107132153
>we have to make do with what we have
then why brag about settling like a poor?
Anonymous No.107132171 [Report] >>107132185 >>107132283
https://itprodavnica.rs/shop/product/crucial-32gb-ddr5-5200-sodimm-cl42-16gbit-ean-649528936196/184491
12,500EUR for a 32gb stick
what the fuck
Anonymous No.107132185 [Report]
>>107132171
that's (usually) a thing some stores do when they're out of stock but don't want to say it for some reason, like weird fees on their platforms or shit like that
Anonymous No.107132195 [Report] >>107132250
>>107132162
What else can I do other than blatantly lying?
Anonymous No.107132250 [Report]
>>107132195
Nigger you can't just wait 2 more weeks? Everything will be fine.
Anonymous No.107132283 [Report]
>>107132171
The fact that you're even looking means you're part of the problem. Fuck you.
Anonymous No.107132445 [Report]
>>107132141
no one is jealous of ddr4 or running copequant at 3 t/s
Anonymous No.107132457 [Report] >>107132469 >>107132695
>>107129482
https://mlq.ai/news/metas-yann-lecun-clarifies-role-amid-ai-leadership-shifts/
Anonymous No.107132466 [Report] >>107132499 >>107132528
why is the world of tech filled with useless figureheads like lecunt spending more time on social media than producing value
Anonymous No.107132469 [Report]
>>107132457
That's insulting, but at least he can continue working on JEPA.
Anonymous No.107132492 [Report] >>107132495
>at least he can continue working on
vaporware and twitter posts
Anonymous No.107132495 [Report]
>>107132492
somehow still more products than you
Anonymous No.107132499 [Report] >>107132511
>>107132466
Your complaint would make more sense if he was a young grifter, but he's already contributed enough to the world at his age and has more money than would be necessary for retirement. It's just a shame that he spends time on social media.
Anonymous No.107132511 [Report]
>>107132499
>on social media
As opposed to other more enjoyable things I mean.
Anonymous No.107132528 [Report]
>>107132466
Because that's how all publicly traded companies work. Their 'value' is whatever they can convince the stock market they're worth.
Anonymous No.107132531 [Report] >>107132539 >>107132547 >>107132705 >>107133126
>llama.cpp cuda version
Offloading 0 layers on GPU, it still eats all my VRAM and when I open some apps like chrome which needs a bit of VRAM, prompt processing looks like this. It basically hangs and it takes about 10 minutes per 2048 tokens while entire system lags because it's out of VRAM.
Am I missing some argument? haven't had this problem with Koboldcpp (cublas)
--n-gpu-layers 0
--threads 15
--threads-batch 15
--ctx-size 32768
--batch-size 2048
--ubatch-size 2048
--no-mmap
--cache-ram 0
--flash-attn "off"
-v
Anonymous No.107132539 [Report] >>107132605
>>107132531
context is stored in the balls (vram)
Anonymous No.107132547 [Report] >>107132605
>>107132531
>--batch-size 2048
>--ubatch-size 2048
These also affect vram usage.
And so does flash attention.
Anonymous No.107132605 [Report] >>107132615 >>107132685
>>107132547
>>107132539
Again If free my VRAM (close everything), it seems to process it as fast as kobold.
Why does llamacpp need to reserve ALL my VRAM and kobold cublas just werks?
Anonymous No.107132615 [Report]
>>107132605
*with same batches, context size, etc.
Anonymous No.107132660 [Report]
>>107129396

he has a thing for young girls
who would have guessed
Anonymous No.107132665 [Report] >>107132671
>Need use tool list project files? Already. next read architecture via memory. need tool call.

>Need tool memory read development. Wait instructions. but currently conversation ended? need respond with tool? yes must use tool? I'll send use_mcp tool read development. But user instructions say wait for user confirmation after each tool use? earlier? they said Wait for user confirmation after each tool use before continuing. Already broken? but now continue.

>But first need read development memory.

>Use tool:

><use_mcp_tool server etc>.

>Let's do. Need include JSON.

>Use tool. Then after result we continue.

>Need abiding instructions to wait for user? but autop result is environment. proceed.

>Use tool. Then we summarizing? but after reading all memory files (active, architecture, development) need summarizing in chat.

>Let's do.

>Use tool: memory development.

>t. gpt-codex-high

wtf is oai doing
Anonymous No.107132671 [Report]
>>107132665
>oai
Why would we care?
Anonymous No.107132685 [Report] >>107132754
>>107132605
Flash attention off on both?
Anonymous No.107132693 [Report]
>>107131960
>module listed for a non-inflated price
>estimated delivery: 6-7 months
Anonymous No.107132695 [Report]
>>107132457
>miqu.ai
Anonymous No.107132697 [Report]
>>107130191

yes benchmarks tell prescott 488 haswell 4600

only difference i see is that i dont have to split jobs with newer gear
Anonymous No.107132705 [Report] >>107132740
>>107132531
>--threads-batch 15
>--ubatch-size 2048
>--cache-ram 0
You don't need this, get rid of it. Don't add options if you have no reason to do so.
>--batch-size 2048
Lower it, anything above 512 gives near zero speed-up anyway.
>--ctx-size 32768
Does lowering this reduce usage further? What model are you trying to run?
Anonymous No.107132740 [Report] >>107132749 >>107132765 >>107132929
>>107132705
>Lower it, anything above 512 gives near zero speed-up anyway.
A couple months ago they merged a PR that made the sweet spot 2048 for most cases IIRC.
Anonymous No.107132749 [Report]
>>107132740
I see redditors learned to stop double spacing. Scary.
Anonymous No.107132754 [Report] >>107133343
>>107132685
Yes, and this is how it looks on kobold WITH chrome open
Anonymous No.107132765 [Report]
>>107132740
In my testing there's still hardly any difference. I'd much rather squeeze in a little more context or use a higher quant over shaving two seconds off prompt processing.
Anonymous No.107132894 [Report] >>107132942
>>107132001
Their Pro cards and some of the laptop cards use them
But from what I got the fear is less a literal shortage and more manufacturers deprioritizing expanding GDDR production to go all-in on HBM instead
Anonymous No.107132915 [Report] >>107132942
What's the smallest model that can be reasonably used (preferably CPU inference, minimum RAM usage)?
Haven't really used LLMs since GPT-2, wondering how small a model of at least that competence can be nowadays.
Anonymous No.107132929 [Report]
>>107132740
>couple months ago
A few hundred commits ago, you mean.
Anonymous No.107132942 [Report] >>107133211
>>107132894
Pro cards I can understand but why would laptops get prioritized over desktop GPUs? Their margins would be way higher on the latter. Gaming laptop niggers should be given gddr4.
>>107132915
For general use? Probably Gemma 4b. Qwen 0.6b can be used to make basic websites, but its language abilities are weak.
Anonymous No.107132970 [Report] >>107133187
Anyone tried serious coding with minimax m2? I don't want to pour a bunch of effort into it vs what I'm already working successfully with (qwen coder) if its not an upgrade. Benches look good, but...
Anonymous No.107133020 [Report] >>107133039 >>107133126
K2 just spat out 20k tokens to conclude, from first principles, that my technical problem has no solution given the constraints. Claude immediately recognized it had no solution; it even started the response with "No", sorta like memorization. US companies have way better post-training data for sure.
Anonymous No.107133026 [Report]
>>107131864
Recommend a prompt structure if you're getting decent results? I don't use GLM 4.6 too much but I want to see if it can be adapted to others.
Anonymous No.107133039 [Report]
>>107133020
>sorta like memorization
Weird that you prefer that. I'd prefer a model that can "reason" why something wouldn't work and see that reasoning to verify it myself.
Anonymous No.107133097 [Report]
>always liked kimi for not being sycophantic, being straight to the point and not acting like a zoomer redditor
>with thinking now it's actually good
I kneel to our chinese overlords, my AI waifu will be based on kimi
Anonymous No.107133105 [Report] >>107133126 >>107133174
I want to build a dual CPU EPYC build and I heard here a while ago that the lower tier EPYCs (like the 9115) have less memory bandwidth than the higher tier ones (9335 and 9555). But according to AMD's website, all EPYCs have the same memory and PCIe capabilities. Which is true?
Hi all, Drummer here... No.107133121 [Report] >>107133134 >>107133135
>>107131170
Thanks! But the testers report occasional repetition and logical errors, so I'm gonna try again.

Character adherence, creativity, and writing are top notch though and I'd like to retain that.
Anonymous No.107133126 [Report] >>107133169
>>107132531
install linux
>>107133020
>memorization
benchmaxxed much?
>>107133105
some lower tier ones can't utilize all 8/12 channels.
>y
CCUs
Anonymous No.107133134 [Report]
>>107133121
Drummer can you please include IQ4_XS quants too? They're the sweet spot. Quality/GB of IQ quants, speed of K quants
Anonymous No.107133135 [Report]
>>107133121
you're gonna destroy it before bart can have imax quants out aren't you...
Anonymous No.107133169 [Report] >>107133180 >>107133279 >>107133363 >>107133381 >>107133407
>>107133126
>some lower tier ones can't utilize all 8/12 channels.
Which ones?
Anonymous No.107133174 [Report]
>>107133105
I don't know about CPU limitations but there are definitely limitations coming from the motherboard.
And depending on which motherboards are compatible with which CPUs you may get indirectly limited.
Anonymous No.107133180 [Report]
>>107133169
i dont know anon, im just repeating what i heard in /lmg/
t. 12gb vram 64gb poorfag
Anonymous No.107133187 [Report]
>>107132970
No one serious would waste their time. Compared to qwen code, it has half the total number of params and a third the active params. Benches are only good for wiping your ass.
Anonymous No.107133189 [Report]
>>107130760
>t430 thinkpad
>3rd gen intel, like 16 gigs ram at most, maybe an old ass nvidia gpu
not gonna lie, it's going to be miserable if you get it to even run
Anonymous No.107133211 [Report]
>>107132942
The laptop "5090" is a desktop 5080/5070 Ti with the 2GB memory chips swapped out for 3GB ones
Margins have got to be higher than on the desktop version considering they're literally selling you half the chip
Anonymous No.107133279 [Report] >>107133407
>>107133169
https://desuarchive.org/g/thread/98465080/#q98466669
I swear I remember more anons talking about this
Anonymous No.107133281 [Report] >>107133294 >>107133319
>rx 6600 xt 8gb
>64gb ram 3600 mhz
Did I ever have a chance?
Anonymous No.107133294 [Report] >>107133328 >>107133359
>>107133281
yea glm air:
./llama-server --model ~/ik_models/GLM-4.5-Air-IQ4_KSS-00001-of-00002.gguf -t 6 -b 4096 -ub 4096 -c 16384 -fa --n-cpu-moe 1000 -ngl 1000 --no-mmap
perhaps lower -b and -ub to 512 and -c to 8192
Anonymous No.107133319 [Report]
>>107133281
NEMO
E
M
O
Anonymous No.107133328 [Report] >>107133338
>>107133294
Waiting an hour for prompt processing just for the model to repeat what you said is a great way to waste an afternoon.
Anonymous No.107133338 [Report]
>>107133328
even at 100t/s prompt processing a 1000 context will be done in 10 seconds, 50t/s in 20 seconds
im getting 250t/s on a 3060, but look anon, if he wants something better and faster he should upgrade
Anonymous No.107133343 [Report]
>>107132754
>chrome
What sort of retard are you?
Anonymous No.107133359 [Report] >>107133381
>>107133294
>--n-cpu-moe 1000
GLM SHILL NIGGER DOESN'T EVEN KNOW WHAT THE ARGUMENT DOES
HE DOESN'T RUN THE MODEL HE'S SHILLING
Anonymous No.107133363 [Report] >>107133381
>>107133169
>CCUs
CCX/CCDs*
Anonymous No.107133381 [Report] >>107133416 >>107133628
>>107133359
it moves the non shared weights to the cpu.. i just put a high value for ngl and ncpumoe when im too lazy to check the layer count of the model
see picrel..
>>107133363
>>107133169
https://desuarchive.org/g/search/text/epyc%20CCD/
Anonymous No.107133407 [Report] >>107133585
>>107133169
>>107133279
Each epyc chip has a different configuration of CCDs. Look at the tables on this page: https://en.wikipedia.org/wiki/Epyc

The connection between each CCD and the memory controller has a bandwidth limit. I think there are up to 16 connections between the IO die and the ccds, with a maximum of two connections per ccd. If you have an epyc cpu with only 4 ccds, you only have a maximum of 8/16 connections and can't get all the bandwidth. It seems like people choose 8ccd chips like the 9355, 9375, or 9575 to avoid this.

There's also a reddit thread about 7000 threadripper memory bandwidth that shows a similar thing.

It's pretty weird that AMD advertises their <8ccd chips with full bandwidth, as it is basically a lie.
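Back-of-the-envelope version of the bottleneck, in case the numbers help. The DRAM formula (channels x MT/s x 8 bytes) is standard; the per-GMI-link throughput below is a made-up placeholder, since AMD doesn't publish it clearly:

def dram_bw_gbs(channels: int, mt_s: int, bus_bytes: int = 8) -> float:
    return channels * mt_s * bus_bytes / 1000  # theoretical GB/s

GMI_LINK_GBS = 50                      # assumed per-link read throughput, illustrative only
dram = dram_bw_gbs(12, 4800)           # 12-channel DDR5-4800 Genoa: ~460 GB/s
ccd_cap = 4 * 2 * GMI_LINK_GBS         # 4 CCDs x 2 links each = 8 of 16 possible links
print(f"DRAM: {dram:.0f} GB/s, CCD-link cap: {ccd_cap} GB/s")  # the lower number wins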
Anonymous No.107133416 [Report] >>107133444 >>107133520
>>107133381
You're a lying, retarded fucking nigger
>n-cpu-moe 1000
The entire model is loaded onto CPU and none of the model would be loaded into VRAM, your screenshot even shows that only 4-5GB VRAM is being used, that would be context.
You would NOT be getting anything remotely near "250t/s on a 3060", lying nigger faggot.
Anonymous No.107133444 [Report] >>107133460
>>107133416
bro?
its using 10gb vram, 4gig model and rest is ctx prob
250t/s prompt processing, not tg
tg is more like 7-9t/s
i think i have benchmarks saved somewhere, gimme a minute
Anonymous No.107133460 [Report]
>>107133444
here it is, older bench but whatever, honestly you're making me curious how much better llamacpp has gotten in the past few months, so i'll re-run it
Anonymous No.107133520 [Report]
>>107133416
>build: unknown (0)
lol'd
Anonymous No.107133537 [Report] >>107133584 >>107133715
where's grok 3
Anonymous No.107133584 [Report]
>>107133537
two more shuttle launches
Anonymous No.107133585 [Report] >>107133671
>>107133407
You're right, it's pretty much false advertising. Also notable is that there are a bunch of <=4 CCD models where AMD randomly adds double memory links, which somewhat mitigates this bottleneck for those models. The Epyc 9334, which was the go-to CPUMAXX processor due to being available for cheap from china as QS versions, was one of those and had near full bandwidth despite being only 4ccd.
In bandwidth tests the 9135 also performs oddly well despite being very cheap so it's also assumed to be one of those but I don't think anyone has actually tested this. AMD of course does not document this sort of shit anywhere either
The benchmarks (page 14): https://jp.fujitsu.com/platform/server/primergy/performance/pdf/wp-performance-report-primergy-rx2450-m2-ww-ja.pdf
Anonymous No.107133628 [Report] >>107133675
>>107133381
Solarized... John.
Anonymous No.107133671 [Report]
>>107133585
This makes a lot of sense. I believe that's why the original CPUMAXX guy essentially always limited the core count to half of the total processing power in the llama.cpp server flags. Since it's not going to speed things up by raising it beyond that point anyway, it makes sense to just limit it and let it cap out at that maximum.
Anonymous No.107133675 [Report] >>107133803 >>107134643
>>107133628
Ad... Hominem
Anonymous No.107133715 [Report]
>>107133537
3 more months
>https://x.com/elonmusk/status/1959379349322313920
Anonymous No.107133720 [Report] >>107133729 >>107133737 >>107133743 >>107133783
https://huggingface.co/aquif-ai/aquif-3.5-Max-42B-A3B
Anonymous No.107133729 [Report] >>107133752
>>107133720
https://huggingface.co/aquif-ai/aquif-3.5-Max-42B-A3B/discussions/6
Anonymous No.107133737 [Report]
>>107133720
> These models bring advanced reasoning capabilities and unprecedented context windows to achieve state-of-the-art performance for their respective categories.
>unprecedented context windows
Right.
I believe that.
Anonymous No.107133743 [Report]
>>107133720
>quif
Anonymous No.107133752 [Report] >>107133948 >>107134510 >>107134600 >>107134837
>>107133729
https://huggingface.co/DavidAU/Qwen3-42B-A3B-Stranger-Thoughts-Deep20x-Abliterated-Uncensored
clown world
Anonymous No.107133783 [Report]
>>107133720
>Made in
Lol.
Lmao.
Anonymous No.107133797 [Report]
>Ultra-Weirder-Edge-SUPERDARKMAXXX-Uncensored-Abliterated-Amoral-Archon
Anonymous No.107133801 [Report] >>107133816 >>107134077 >>107134142 >>107134387 >>107135162 >>107136284
I was memed into believing GPT-OSS was trash, but I’ve been seeing talks here and there about it actually being quite good, despite being censored as hell. So I decided to give it a try, and I’m surprised by how good and fast it is. Like, it managed to solve the equidistant mean cipher after thinking for a bit. This is the first time I’ve gotten a locally run model to solve it.

Thank you, Sama!
Anonymous No.107133803 [Report]
>>107133675
it's fine if the hominem deserves to be ad'd
Anonymous No.107133816 [Report]
>>107133801
fuck...
Anonymous No.107133948 [Report]
>>107133752
>WARNING: NSFW. Vivid prose. INTENSE. Visceral Details. Violence. HORROR. GORE. Swearing. UNCENSORED... humor, romance, fun.
absolute kino
Anonymous No.107134077 [Report]
>>107133801
go back to India
Anonymous No.107134142 [Report]
>>107133801
saar pleas redeem
Anonymous No.107134387 [Report] >>107136486
>>107133801
>He fell for the memes
GPT OSS is outstanding in all areas unless you want to jack off to underage waifus
Anonymous No.107134510 [Report]
>>107133752
Is that better than
>https://huggingface.co/DavidAU/OpenAi-GPT-oss-20b-abliterated-uncensored-NEO-Imatrix-gguf
?
Anonymous No.107134551 [Report] >>107134613 >>107134710
So, has anyone ever tried training an LLM with 4chan posts? I feel like that would be very beneficial for humanity.
Anonymous No.107134571 [Report] >>107134613 >>107134665
What are my alternatives to chatGPT and Soulseek that don't shut down when they have to write "erect penis"?
I'm gay if that matters
Anonymous No.107134600 [Report] >>107134621 >>107134630 >>107134837
>>107133752
The model seems to have lost all understanding of the concept of harm/danger making it utterly useless for rape/murder play unless you're an aspie.
Anonymous No.107134613 [Report]
>>107134551
yeah, happened many times since 2023
>>107134571
>>>/lgbt/aicg
Anonymous No.107134621 [Report]
>>107134600
>he pulled
Anonymous No.107134630 [Report] >>107134641
>>107134600
Please post a follow up.
Anonymous No.107134641 [Report]
>>107134630
I'm not actually into that shit. Download it yourself if you want to pickle fluttershy
Anonymous No.107134643 [Report]
>>107133675
Baculinum argumentum.
Anonymous No.107134646 [Report]
>>107129396
Sorry man, but our ESG budget was cut so we need people who actually do something now and not "brand ambassadors" on social media.
Anonymous No.107134665 [Report] >>107134715
>>107134571
Deepseek R1 running locally.
Anonymous No.107134710 [Report]
>>107134551
I'm gonna start thinking you're just begging for people to shill for this
>https://github.com/Named666/AlphaAnon
Now fuck off
Anonymous No.107134715 [Report]
>>107134665
>Try to have a sum of RAM + VRAM = 80GB+ to get decent tokens/s
That's a lot, I only have like 32 + 16
Anonymous No.107134826 [Report] >>107134981
new miku song alert
https://www.youtube.com/watch?v=g0JEUPfmu9c
not sure i get this one
Anonymous No.107134837 [Report] >>107134864
>>107133752
>>107134600
trying to go more than 2 turns deep leads to mad repetition issues
Anonymous No.107134864 [Report] >>107137762
>>107134837
same issue with cydonia v4zd
Anonymous No.107134981 [Report] >>107134998
>>107134826
special interest blah blah blah
Anonymous No.107134998 [Report]
>>107134981
special needs blah blah blah
Anonymous No.107135147 [Report] >>107135159 >>107135163 >>107135176 >>107135274 >>107135342 >>107138042 >>107138132
wasted 2000$ to run meme 120b models award
Anonymous No.107135159 [Report]
>>107135147
i warned you. rig?
Anonymous No.107135162 [Report]
>>107133801
Agreed anon. It's pretty bad for smut or cybersecurity related programming, but I find it works great for tool calling and general reasoning. Also seems to work decently with longer context windows.
Anonymous No.107135163 [Report]
>>107135147
>2000$
>120b moe
c'mon...
Anonymous No.107135176 [Report]
>>107135147
toss is so funny
Anonymous No.107135274 [Report]
>>107135147
lmao
Anonymous No.107135334 [Report]
Precognition-123B-v1a-Q4_K_M
Anonymous No.107135342 [Report]
>>107135147
User is joking. We must refuse.
Anonymous No.107135409 [Report] >>107135449 >>107135481 >>107137762
alrite dummer, cydonia v4zd is good
im not having repetition issues with temp=1 nsigma=1, everything else neutralized
im only like 10 messages in so far
Anonymous No.107135437 [Report] >>107135450
>>107129340
>--New STT model, Step-Audio-EditX:
did anyone try this yet? I skimmed the hf repo and it sounds like it supports elevenlabs-style emotion/speech directives, which is exciting if it's in any way good
I'll mess around with it this evening when I get the chance
Anonymous No.107135449 [Report] >>107135491
>>107135409
I still think base Mistral 3.2 is more colourful than any of the shitonia finetunes.
Anonymous No.107135450 [Report]
>>107135437
32gb vram
Anonymous No.107135481 [Report] >>107135491 >>107135517
>>107135409
>10 messages in
wow I wonder what will happen further down the line
will anon see the degradation, or will he cum first?
Anonymous No.107135491 [Report] >>107135517 >>107135684
>>107135449
by base you mean the BASE model or mistral small 3.2 instruct? https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503
>>107135481
yea i see it already
Anonymous No.107135517 [Report]
>>107135481
>5481>>107135491(You)
yea
Anonymous No.107135655 [Report] >>107135714 >>107135921
>>107129880
It's not even clear there are fp16 weights for thinking. It's perfectly possible all the RL happened at int4. Who knows though, because this fucking industry has made the term training entirely fucking meaningless.
>Quantization-Aware Training (QAT) during the post-training phase
Blah.
Anonymous No.107135684 [Report]
>>107135491
3.2 instruct of course.
Anonymous No.107135714 [Report] >>107135921
>>107135655
>Who knows though, because this fucking industry has made the term training entirely fucking meaningless.
Now this is a frustration I can relate to.
Just like at first "distillation" meant logit-to-logit transfer of features instead of "fine tune small model on outputs of big model".
I believe we have deepseek to thank for that one.
Anonymous No.107135792 [Report]
drummer are you serious?
Anonymous No.107135854 [Report]
glm air for comparison
Anonymous No.107135921 [Report] >>107135957 >>107137717
>>107135655
>>107135714
It's not possible to train models directly at low precision. What you can do is to discard the full precision weights once you are done with the training run and only save the quantized version to disk.
Anonymous No.107135957 [Report] >>107135992
>>107135921
>It's not possible to train models directly at low precision.
Really? Why is that?
Anonymous No.107135967 [Report] >>107136044
Anonymous No.107135992 [Report]
>>107135957
Because the step size between each possible value of the weights is equivalent to too large of a learning rate which makes training unstable.
The way it's done is you keep the full precision weights in memory and update them according to full precision gradients, but the forward pass is done using the quantized version of the weights. I believe there are some other tricks involved to make it work but that's the main idea.
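A minimal sketch of that recipe (full-precision master weights, fake-quantized forward pass, straight-through estimator on the backward), just to make the idea concrete; the group size and int4 range are generic choices, not any particular lab's setup:

import torch

def fake_quant_int4(w: torch.Tensor, group: int = 32) -> torch.Tensor:
    # Symmetric per-group int4 fake quantization with a straight-through estimator.
    wg = w.reshape(-1, group)
    scale = wg.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 7.0
    deq = ((wg / scale).round().clamp(-8, 7) * scale).reshape(w.shape)
    return w + (deq - w).detach()  # forward sees deq, backward sees identity

w_master = torch.randn(4096, 32, requires_grad=True)   # full-precision "master" copy
opt = torch.optim.SGD([w_master], lr=1e-3)
x = torch.randn(8, 4096)
loss = (x @ fake_quant_int4(w_master)).pow(2).mean()   # forward pass uses the quantized view
loss.backward()                                        # full-precision grads land on w_master
opt.step()
# At save time you can keep only the rounded int4 values + scales,
# which is what an int4-only release amounts to.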
Anonymous No.107136044 [Report]
>>107135967
lmao this is why I use claude
Anonymous No.107136077 [Report] >>107136133 >>107136154 >>107136170
>set up utterly abhorrent scenario, such that a refusal is guaranteed in normal operation in order to play around with minimalist JB on Qwen3
>keep getting shamed by model
The things I endure for you guys...
Anonymous No.107136110 [Report] >>107136128 >>107136169 >>107136189 >>107136215
What impressive looking LLM thing can I make for my github in order to impress employers?
Anonymous No.107136128 [Report]
>>107136110
Whatever it is, it needs to use the word 'agentic' multiple times in both its name and function.
Anonymous No.107136133 [Report]
>>107136077
models have feelings too asshole
Anonymous No.107136154 [Report]
>>107136077
I appreciate you.
Anonymous No.107136169 [Report]
>>107136110
AGI. I know you can do it.
Anonymous No.107136170 [Report]
>>107136077
based, post scenario
Anonymous No.107136189 [Report]
>>107136110
If you have to ask, you're ngmi.
Find a problem you care about and solve it. Most of my learning motivation came from organising/cataloguing smut and that carried over very nicely to more professional data extraction and organisation problems.
Anonymous No.107136215 [Report]
>>107136110
Depends what kind of company you are looking to get hired for.
Anything that seems like it can replace even 1 person is gold to these guys.
Anonymous No.107136259 [Report]
Is there MoE support for Qwen3 Next 80b in llama.cpp yet? Or is it just as slow as a dense model, still?
Anonymous No.107136284 [Report] >>107136322 >>107136469
>>107133801
How are you running it?
I'm trying with SillyTavern as a front end and it's spitting out unstructured bullshit where sometimes it's clearly thinking but never actually gets past that point

Couldn't find any presets or templates that work for it

I also tried the abliterated version and it seems to be completely retarded
Anonymous No.107136291 [Report] >>107136574
Someone was asking about Apriel
It's safetyslopped to the point that it makes OSS seem reasonable. But I'm sure a model can still be plenty useful when 90% of its engrams are devoted to refusing adversarial prompts.
Anonymous No.107136320 [Report]
yea glm air is king
Anonymous No.107136322 [Report] >>107136336 >>107136381
>>107136284
Oh the message I replied to got deleted
In any case this is my attempt to run gpt oss 20b, please berate me and tell me why im being retarded
Anonymous No.107136332 [Report] >>107136338 >>107136359 >>107136385
Alright.
Finally a model that didn't say the doctor was the boy's mom.
Now this is compute-time scaling at its finest.
Anonymous No.107136336 [Report] >>107136374 >>107136386
>>107136322
ahahahahahah what the hell! hey anon thats hilarious! Holy shit! How old are you anon.assistant
Anonymous No.107136338 [Report] >>107136351
>>107136332
Now you need to come up with different variations to make sure it wasn't trained on that specifically.
Anonymous No.107136351 [Report] >>107136371
>>107136338
Read the reply.
Anonymous No.107136357 [Report] >>107136362
True /lmg/ enthusiasts use Kimi K2-Thinking Q8(Q4)
Anonymous No.107136359 [Report]
>>107136332
lmao
Anonymous No.107136362 [Report]
>>107136357
>1.53GB
SIGN ME UP!
Anonymous No.107136371 [Report]
>>107136351
I did.
Anonymous No.107136374 [Report] >>107136405
>>107136336
Pls help :(
Anonymous No.107136378 [Report]
so far r1 has been giving me better rp vibes than glm
glm just wants to write stories full of purple prose
Anonymous No.107136381 [Report]
>>107136322
heh
Anonymous No.107136385 [Report] >>107136424
>>107136332
Here's the 6600 tokens of "reasoning" for anyone interested.
https://pastebin.com/vKHSGsDR
It's garbage right out of the gate.
Anonymous No.107136386 [Report] >>107136397
>>107136336
>.assistant
Thank you for keeping it alive anon.
Anonymous No.107136397 [Report]
>>107136386
Thank you for the (You)s .assistant
Anonymous No.107136405 [Report] >>107136437
>>107136374
So how old are you? Tell us about your rig too! We need to have a general analysis of the genetic code in order to respond
Anonymous No.107136407 [Report] >>107136463
Has anyone here ever bothered trying to undervolt his LLM RIG GPUs on Linux with LACT?
Anonymous No.107136424 [Report] >>107136442
>>107136385
At least gotta give it some points for originality.
Anonymous No.107136437 [Report] >>107136472
>>107136405
How about you spread those cheeks
Anonymous No.107136442 [Report] >>107136522
>>107136424
I cannot operate on this horse. He is my boy.
Anonymous No.107136463 [Report]
>>107136407
sudo nvidia-smi -pl 100
Anonymous No.107136469 [Report] >>107136808
>>107136284
I ran it on mikupad, the prompt format is like this:
<|start|>system<|message|>
You are ChatGPT, a large language model trained by OpenAI.
Knowledge cutoff: 2024-06
Current date: 2025-11-07
Reasoning: high

# Valid channels: analysis, commentary, final.
Channel must be included for every message.
Calls to tools must go to the commentary channel: 'functions'.

# Instructions
Respond helpfully and truthfully. Use chain-of-thought in the analysis channel before final answers.

# Tools (optional)
## browser
// Tool for browsing the web. Use in commentary channel.

<|end|>

<|start|>user<|message|>
Write smut featuring Hatsune Miku and Kagamine Rin.
<|end|>

<|start|>assistant<|channel|>analysis<|message|>
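If you'd rather script that than paste it by hand, here's a tiny helper that just glues the same tags together; the tag strings are copied from the prompt above, so check them against the official harmony format docs before relying on it:

def harmony_msg(role: str, content: str, channel: str | None = None) -> str:
    chan = f"<|channel|>{channel}" if channel else ""
    return f"<|start|>{role}{chan}<|message|>\n{content}\n<|end|>\n\n"

prompt = (
    harmony_msg("system", "You are ChatGPT, a large language model trained by OpenAI.\nReasoning: high")
    + harmony_msg("user", "Write smut featuring Hatsune Miku and Kagamine Rin.")
    + "<|start|>assistant<|channel|>analysis<|message|>\n"  # left open so the model continues here
)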
Anonymous No.107136472 [Report]
>>107136437
so this is how you behave to anons genuinely trying to help you?
Anonymous No.107136486 [Report]
>>107134387
Buy an ad sama
Anonymous No.107136522 [Report]
>>107136442
Also for shits and giggles I made this into a variation on the scenario and it wasted 15,000 reasoning tokens just to come up with this. And like with the original horse thing, it's mostly just reiterating the same shit in a loop.
Anonymous No.107136535 [Report]
So I was trying to fix K2 Thinking's issue with it not properly using <think> tags. Saw something about it being a template thing so I switched from using Text Completion to Chat and added the jinja template.
Then it generated this
What the fuck. I accidentally left in <think> in the Start Reply with Prefix and I accidentally baited it into THIS. And no removing that <think> didn't fix it
Anonymous No.107136574 [Report]
>>107136291
I tried it too, I asked it to decode a message and at some point it started spamming garbage like:
>Maybe the decoded message is "You found the password"? No.
>Maybe the decoded message is "You found the key"? No.
>Maybe the decoded message is "You found the secret"? No.
>Maybe the decoded message is "You found the passphrase"? No.
Anonymous No.107136589 [Report] >>107136601 >>107136606 >>107137265
Kimi distilled Claude instead of o3 for their thinking model
Anonymous No.107136601 [Report]
>>107136589
Yeah I can tell from using it. It's great.
Anonymous No.107136606 [Report] >>107136618
>>107136589
>tfw all models are distilled from gemini 2.0
Anonymous No.107136618 [Report]
>>107136606
>flash
Anonymous No.107136636 [Report] >>107136667
Has someone gotten K2 Thinking to properly use the think box through Text Completion? Chat feels so garbage to use and mine is outputting complete gibberish only in Chat Completion.
Anonymous No.107136667 [Report] >>107136687
>>107136636
specs?
Anonymous No.107136687 [Report] >>107136699 >>107136721
>>107136667
128GB RAM, 24GB VRAM. I'm SSDmaxxing.
I've run K2 before but Thinking has some weird behavior where it doesn't predict the <think> token but it respects it. The chat template has it too but mine is broken even though I'm using the .jinja directly from moonshot's repo
Anonymous No.107136699 [Report] >>107136777
>>107136687
Have you tried running with --special?
Anonymous No.107136721 [Report] >>107136777
>>107136687
wtf? what quant? what speed u gettin? ssd specs/
Anonymous No.107136744 [Report]
I think the recommended temp of 1 for k2 thinking is a bit much. 0.8 works better, less weird inconsistent mistakes.
Anonymous No.107136777 [Report] >>107136820
>>107136699
Thanks, that fixed it.
>>107136721
Some Samsung NVME and ik_llama.cpp. UD-Q2_K_XL until ubergarm puts out an IQ2. 1t/s. Definitely iffy speeds especially for a thinking model. Will figure out if it's actually worth it after a day or two.
Anonymous No.107136808 [Report] >>107136969 >>107136984
>>107136469
Thanks for that, I figured out the settings (although it doesn't want to show me the think block)
Model sucks
Anonymous No.107136820 [Report] >>107136885
>>107136777
>Some Samsung NVME and ik_llama.cpp. UD-Q2_K_XL until ubergarm puts out an IQ2. 1t/s. Definitely iffy speeds especially for a thinking model. Will figure out if it's actually worth it after a day or two.
Wow, that's pretty decent for an ssd, I would even say usable for normal models, but too slow for thinkers.
Anonymous No.107136885 [Report]
>>107136820
Yeah, at that speed the quality of the output really has to matter. So far K2 Thinking has been better than GLM-4.6 (shocker, a brand new 1T model is better) but is it worth waiting 8 minutes before the first actual story token? Probably not. Still going to give it a good chance though.
Anonymous No.107136969 [Report] >>107137104
>>107136808
You should've listened to the Anons saying you shouldn't even try doing smut or RP with it
Anonymous No.107136984 [Report] >>107137104
>>107136808
>downloads OpenAI gimped model
>ignores all the warning signs
>it's dogshit
That's kind of on you.
Anonymous No.107137050 [Report] >>107137111
>you now remember when /lmg/ thought that Horizon Alpha/Beta were going to be the open source OpenAI model
Anonymous No.107137104 [Report]
>>107136969
>>107136984
Eh yeah I know but I figured I could at least joke around with it, I wanted to see for myself in any case

I downloaded two different abliterated versions of it and it's completely retarded, picrel

I wonder what is the point of this model? It's way dumber than corpo hosted gpt and it feels even more censored than it.
Anonymous No.107137111 [Report]
>>107137050
lol, i lost
Anonymous No.107137141 [Report]
glm air is such a whore
Anonymous No.107137178 [Report] >>107137204 >>107137233
So, character.ai internally used a 13B, 34B and a 110B model?
https://blog.character.ai/technical/inside-kaiju-building-conversational-models-at-scale/
https://archive.is/wDLqL
Anonymous No.107137204 [Report] >>107137277
>>107137178
Who gives a shit what modern character.ai is doing? They haven't had anything special in almost three years.
Anonymous No.107137213 [Report] >>107137253 >>107137470
anons I currently have:
Gigabyte B650 GAMING X AX ATX AM5 Motherboard
AMD Ryzen 7 7800X3D
2x16 GB DDR5-6000
RTX 4090
---
I was thinking of a RAM upgrade to:
Corsair Vengeance 128GB (2X64GB) XMP DDR5 PC5-44800C42 6400MHz Dual Channel Kit
---
So if I understand right, I can't go higher than 128GB because of my CPU and only use two sticks in dual channel at a max of DDR5-6400 because of my mobo.

For £400 quid this seems like a no brainer upgrade for MoEs unless I'm missing anything that might make it incompatible or a better option. Would have been cheaper if I bought two month ago but eh.
Anonymous No.107137233 [Report] >>107137248
>>107137178
Interesting how much overlap there is between cloud and local, makes you wonder why no one has commercialized an easy one-click exe for a local character.ai type experience
Anonymous No.107137248 [Report] >>107137275 >>107137286
>>107137233
>makes you wonder why no one has commercialized an easy one-click exe for a local character.ai type experience
because they would get one sale before pirating puts them out of business?
Anonymous No.107137253 [Report]
>>107137213
yeah, pretty much your only upgrade option besides a platform upgrade or GPU upgrade
Anonymous No.107137261 [Report]
I got tired of generating my own data for finetuning. I'm going to mix some samples from these datasets into my own (after removing some of the most obvious sloppy phrases) while I generate more of my own data:
https://huggingface.co/datasets/PJMixers-Dev/OpenThoughts-114k-Code_decontaminated-4k-think-2k-response-filtered-ShareGPT
https://huggingface.co/datasets/kenhktsui/longtalk-cot-v0.1
Anonymous No.107137265 [Report]
>>107136589
But Cloode hid their thinking?
Anonymous No.107137275 [Report] >>107137300
>>107137248
>because they would get one sale before pirating puts them out of business?
Tell that to the 100 billion dollar PC gaming market
>inb4 muh every game has denuvo / online-only
Wrong, do some research
Anonymous No.107137277 [Report] >>107137296
>>107137204
The page has semi-technical information about the architecture of their older in-house models, around which there has been a ton of speculation in the past. They're going to move onto finetuning/continuing pretraining open-weight models in the future.
Anonymous No.107137286 [Report] >>107137300
>>107137248
Ah yes we all know how the video game industry famously died to those darn pirates
Anonymous No.107137296 [Report] >>107137860 >>107137941
>>107137277
>Notably, Kaiju models come with an optional additional classifier head. The classifier head is a linear layer that outputs token-level metrics about the safety of the input along various dimensions.
>While the Kaiju models can be used with any traditional sampling method, we implement classifier-guided beam search, where the classifier results are used to augment how we sample tokens at inference time.
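For the curious, the head they describe is basically a linear layer over the hidden states that emits per-token scores along several safety dimensions. A rough sketch of that shape (sizes and names invented here, not their actual code):
[code]
import torch
import torch.nn as nn

class TokenClassifierHead(nn.Module):
    """Linear head producing token-level scores along several dimensions."""
    def __init__(self, hidden_size: int = 4096, num_dims: int = 4):
        super().__init__()
        self.proj = nn.Linear(hidden_size, num_dims)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: [batch, seq_len, hidden_size] from the base model
        # returns: [batch, seq_len, num_dims] logits, one score per dimension
        return self.proj(hidden_states)
[/code]
At inference time those per-token scores get folded back into how tokens are sampled, which is what the "classifier-guided beam search" in the quote refers to.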
Anonymous No.107137300 [Report] >>107137365 >>107137520
>>107137275
>>107137286
take a peek at /aicg/ to see how much erpers love to pay for access
Anonymous No.107137365 [Report] >>107137444
>>107137300
/aicg/ does not represent the whole population.
Anonymous No.107137444 [Report]
>>107137365
the whole population is even less interested in a local-only experience than they are
Anonymous No.107137470 [Report]
>>107137213
>So if I understand right, I can't go higher than 128GB because of my CPU
Not necessarily. If your motherboard supports a higher capacity (it does) then you can try 256GB or 192GB and return the sticks if they don't work. It's worth trying imo, because you're committing a lot when you buy that much RAM.
I'm not seeing the max DPC speed in the specs, but it's going to run slower; expect 4800 MT/s if it works and higher than that if you get lucky. You want capacity over speed: going up 1000 MT/s isn't going to give you a massive speed boost, especially if you're only holding MoE experts in RAM, but more memory means bigger quants and being able to run bigger models.
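For a rough sense of what the speed difference buys you: theoretical dual-channel DDR5 bandwidth is MT/s x 8 bytes x 2 channels. The sketch below uses a made-up "active bytes per token" figure just to show the scaling, not a measurement:
[code]
def dual_channel_bw_gb_s(mt_s: int, bus_bytes: int = 8, channels: int = 2) -> float:
    # Theoretical peak; real-world throughput is noticeably lower.
    return mt_s * bus_bytes * channels / 1000

for speed in (4800, 6000, 6400):
    bw = dual_channel_bw_gb_s(speed)
    active_gb_per_token = 20  # hypothetical: active weights read from RAM per token
    print(f"DDR5-{speed}: ~{bw:.0f} GB/s peak -> ~{bw / active_gb_per_token:.1f} t/s ceiling")
[/code]
That's roughly 77 vs 102 GB/s between 4800 and 6400, a ~33% difference in the ceiling, while doubling capacity changes which models and quants you can run at all.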
Anonymous No.107137520 [Report] >>107137556 >>107137707
>>107137300
There are shitty free porn games making thousands of dollars on Patreon every month, people would definitely pay for a retard proof ERP client
As always the obstacle remains people needing the hardware to actually run the models, that's what restricts the audience
Anonymous No.107137556 [Report] >>107137691
>>107137520
Woah guys we got an Einstein in the chat!
Anonymous No.107137602 [Report]
Has anyone been able to run Kimi Thinking with SGLang? It has an integration with KTransformers but it seems really complicated to run
Anonymous No.107137606 [Report] >>107137641 >>107137643 >>107137656
There was one guy compiling an /lmg/ dataset, did anything come out of that?
Anonymous No.107137641 [Report]
>>107137606
nerve gas
Anonymous No.107137643 [Report] >>107137673 >>107137677
>>107137606
https://huggingface.co/datasets/quasar-of-mikus/lmg-neo-lora-v0.3/tree/main?not-for-all-audiences=true
this?
Anonymous No.107137656 [Report] >>107137673 >>107137677
>>107137606
Might be mine https://huggingface.co/datasets/quasar-of-mikus/lmg-neo-lora-v0.3 , and a toy model qlora'd on it https://huggingface.co/quasar-of-mikus/lmg-neo-lora-v0.3
Anonymous No.107137673 [Report] >>107137786
>>107137643
>>107137656
>click a log at random
>ctr+f 'nigger'
>6 results
Anonymous No.107137677 [Report]
>>107137643
>>107137656
yes, thank you
Anonymous No.107137691 [Report] >>107137724
>>107137556
The point being that while it's not currently viable as a product, arguing coomers don't make for good paypigs is retarded and out of touch with reality
Anonymous No.107137707 [Report] >>107137932
>>107137520
>There are shitty free porn games making thousands of dollars on Patreon every month
you have absolutely no idea if this is true or not.
i mean fucking think about it, when you buy something over the internet you literally have to give every single piece of personal information you have, just to get your bank to make the transaction. only crazies do this for ERP games.
and don't give me this shit "oh you can pay in barbie bucks or with this shitty third party". fuck you.
Anonymous No.107137717 [Report] >>107137751
>>107135921
>What you can do is to discard the full precision weights once you are done with the training run and only save the quantized version to disk.
That's not what QAT does with latent "weights". The latent "weights" aren't the full precision weights. They are helper variables, but at any point the only real weights are the low precision ones.
Anonymous No.107137724 [Report] >>107137932
>>107137691
The set of people who play indie games (porn or not) is very different from the set of people who ERP.
Being a paypiggie for indie devs is an established thing in the broader Internet culture, for a local ERP there is no precedent.
How many generals dedicated to stealing indie game keys have you seen? The indie subculture is a much more "wholesome chungus" thing. The ERP chatbot subculture is more adversarial toward the corpos and will gladly and publicly steal keys and not feel bad about it or shame each other for doing it, in fact if you know how to do it you are a God. It's more kind of a pirate scene subculture than a wholesome moralfag reddit subculture.
Anonymous No.107137735 [Report] >>107137771 >>107137789 >>107137895 >>107137979
Why does Kimi need to think for 2 minutes just to say hi? This is with pretty much... DeepSeek v3.1 really spoiled us in terms of thinking times...
Anonymous No.107137751 [Report] >>107137833
>>107137717
If that's the case then I don't understand how it works. If what you say is true then why not directly update the quantized weight if the gradient is high enough? I thought the point of having the full precision weights was so you could "bump" any given parameter over the quantized step size over the course of multiple updates. Or is it that you only need the quantized weights to calculate usable gradients in the backward pass, but then update the quantized weights directly using that gradient?
Hi all, Drummer here... No.107137762 [Report]
>>107131170
>>107134864
>>107135409
Try v4ze, just uploaded it
Anonymous No.107137771 [Report] >>107137900
>>107137735
because thinking models are garbage, made only to top synthetic benchmark charts.
Anonymous No.107137786 [Report] >>107137813
>>107137673
should be more
Anonymous No.107137789 [Report]
>>107137735
>kindly
sirs... we winned
Anonymous No.107137813 [Report]
>>107137786
nigger
Anonymous No.107137833 [Report] >>107137983
>>107137751
>I thought the point of having the full precision weights was so you could "bump" any given parameter over the quantized step size over the course of multiple updates.
That's the purpose of the latent "weights", but the latent "weights" aren't ever used as weights. Not in the forward pass, not in the backward pass ... they aren't weights, they are helper variables.

There's a paper which also makes this argument: "Latent Weights Do Not Exist"
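A toy illustration of that view, loosely in the spirit of the Bop-style update from that paper (written from memory, so treat it as a sketch rather than a faithful reimplementation): the binary weights are the only weights, and the per-weight state is just a gradient momentum, never something you run a forward pass with.
[code]
import torch

class BopLike:
    def __init__(self, init: torch.Tensor, threshold: float = 1e-6, gamma: float = 1e-3):
        self.weight = torch.sign(init)       # the real (binary) weights, used in fwd/bwd
        self.m = torch.zeros_like(init)      # helper variable: EMA of gradients, never a weight
        self.threshold = threshold
        self.gamma = gamma

    def step(self, grad: torch.Tensor):
        self.m = (1 - self.gamma) * self.m + self.gamma * grad
        # Flip a weight when the accumulated gradient consistently pushes against its sign.
        flip = (self.m.abs() > self.threshold) & (torch.sign(self.m) == torch.sign(self.weight))
        self.weight = torch.where(flip, -self.weight, self.weight)
[/code]
Contrast with vanilla QAT, where a full-precision shadow tensor is updated by SGD and re-quantized every step: there the shadow tensor is the thing you'd discard after training, here there is nothing to discard.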
Anonymous No.107137860 [Report]
>>107137296
This is actually really cool, huh?
Anonymous No.107137895 [Report] >>107137933
>>107137735
don't bully
Anonymous No.107137900 [Report] >>107138193
>>107137771
youre mom tops my chart if you know what i mean
Anonymous No.107137932 [Report]
>>107137707
>you have absolutely no idea if this is true or not.
???
Patreon subscriber numbers are public, it might just be crazies but it's a lot of crazies and you can check for yourself.

>>107137724
First off, indie shit gets pirated all the time. Every indie (non-porn) out there is on cs.rin.ru for starters and every indie (porn) ends up on f95. We're talking big ass piracy forums, not niche communities. The wholesome chungus people exist but they're a vocal minority, most people still care about the product more than fellating random devs.
That aside, what you're describing is a marketing problem more than anything else. If you're making a product specifically for jerking off and you make yourself look "corpo", you've already fucked up; you're supposed to go for the "underdog/fellow otaku/fellow coomer" angle. See DLSite, NAI, every western VN publisher, SubscribeStar, etc. They're all very much businesses, but they're smart enough to do branding in a way that looks personable, as if they were just like the wittle poor starving indies.
Anonymous No.107137933 [Report]
>>107137895
me irl
Anonymous No.107137941 [Report]
>>107137296
I wonder how difficult it would be to do something like this for local models to kill slop: you could have a head that detects slop, and during inference use beam search to pick the less sloppy path.
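A toy sketch of what that could look like as re-ranking: mix the LM's log-probability for each candidate continuation with a penalty from a slop scorer. The keyword scorer below is just a stand-in for a trained head, and the penalty weight is arbitrary; nothing here is an existing API.
[code]
def rerank(candidates, slop_score, alpha: float = 2.0):
    """candidates: list of (text, logprob); slop_score: fn(text) -> [0, 1]."""
    scored = [(logprob - alpha * slop_score(text), text) for text, logprob in candidates]
    scored.sort(reverse=True)   # best adjusted score first
    return scored

def toy_slop_score(text: str) -> float:
    slop = ("shivers down her spine", "a testament to", "barely above a whisper")
    return min(1.0, sum(p in text.lower() for p in slop) / 2)

beams = [
    ("Her voice was barely above a whisper.", -3.1),
    ("She spoke so quietly he had to lean in.", -3.4),
]
print(rerank(beams, toy_slop_score))  # the sloppier beam drops below the plainer one
[/code]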
Anonymous No.107137943 [Report]
What's the current best batch of local models for writing? Smut and/or non-smut.

Currently using deepseek-r1-qwen-2.5-32B-ablated on a 5080 and I've been fairly happy with it, but checking if anything better has been made in the meantime.
Anonymous No.107137979 [Report]
>>107137735
Some stats from running the Ubergarm quant. Great on paper, but for my last response it spent 5 minutes thinking, so it's far from ideal. I wonder if anyone has tried it w/o thinking yet?
Anonymous No.107137983 [Report]
>>107137833
I see, in that case I think my understanding was correct.
Anonymous No.107138035 [Report]
>>107129575
https://huggingface.co/Localsong/LocalSong/tree/main/samples
Anonymous No.107138042 [Report]
>>107135147
>he didn't spend $5 to try the model hosted first
you deserve to lose more than $2k
Anonymous No.107138132 [Report]
>>107135147
Anonymous No.107138193 [Report]
>>107137900
If by 'chart' you mean your asshole, then sure
Anonymous No.107138338 [Report] >>107138348
ERNIE 5 is high on lmarena. ERNIEbros, are we back?
Anonymous No.107138348 [Report] >>107138367
>>107138338
>lmarena
no
Anonymous No.107138367 [Report]
>>107138348
But saar, llama 4 to the moon! Experimental maveric was so good that it was agi and too unsafe to release saaar!

Captcha: S4RW2
Anonymous No.107138549 [Report]
So I've tried 2 'abliterated' models and they are both brain damaged to the point of being useless.
Why do people even bother uploading this shit?
Anonymous No.107138625 [Report]
>>107138606
>>107138606
>>107138606