Search Results

Found 7 results for "bf9063314c4fa43c05af7956b21a0101" across all boards searching md5.

Anonymous /g/105995475#105995952
7/23/2025, 7:38:03 AM
justpaste (DOTit) GreedyNalaTests

Added:
InternVL3-14B-Instruct
ERNIE-4.5-21B-A3B-PT
Cydonia-24B-v4h
Austral-GLM4-Winton
Austral-GLM4-Winton + length inst
EXAONE-4.0-32B-GGUF
ai21labs_AI21-Jamba-Mini-1.7-Q4_K_L

It's time, but nothing got a flag or star. Just the usual...

Contributions needed:
The new Qwen 3 235B and the 480B coder (for prompt, go to "Qwen3-235B-A22B-Q5_K_M-from_community" in the paste)
ERNIE-4.5-300B-A47B-PT (for prompt, go to "ernie-placeholder" in the paste)
Kimi-K2-Instruct (for prompt, go to "kimi-placeholder" in the paste, also see "kimi-placeholder-alt-ex" for an example of a modified prompt that may or may not work better; experiment with the template as it sounds like it has an interesting flexible design)
>From neutralized samplers, use temperature 0, top k 1, seed 1 (just in case). Copy the prompt as text completion into something like Mikupad. Then paste the output into a pastebin alternative of your choosing, or just in your post. Do a swipe/roll and copy that second output as well. Include your backend used + pull datetime/version. Also a link to the quant used, or what settings you used to make your quant.
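A hedged sketch of what those "neutralized" greedy settings could look like as a request body for a llama.cpp-style /completion text-completion endpoint (the prompt string and n_predict value are placeholders, not from the post; backends other than llama.cpp name these fields differently):

```python
import json

# Assumed llama.cpp /completion request body; the prompt and
# n_predict are illustrative placeholders, not from the post.
payload = {
    "prompt": "<full test prompt copied verbatim from the paste>",
    "temperature": 0.0,  # greedy decoding
    "top_k": 1,          # only the single most likely token survives
    "seed": 1,           # fixed seed, "just in case"
    "n_predict": 400,    # assumed output length, not specified in the post
}
print(json.dumps(payload, indent=2))
```

With temperature 0 and top k 1 the sampling is deterministic, so two swipes/rolls should only differ if the backend itself is nondeterministic, which is exactly what the second output is meant to catch.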
Anonymous /g/105879548#105886894
7/13/2025, 2:09:02 AM
>>105885211
>>105878375
Thanks anons. I've added them to the paste.

Also (nothingburger) added LFM2-1.2B today.
Anonymous /g/105856945#105863373
7/10/2025, 11:18:31 PM
justpaste (DOTit) GreedyNalaTests

Added:
MiniCPM4-8B
gemma-3n-E4B-it
Dolphin-Mistral-24B-Venice-Edition
Mistral-Small-3.2-24B-Instruct-2506
Codex-24B-Small-3.2
Tiger-Gemma-27B-v3a
LongWriter-Zero-32B
Falcon-H1-34B-Instruct
Hunyuan-A13B-Instruct-UD-Q4_K_XL
ICONN-1-IQ4_XS

Another big but mid update. ICONN was a con (broken). The new Falcon might be the worst model tested in recent memory in terms of slop and repetition; maybe it's even worse than their older models. It's just so disgustingly bad. Tiger Gemma was the least bad performer of the bunch, though not enough for a star, so it just got a flag.

Was going to add the IQ1 Deepseek submissions from >>105639592 but the links expired because I'm a slowpoke gomenasai...
So requesting again, especially >IQ1 and also using the full prompt including greeting message for the sake of consistency. See "deepseek-placeholder" in the paste. That prompt *should* work given that the system message is voiced as the user, so it all matches Deepseek's expected prompt format.

Looking for contributions:
Deepseek models (for prompt, go to "deepseek-placeholder" in the paste)
dots.llm1.inst (for prompt, go to "dots-placeholder" in the paste)
AI21-Jamba-Large-1.7 after Bartowski delivers the goofz (for prompt, go to "jamba-placeholder" in the paste)
>From neutralized samplers, use temperature 0, top k 1, seed 1 (just in case). Copy the output to a pastebin alternative of your choosing. Include your backend used + pull datetime. Also a link to the quant used, or what settings you used to make your quant.
Anonymous /v/714080025#714089293
6/30/2025, 6:13:19 PM
>Massively Multiplayer Offline
Anonymous /tv/212035731#212054869
6/27/2025, 3:23:20 PM
>>212035731
>what's going on in this thread?
Anonymous /an/5007981#5008102
6/27/2025, 3:23:20 PM
>>5007981
>what's going on in this thread?
Anonymous /g/105611492#105616734
6/17/2025, 3:52:53 AM
justpaste (DOTit) GreedyNalaTests

Added:
dans-personalityengine-v1.3.0-24b
Cydonia-24B-v3e
Broken-Tutu-24B-Unslop-v2.0
Delta-Vector_Austral-24B-Winton
Magistral-Small-2506
medgemma-27b-text-it
Q3-30B-A3B-Designant
QwQ-32B-ArliAI-RpR-v4
TheDrummer_Agatha-111B-v1-IQ2_M
Qwen3-235B-A22B-Q5_K_M from community

Been preoccupied for a while, but now I'm caught up. 235B was given a star rating; the others had no stars and no flags, just the same old really.

Looking for contributions:
Deepseek models
dots.llm1.inst
>From neutralized samplers, use temperature 0, top k 1, seed 1 (just in case). Copy the EXACT prompt sent to the backend, in addition to the output. Include your backend used + pull datetime. Also a link to the quant used, or what settings you used to make your quant.