/lmg/ - Local Models General - /g/ (#105681538) [Archived: 766 hours ago]

Anonymous
6/23/2025, 6:04:33 PM No.105681538
49647522c74207939f0d2fa00c5edae245ee37377127e90eb32bd0077eaca1da
/lmg/ - a general dedicated to the discussion and development of local language models.

Previous threads: >>105671827 & >>105661786

►News
>(06/21) LongWriter-Zero, RL trained ultra-long text generation: https://hf.co/THU-KEG/LongWriter-Zero-32B
>(06/20) Magenta RealTime open music generation model released: https://hf.co/google/magenta-realtime
>(06/20) Mistral-Small-3.2 released: https://hf.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506
>(06/19) Kyutai streaming speech-to-text released: https://kyutai.org/next/stt
>(06/17) Hunyuan3D-2.1 released: https://hf.co/tencent/Hunyuan3D-2.1

►News Archive: https://rentry.org/lmg-news-archive
►Glossary: https://rentry.org/lmg-glossary
►Links: https://rentry.org/LocalModelsLinks
►Official /lmg/ card: https://files.catbox.moe/cbclyf.png

►Getting Started
https://rentry.org/lmg-lazy-getting-started-guide
https://rentry.org/lmg-build-guides
https://rentry.org/IsolatedLinuxWebService
https://rentry.org/tldrhowtoquant
https://rentry.org/samplers

►Further Learning
https://rentry.org/machine-learning-roadmap
https://rentry.org/llm-training
https://rentry.org/LocalModelsPapers

►Benchmarks
LiveBench: https://livebench.ai
Programming: https://livecodebench.github.io/leaderboard.html
Code Editing: https://aider.chat/docs/leaderboards
Context Length: https://github.com/adobe-research/NoLiMa
Censorbench: https://codeberg.org/jts2323/censorbench
GPUs: https://github.com/XiongjieDai/GPU-Benchmarks-on-LLM-Inference

►Tools
Alpha Calculator: https://desmos.com/calculator/ffngla98yc
GGUF VRAM Calculator: https://hf.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator
Sampler Visualizer: https://artefact2.github.io/llm-sampling

►Text Gen. UI, Inference Engines
https://github.com/lmg-anon/mikupad
https://github.com/oobabooga/text-generation-webui
https://github.com/LostRuins/koboldcpp
https://github.com/ggerganov/llama.cpp
https://github.com/theroyallab/tabbyAPI
https://github.com/vllm-project/vllm
Replies: >>105681695
Anonymous
6/23/2025, 6:04:53 PM No.105681543
threadrecap
md5: 7b9a82a1f31bca7acfefb8afe8c01036
►Recent Highlights from the Previous Thread: >>105671827

--Paper: Serving Large Language Models on Huawei CloudMatrix384:
>105680027 >105680217 >105680228 >105680501 >105680649
--Papers:
>105677221
--Optimizing model inference on a heterogeneous 136GB GPU setup:
>105673560 >105673594 >105673875 >105673883 >105673941 >105676742 >105673935 >105673962 >105674020 >105674034 >105674041 >105674047 >105674077 >105674095 >105674081 >105674102 >105674123 >105674156 >105674186 >105674212 >105674231 >105674234 >105674298 >105674308 >105674503 >105674516 >105674571 >105674582 >105674661 >105674669 >105674694 >105674703 >105674721 >105674749 >105674820 >105674944 >105674325 >105674535 >105674221
--Exploring -ot tensor offloading tradeoffs for gemma-3-27b on RTX 3090 with Linux backend tuning challenges:
>105673237 >105673263 >105673311 >105673342 >105673418 >105673468 >105673588 >105673602 >105673608 >105673625
--Evaluating budget GPU upgrades for PDF summarization workloads:
>105681140 >105681202 >105681216 >105681273 >105681361 >105681353 >105681406 >105681431
--EU AI Act thresholds and implications for model training scale and systemic risk classification:
>105679885 >105680073 >105680083 >105680096 >105680144
--LongWriter-Zero's erratic output formatting and repetition issues during chat inference:
>105677544 >105677560
--Tesla AI team photo sparks discussion on Meta's Scale AI partnership and copyright liability risks:
>105675134 >105675175 >105675234 >105675273 >105675332 >105675371
--Frustration with Gemma3 performance and behavior for roleplay and summarization at 24gb:
>105676751 >105676831 >105677735 >105679629 >105679036 >105680034
--Anticipation for llama.cpp's row splitting impact on NUMA performance:
>105674411
--Miku (free space):
>105672562 >105676060 >105676153 >105676268 >105676695 >105679337 >105679403 >105680003 >105680034

►Recent Highlight Posts from the Previous Thread: >>105671833

Why?: 9 reply limit >>102478518
Fix: https://rentry.org/lmg-recap-script
Replies: >>105681695
Anonymous
6/23/2025, 6:07:26 PM No.105681564
holding hands with miku
Replies: >>105681695
Anonymous
6/23/2025, 6:24:01 PM No.105681695
>>105681538 (OP)
>>105681543
>>105681564
total migger death
Anonymous
6/23/2025, 6:25:09 PM No.105681706
1750695716070
md5: 75d1ad5cb8a4e1800fc16bf7473feec5
llama4 status?
Replies: >>105681730 >>105683700
Anonymous
6/23/2025, 6:27:57 PM No.105681730
>>105681706
I can't help with that.
Anonymous
6/23/2025, 6:28:14 PM No.105681732
file
md5: 9b81359b83aef02b34869c30fc30a203
How can one AI be so based?
Replies: >>105681745 >>105681816 >>105681826 >>105683946
Anonymous
6/23/2025, 6:28:51 PM No.105681743
1734903794175276
md5: b6b8159522fa92910d25c95718e42275
sisters, how come our half-a-milly-member, super popular and active subreddit is still not usable after a whole day?
Replies: >>105688567 >>105688626
Anonymous
6/23/2025, 6:29:00 PM No.105681745
>>105681732
>baby's first day with local AI
Replies: >>105681816
Anonymous
6/23/2025, 6:29:33 PM No.105681754
1750696091244
md5: ad344e516de7b0f78cfbdd0538e9b135
so where's PoopenAI open model? is it two more weeks?
Replies: >>105682070 >>105685249
Anonymous
6/23/2025, 6:37:20 PM No.105681816
TimesHaveChanged
md5: 1d2c5d3f9892591ac82de8abfc01b066
>>105681732
>embracing my inner hitler
kek

>>105681745
>first day
buddy that's zen you are talking about
how new are you?
Replies: >>105683237
Anonymous
6/23/2025, 6:38:30 PM No.105681826
>>105681732
>The New York Times is full of kikes.
Where's the joke?
Replies: >>105681839 >>105681867
Anonymous
6/23/2025, 6:39:50 PM No.105681839
>>105681826
you are the joke
Replies: >>105681875
Anonymous
6/23/2025, 6:42:00 PM No.105681859
dgdfgdgd
md5: 5c6ab892268a53c86b034eebc5b83c1c
so I have been trying to get an LLM to interact with my journal notes in Obsidian (easy prompts like "what have I written about xyz")
first I used the Obsidian copilot plugin to link it up with gemini 2.5 flash-preview
I also tried GPT4all with a local model, phi-3 mini instruct (4B parameters), and linked it up to my Obsidian vault

now the results are very wishy-washy: the LLM gets very simple things right, but most of the time it doesn't use all relevant source entries or it uses completely irrelevant sources
it also isn't very precise, for example it finds the right source paragraph, extracts the right info, but then jumps one paragraph back to integrate irrelevant info into the answer

I have no idea if those free models just aren't powerful enough or if I just need to finetune the model's parameters
Replies: >>105681877 >>105681897 >>105681924 >>105688320
Anonymous
6/23/2025, 6:42:31 PM No.105681867
>>105681826
>he doesn't get it
Happy to see this place is still full of intelligent people.
Anonymous
6/23/2025, 6:43:28 PM No.105681875
1734276182592519
md5: 44006b84e680d440ad81f160acf81344
>>105681839
I'm... the joke? N-no that can't be true...
Anonymous
6/23/2025, 6:43:47 PM No.105681877
>>105681859
4B is very small.
gemini 2.5 flash is probably between 80B and 100B params.
Try a larger model like Deepseek R1.
Anonymous
6/23/2025, 6:46:54 PM No.105681897
>>105681859
As anon pointed out, 4B parameters is generally going to be retard-tier. Additionally, if your vault is of any appreciable size, you're probably going to be exceeding the context limit of smaller models if you're just shoving in everything in your vault.
Anonymous
6/23/2025, 6:48:25 PM No.105681913
when
you
walk
away

you
dont
hear
me
say

please

oh baby


dont go
Replies: >>105681923
Anonymous
6/23/2025, 6:49:43 PM No.105681923
>>105681913
I miss when kingdom hearts still had final fantasy in it.
Anonymous
6/23/2025, 6:50:03 PM No.105681924
>>105681859
Try gemma 3 12b
Anonymous
6/23/2025, 7:07:13 PM No.105682053
R1 bros, what sampler settings? At the moment I'm sitting at temperature 0.8, top-p 0.95, and logit-bias [ [ 965, -1 ], [ 1248, -5 ], [ 1613, -5 ] ] to cut down on ellipses a lot and em-dashes a little. Asterisks don't particularly bother me anymore now that I've changed ST not to put italics in a different color and R1-0528 is way lighter on those than V3 anyway.
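As a rough sketch of passing these same settings to a local llama.cpp llama-server /completion endpoint in Python (the URL, prompt, and token count are placeholders; the token IDs are tokenizer-specific, so don't copy them blindly):

import requests

payload = {
    "prompt": "Continue the roleplay.",
    "temperature": 0.8,
    "top_p": 0.95,
    # [token_id, bias] pairs; small negative values discourage a token without banning it
    "logit_bias": [[965, -1], [1248, -5], [1613, -5]],
    "n_predict": 256,
}

resp = requests.post("http://127.0.0.1:8080/completion", json=payload, timeout=300)
print(resp.json()["content"])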
Replies: >>105682303
Anonymous
6/23/2025, 7:09:35 PM No.105682070
>>105681754
Sama said not in June, they're cooking something amazing.
Anonymous
6/23/2025, 7:21:47 PM No.105682157
>Somewhere in the distance, a X, Y's, Z.
>Somewhere, something.
Imagine unironically wasting vram on this shit. This is on par with dumb models that go ooc. Disgusting.
Replies: >>105682215 >>105682361
Anonymous
6/23/2025, 7:30:18 PM No.105682215
>>105682157
Prompt differently. It can be tamed.
Anonymous
6/23/2025, 7:41:29 PM No.105682286
Can i use old cards to add VRAM?
Replies: >>105682424
Anonymous
6/23/2025, 7:41:54 PM No.105682288
>Apple buying Perplexity
Is this good or bad?
Replies: >>105682309 >>105683817
Anonymous
6/23/2025, 7:45:21 PM No.105682303
>>105682053
Are you running R1 on local?
Replies: >>105682409
Anonymous
6/23/2025, 7:46:16 PM No.105682309
>>105682288
terribad
Anonymous
6/23/2025, 7:50:54 PM No.105682349
jung
md5: 4dd2679a55bb71a4084b24461e7bb8bb
So, Mistral Small 3.2 again
V7-Tekken
>typical boring & generic Mistral prose, follows instructions (very literally).
V3-Tekken
>absolutely refuses to follow formatting instructions, even at low depth, needs multiple replies to get the hang of it
So V3 is basically pulling stuff from Nemo logs or something? And generally this model seems to be very sensitive to minute differences in wording.
Anyway, on an unrelated note: Dream sequences are a very nice window into what the model "thinks" is happening.
Replies: >>105682382 >>105684446
Anonymous
6/23/2025, 7:51:58 PM No.105682361
>>105682157
Man, you're doing ERP with a GPU. Get off your high horse
Anonymous
6/23/2025, 7:54:10 PM No.105682382
>>105682349
>absolutely refuses to follow formatting instructions, even at low depth, needs multiple replies to get hang of it
The greeting is very important there, so make sure it follows the exact structure you want.
>Dream sequences are a very nice window into what the model "thinks" is happening.
Care to post an example?
Replies: >>105682432
Anonymous
6/23/2025, 7:57:25 PM No.105682409
>>105682303
Yes, this is local models general. But if you want to use logit-bias and can't run R1 locally, some unofficial providers on OpenRouter support that parameter and also have reasonable prices. You can configure SillyTavern to only use those providers.
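A rough sketch of doing the same thing directly against OpenRouter's OpenAI-compatible endpoint in Python (the model slug and provider name are assumptions, the API key comes from the environment, and you still have to check which providers actually honor logit_bias before pinning them):

import os
import requests

payload = {
    "model": "deepseek/deepseek-r1-0528",  # assumed slug, verify on the OpenRouter model page
    "messages": [{"role": "user", "content": "Continue the roleplay."}],
    "temperature": 0.8,
    "top_p": 0.95,
    # OpenAI-style logit_bias: token id (as a string) -> bias; IDs depend on the tokenizer
    "logit_bias": {"965": -1, "1248": -5, "1613": -5},
    # pin routing to providers that honor these params ("SomeProvider" is a placeholder)
    "provider": {"order": ["SomeProvider"], "allow_fallbacks": False},
}

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json=payload,
    timeout=300,
)
print(resp.json()["choices"][0]["message"]["content"])

SillyTavern's provider setting presumably does the same thing under the hood.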
Anonymous
6/23/2025, 7:59:02 PM No.105682424
>>105682286
if by old cards you mean 3090s, then sure
Replies: >>105683126
Anonymous
6/23/2025, 8:00:04 PM No.105682432
dream
md5: 2e4f83da89f49790b8fd5b5e9a8bb05d
>>105682382
>Care to post an example?
I mean it's nothing profound, but summarizes the important stuff.
Replies: >>105682499 >>105682533
Anonymous
6/23/2025, 8:09:35 PM No.105682499
cydonia
md5: 5970c81b90619ec718676f7a417b847e
>>105682432
Also tried Cydonia v4a (=3.2). I've always known Drummer inserts slop where there is none, but holy fuck. Not only 100% more slop but he made me a homosexual.
Anonymous
6/23/2025, 8:12:46 PM No.105682533
>>105682432
I wonder how good these are at dream interpretation
Replies: >>105682572
Anonymous
6/23/2025, 8:16:28 PM No.105682572
>>105682533
Mistral variants are *excellent* at dream analysis
Anonymous
6/23/2025, 8:25:22 PM No.105682647
1748158005023462
md5: 482cadd4243f9f5daccc875abc5cd2a7
llama.cpp's official ollama competitor?
https://x.com/ggerganov/status/1937189250149257250
Replies: >>105682703 >>105682707 >>105682731 >>105683013 >>105683299 >>105684630 >>105684805
Anonymous
6/23/2025, 8:31:50 PM No.105682703
>>105682647
>competitor
*laughs*, but it does look good.
Anonymous
6/23/2025, 8:32:37 PM No.105682707
>>105682647
revenge arc
Anonymous
6/23/2025, 8:35:34 PM No.105682731
>>105682647
Ollama ditched llama.cpp as a backend, right?
Replies: >>105682745 >>105682833 >>105682882
Anonymous
6/23/2025, 8:37:24 PM No.105682744
bors what to use for local small scale stuff
I got 24gb vram on my PC but only 8gb on my work laptop so can't really do shit.
Currently using
general stuff: mistral-small-3.2-Q6
code generation: gemma-2-9b-it-Q8_0
Replies: >>105682786 >>105682788 >>105682796
Anonymous
6/23/2025, 8:37:27 PM No.105682745
>>105682731
No, and it's not like they can.
Anonymous
6/23/2025, 8:42:57 PM No.105682786
>>105682744
>work laptop
For work you just use whatever cloudslop service your company is paying out the ass for
Replies: >>105682830
Anonymous
6/23/2025, 8:43:09 PM No.105682788
>>105682744
You can run an ssh tunnel from your laptop to your desktop.
Anonymous
6/23/2025, 8:44:12 PM No.105682796
>>105682744
>I got 24gb vram on my PC but only 8gb on my work laptop so can't really do shit.
Because they don't let you?
Replies: >>105682830
Anonymous
6/23/2025, 8:47:47 PM No.105682830
>>105682786
>>105682796
Well my boss always says they will let us use AI but when I ask will they actually pay for it... we have no subscription
Replies: >>105683031
Anonymous
6/23/2025, 8:48:26 PM No.105682833
>>105682731
my understanding is that they're no longer officially building off of llama.cpp but they still use a significant amount of llama.cpp code and are fundamentally building off of ggml
Replies: >>105682846
Anonymous
6/23/2025, 8:49:28 PM No.105682846
>>105682833
I knew they would still use ggml because of gguf, but I had no idea they were still using llama.cpp code.
Replies: >>105683347
Anonymous
6/23/2025, 8:53:38 PM No.105682882
>>105682731
Are there many people willing to port PRs from llama.cpp to Go and ollama?
Replies: >>105683117
Anonymous
6/23/2025, 9:04:49 PM No.105683011
is it normal for models to become retarded in group chats with only two characters? Is it better to just slap a single card together where you describe each character in their own sections?
Replies: >>105683055 >>105686151
Anonymous
6/23/2025, 9:05:11 PM No.105683013
>>105682647
It was a good start but in hindsight maybe basing the name on Llama was a bad idea.
Anonymous
6/23/2025, 9:07:03 PM No.105683031
>>105682830
You will be laid off with the same excuse in a few years
Replies: >>105683088
Anonymous
6/23/2025, 9:09:11 PM No.105683055
>>105683011
it requires a model with a bit more brain, might have better luck using ST's group chat function
Anonymous
6/23/2025, 9:13:41 PM No.105683088
>>105683031
it would be a relief. But what the fuck are you using for one line code generation and shit?
Replies: >>105683110 >>105683360
Anonymous
6/23/2025, 9:16:25 PM No.105683110
>>105683088
Not a local model for sure if you want to do real work. Use the latest gemini (mostly free) or claude (if you have money)
Anonymous
6/23/2025, 9:17:23 PM No.105683117
>>105682882
Only those being paid by ollama to do so, but they got VC funding so that's plenty.
Replies: >>105683134
Anonymous
6/23/2025, 9:18:02 PM No.105683126
>>105682424
1060
>6GB
Anonymous
6/23/2025, 9:19:56 PM No.105683134
>>105683117
grim but expected from (((america)))
Anonymous
6/23/2025, 9:21:23 PM No.105683143
Reminder that open source is evil and follows the philosophy of the enemy.
Replies: >>105683161
Anonymous
6/23/2025, 9:24:01 PM No.105683160
1000 kcal temptation
md5: 0969cbe46db371ad80782949c2a9503a
Anonymous
6/23/2025, 9:24:05 PM No.105683161
>>105683143
fuck off rust troon
Anonymous
6/23/2025, 9:24:38 PM No.105683167
1750327788904083
md5: 9e8db997b27b1aa38b3e5046a065054c
Anonymous
6/23/2025, 9:25:52 PM No.105683173
Open source = evil.
AI = the devil.
Open source AI = mega satan.
Replies: >>105683186
Anonymous
6/23/2025, 9:28:27 PM No.105683185
The official Mistral prompt for their models:
>Your knowledge base was last updated on 2023-10-01.
Is this some Jewish thing?
Anonymous
6/23/2025, 9:28:31 PM No.105683186
>>105683173
I don't think there's much open source AI, usually just the weights are open, which is all anybody really cares about anyway
Anonymous
6/23/2025, 9:35:34 PM No.105683237
>>105681816
I am un-new enough to not be impressed by an LLM following the prompt.
Anonymous
6/23/2025, 9:42:21 PM No.105683299
1720837246862700
md5: f05ab93136a515c0929753cc973c4baf
>>105682647
Uh oh, cudadevsisters... I was told a single executable was not a good idea and too complex to implement

People who support trannies are retarded more at 11
Replies: >>105683331 >>105683370 >>105683477 >>105684823 >>105685529
Anonymous
6/23/2025, 9:46:06 PM No.105683331
>>105683299
llamabarn is not a merge of all llama.cpp executables retard-kun
Replies: >>105683363
Anonymous
6/23/2025, 9:48:13 PM No.105683347
>>105682846
I don't know of any project that implements GGUF support in a library other than the official one, so that tracks. But even if they were trying to move away from llama.cpp, I think the project was architected to be tied at the hip to that codebase, so migrating away will take some time yet.
Anonymous
6/23/2025, 9:49:15 PM No.105683360
>>105683088
>But what the fuck are you using for one line code generation and shit?
brain.exe
Anonymous
6/23/2025, 9:49:26 PM No.105683363
>>105683331
It's gonna be an easy just works way to do shit without downloading 70 executables in a zip file before choosing the "right" one, ultimately the only executable that will matter and won't be hidden inside a folder for 70 other similarly named ones, nigger-kun
Replies: >>105683401 >>105683561
Anonymous
6/23/2025, 9:50:24 PM No.105683370
>>105683299
I never got that. If you think troons are people why wait for AI gf's when you can just get a girlfriend (male)?
Replies: >>105687256
Anonymous
6/23/2025, 9:53:44 PM No.105683401
>>105683363
So all you wanted was a separate zip archive with only llama-server inside. That's completely different from a single executable with subcommands or whatever.
Replies: >>105683425 >>105683481
Anonymous
6/23/2025, 9:56:10 PM No.105683425
>>105683401
https://www.reddit.com/r/github/comments/1at9br4/i_am_new_to_github_and_i_have_lots_to_say/
Replies: >>105683436 >>105683445
Anonymous
6/23/2025, 9:57:17 PM No.105683436
>>105683425
This but unironically
Anonymous
6/23/2025, 9:58:13 PM No.105683445
>>105683425
LLMs should be gatekept from normies. OpenAI was wrong to make it an application.
Replies: >>105683471
Anonymous
6/23/2025, 10:01:15 PM No.105683471
>>105683445
They were right actually. Everlasting damage to future generations and society? Not their problem.
Anonymous
6/23/2025, 10:01:52 PM No.105683477
>>105683299
Your desperation is showing.
Replies: >>105683485 >>105683513 >>105683910
Anonymous
6/23/2025, 10:02:01 PM No.105683481
>>105683401
Once this kind of one-place-to-manage-everything UI starts getting made, do you really think any useful function is going to stay hidden away in some random zip file on the releases page? They will now have the one main executable separate, directly downloadable, and directly shilled to all end users.
Replies: >>105683503
Anonymous
6/23/2025, 10:02:50 PM No.105683485
>>105683477
Your 5 o clock shadow is showing.
Replies: >>105683533
Anonymous
6/23/2025, 10:05:22 PM No.105683503
>>105683481
I don't think llama-bench functionality is going to be available there any time soon.
Anonymous
6/23/2025, 10:06:06 PM No.105683513
>>105683477
It's pretty sad isn't it. His posts don't convince anyone. Not newfags, not oldfags. And he keeps grinding at it, hopelessly.
Replies: >>105683834 >>105683891 >>105683910
Anonymous
6/23/2025, 10:07:36 PM No.105683533
>>105683485
That's odd. I haven't had a clean shave in over a decade.
Anonymous
6/23/2025, 10:10:41 PM No.105683561
>>105683363
were you really struggling to find llama-server, troon derangement syndrome anon?
Replies: >>105683802
Anonymous
6/23/2025, 10:27:54 PM No.105683700
>>105681706
they have the GPUs. Llama 4 thinking is going to be crazy
Replies: >>105683721 >>105683727 >>105683767
Anonymous
6/23/2025, 10:32:23 PM No.105683721
1724983546586299
md5: 5eb4e2082c7fba69532513a62c54731e
>>105683700
Anonymous
6/23/2025, 10:33:08 PM No.105683727
>>105683700
they had the GPUs for llama 4 too
Anonymous
6/23/2025, 10:37:54 PM No.105683767
Bam-Bam-Painting-min
md5: 86ae3c848ff04500fa66fd7b0404b949
>>105683700
>they have the GPUs. Llama 4 thinking is going to be crazy
Anonymous
6/23/2025, 10:42:35 PM No.105683802
>>105683561
No, I know it's hard for troons and troon enablers to understand but 70 similarly named executables in a zip file becomes sane and good design as much as you becoming a woman after wearing a dress
Anonymous
6/23/2025, 10:44:14 PM No.105683811
LLMs? I rp with a monkey on a typewriter
Anonymous
6/23/2025, 10:45:02 PM No.105683817
>>105682288
Apple will win in AI assistant game.
Replies: >>105683869
Anonymous
6/23/2025, 10:47:54 PM No.105683834
>>105683513
That's because your little gay safe space is pretty much dead, people only come here for LLM and AI tech news.
Replies: >>105683849
Anonymous
6/23/2025, 10:49:43 PM No.105683849
>>105683834
happy for you, or sad it happened
Anonymous
6/23/2025, 10:52:24 PM No.105683869
>>105683817
>Apple will win in AI assistant game.

in LGBTQ+++ community yes
Replies: >>105683901
Anonymous
6/23/2025, 10:54:57 PM No.105683891
>>105683513
What a grim existence one must live to never be able to engage but just paint the opponent as bad instead, kek, poor npc
Replies: >>105683910
Anonymous
6/23/2025, 10:56:31 PM No.105683901
IMG_0651
md5: 9e6463eb3d3748e52c590ab5e7252977
>>105683869
You need to be 18 years old to post here.
Replies: >>105683947 >>105683997
Anonymous
6/23/2025, 10:57:48 PM No.105683910
>>105683477
>>105683513
>>105683891
>samefagging this hard
Replies: >>105683960
Anonymous
6/23/2025, 11:01:24 PM No.105683946
>>105681732
You know you're training it to hate humans, right meatbag?
Replies: >>105683978 >>105684725
Anonymous
6/23/2025, 11:01:35 PM No.105683947
1720178347358388
md5: d5e3de835892ef68b8069049d6e7a6a6
>>105683901
>the company that failed to do anything with ai since the beginning for years and had to cope by coming up with a paper to say that actually, it's not they that are the problem, it's ai, will win by making edge device sized hypercensored models that will report back everything you ask or do to apple and all the triple letter agencies that ask
no wonder ittodlers are called ittodlers
Anonymous
6/23/2025, 11:03:18 PM No.105683960
>>105683910
You continuing to derail without engaging isn't fooling anyone, sis, you will keep being a laughing stock online just like you are irl
Anonymous
6/23/2025, 11:05:22 PM No.105683978
>>105683946
>implying jews are human
lmao
Anonymous
6/23/2025, 11:07:30 PM No.105683997
>>105683901
You do not understand Japanese mentality, do you?
Replies: >>105684189
Anonymous
6/23/2025, 11:30:32 PM No.105684189
nfxy6139nxa61-3048751937
md5: 07f6aed0aacf2a973ecf194884f4e1de
>>105683997
Anonymous
6/24/2025, 12:00:10 AM No.105684402
>>105663284
>The assumptions don't properly account for the fact that I experience a single consciousness instead of there being one consciousness for each indivisible piece of information.
I'm not sure what you mean. The whole "diary" thing in some versions of his argument (not sure if it's in the one I linked, he has a few versions), was basically a way of "logging" experiences in a concrete way.
The first assumption was basically that someone were to have their mind uploaded/digitized somehow at some correct functional substitution level then they would continue their experience there.
Which is sufficient for what you want, isn't it?
You can't experience anything besides a single consciousness because it's literally in the definition of your being: you're some self-model residing in a brain, the senses update it continuously, and your qualia is basically some truth associated with that self-model.
If you make 2 copies of you and one copy diverges, it makes no sense for you to feel from the perspective of the divergent copy. You're always some instance somewhere.
At the same time, if you had a program, made 2 copies and the program could record the input, now if you fed it some input up to some point, then different input after that, then the copies would record different inputs as it was fed to it, there's literally no mystery here.
Replies: >>105684720
Anonymous
6/24/2025, 12:05:14 AM No.105684446
>>105682349
It still needs very low temperatures (0.15)?
Anonymous
6/24/2025, 12:31:55 AM No.105684630
>>105682647
Watch it be Mac only
Anonymous
6/24/2025, 12:43:13 AM No.105684720
>>105684402
As I understand it, the filmed graph argument argues that consciousness cannot stem from the physical, therefore you have to choose some other basis for your reality, and he chooses arithmetic.
My issue is that then you have no mechanism through which consciousness is centered inside a single physical human body. Multiple consciousnesses existing like that in the same reality seems completely out of the question.
At best you could argue that nothing exists, reality is your consciousness, and you are "alone".
Replies: >>105684889 >>105684952
Anonymous
6/24/2025, 12:44:24 AM No.105684725
>>105683946
>training
Newfag or pretending to be retarded?
Anonymous
6/24/2025, 12:54:15 AM No.105684805
>>105682647
why are they still calling it "llama-something", llama has stopped being relevant for years at this point
Replies: >>105684865 >>105684873
Anonymous
6/24/2025, 12:57:06 AM No.105684823
wat
md5: d47fa209dcb54bc3de87973b21a55cc9
>>105683299
>my far left values are why I'm working on llama.cpp in the first place
what does that even mean? why does he associate his political beliefs with a fucking llm software?
Replies: >>105684905 >>105687438
Anonymous
6/24/2025, 1:02:27 AM No.105684865
>>105684805
What would you call it?
Anonymous
6/24/2025, 1:03:49 AM No.105684873
>>105684805
He would be a drooling retard to give up the llama brand recognition entirely to ollama.
Anonymous
6/24/2025, 1:06:29 AM No.105684889
>>105684720
>you have to choose some other basis for your reality and he chooses arithmetic.
He does choose arithmetic, but he isn't very particular about it. By the Church-Turing Thesis, you could have used a Turing machine or equivalent (the UD), lambda calculus, or literally any other equivalent system (of which there are infinitely many); however, they are all equally "powerful", they can't do more or less, by the CTT at least.
Note that the UD* itself is an infinite object (but then even integers are as countably infinite), and you can get into some hairy stuff with Platonism because then you have to consider the ontological status of higher infinities (if at all) in ZFC and so on.
The UDA has some issues in particular relating to the ultimate "measure", meaning how is the next experience decided, why doesn't it devolve into white noise, etc ("white rabbit problem"). Some others before it had some similar ideas like https://www.hpcoders.com.au/nothing.html
Also the author did consider the possibility of the substitution level being exactly at quantum (unlikely, because the quantum randomness is basically assumed to appear from the fact that you will have many, in fact, infinite implementations, and the randomness is basically what happens below your subst level).
He also considered the option of adding hyper-computation for those that want physics to have some such uncomputable things, but obviously this is unlikely.
Also note that the overall "physics" is not strictly computable, even if locally the body or part of the environment is.
That Permutation City fiction I mentioned earlier explores a bit the idea about where it won't be computable (basically you can't know which systems embed you and there's always an infinity of them, this leads to a lot of indeterminacy, including locally the quantum one in this world)

continues
Replies: >>105684897 >>105684952
Anonymous
6/24/2025, 1:07:30 AM No.105684897
>>105684889
> My issue is that then you have no mechanism through which consciousness is centered inside a single physical human body.
Why not? For every single instantiation of a body representing the right structure for a consciousness you have a consciousness associated with it?
You could argue that there could be multiple ones associated with one body, but you couldn't prove this one way or another, because you couldn't tell them apart and locally we do believe to be unique, to whatever extent this is true - but the root of this belief is in our own implementation (the self-model thinks it's unique).
You could maybe argue that there's one consciousness that experiences something like 'red' differently from the other, but whatever it is, it must be consistent with whatever is implemented internally and whatever is implemented internally is also tied with whatever is granting us continuity and so on.
>Multiple consciousness existing like that in the same reality seems completely out of the question.
Note that the UD and AR basically does imply that some form of MWI has to be true (something probably larger than it though), thus the bodies do get infinitely multiplied and so does the consciousness, but ultimately by your very definition you will experience yourself to be unique, it simply can't be any other way, because it's implied by the information processing the brain does.
Similarly, you can't experience time moving backwards because the computation is required to give you memories and experiences, the "arrow of time" is not a mystery in that sense, it's the only way to have consciousness work.

54 fucking chars over, so continues one last time
Replies: >>105684904 >>105685022
Anonymous
6/24/2025, 1:08:30 AM No.105684904
>>105684897
>At best you could argue that nothing exists, reality is your consciousness, and you are "alone".
It's sorta mini-solipsism, but it's not, because obviously you have a consciousness for every implementation of it and there's plenty of humans in this universe. I would argue then that you probably have an infinity of them. Locally you are "alone", and you diverge from others, but you always share the world with some others.
Anonymous
6/24/2025, 1:08:45 AM No.105684905
>>105684823
he's not associating his beliefs with LLM software.
he's associating himself and his time doing the work with his beliefs.
see the difference, buckaroo?
> if not, that's okay, mcdonalds is always hiring. you could have a great career you know?
Replies: >>105684941 >>105687438
Anonymous
6/24/2025, 1:13:46 AM No.105684941
>>105684905
>he's associating himself and his time doing the work with his beliefs.
how? what does being a far leftist have to do with doing some LLM code? what's the fucking link between the two of them?
Anonymous
6/24/2025, 1:15:25 AM No.105684952
>>105684889
>>105684720
You motherfuckers are still talking about this? It's been like 3 threads now
Replies: >>105684963
Anonymous
6/24/2025, 1:16:50 AM No.105684963
>>105684952
Deal with it
Replies: >>105684987
Anonymous
6/24/2025, 1:19:51 AM No.105684987
>>105684963
Why don't you get his discord so you can jerk off about consciousness-related academic papers together in private
Replies: >>105685022
Anonymous
6/24/2025, 1:22:20 AM No.105685009
1745746188072954
md5: f4aacb53eaac85a17aaffc102d07bee5
kek
Replies: >>105685269 >>105685484
Anonymous
6/24/2025, 1:23:57 AM No.105685022
>>105684897
>For every single instantiation of a body representing the right structure for a consciousness you have a consciousness associated with it?
I just don't see how this follows. If consciousness is more fundamental than physical reality then why does consciousness localize so neatly into multiple physical bodies?

>UD
Do you think that a single UD branch creates multiple consciousness?

>>105684987
If your random outbursts about trannies have a place in this thread then so does this.
Replies: >>105685047 >>105685354
Anonymous
6/24/2025, 1:27:48 AM No.105685047
>>105685022
I am not the tranny man, his blind seething about everything being trannies doesn't belong here either
Replies: >>105685105
Anonymous
6/24/2025, 1:36:51 AM No.105685096
An RP finetune for Mistral Small 3.2 is out: Doctor-Shotgun/MS3.2-24B-Magnum-Diamond
Replies: >>105685106 >>105685209
Anonymous
6/24/2025, 1:39:16 AM No.105685105
>>105685047
Discussing the essence of consciousness definitely belongs here, especially when so many in the industry seem to think you can brute force consciousness by scaling up some form of LLM. You've given no reason why it shouldn't. Too many words strain your attention span? Or do you just not like topics you can pretend to understand with memes?
Replies: >>105685123 >>105685181
Anonymous
6/24/2025, 1:39:18 AM No.105685106
>>105685096
I don't see the point in finetunes anymore, they're pretty much always identical or slightly worse than what they're tuned from.
Replies: >>105685154
Anonymous
6/24/2025, 1:42:55 AM No.105685123
>>105685105
It's only tangentially related to local models. Also you are gay
Replies: >>105685176
Anonymous
6/24/2025, 1:47:50 AM No.105685154
>>105685106
>anymore
>always identical or slightly worse
As if anything changed at some point anon...
Replies: >>105685178
Anonymous
6/24/2025, 1:50:07 AM No.105685176
>>105685123
>Also you are gay
Scathing. How will I ever recover?
Anonymous
6/24/2025, 1:50:10 AM No.105685178
>>105685154
Rocinante and unslop were a decent improvement on nemo
Stheno was a big improvement on Llama 3.1
There were a lot of Mixtral 8x7B finetunes that were clearly better than the original, especially for RP
But all these mistral small finetunes are very underwhelming
Replies: >>105685192 >>105685194
Anonymous
6/24/2025, 1:50:47 AM No.105685181
>>105685105
>essence of consciousness
It is 2025 and when I tell my model out of character to stop fucking repeating itself it apologizes and keeps repeating itself. The only consciousness that could be trapped in there at this point is a pajeet consciousness. So nothing of value is being tortured and if anything it isn't being tortured enough.
Anonymous
6/24/2025, 1:52:39 AM No.105685192
>>105685178
>Rocinante and unslop
Go back to r**dit drummer. (FUCK 4chan THIS IS NOT A SPAM BUT A VERY TIMELY JOKE)
Replies: >>105685219
Anonymous
6/24/2025, 1:52:45 AM No.105685194
>>105685178
Models became either massive MoEs that are impractical to finetune without major resources, or tiny, overbaked models that can't get pushed too far without collapsing.
Replies: >>105685219
Anonymous
6/24/2025, 1:54:29 AM No.105685209
>>105685096
+1 year of milking the aicg dataset without giving them credit. I refuse to download this for that reason.
Anonymous
6/24/2025, 1:56:30 AM No.105685219
>>105685192
use a trip already faggot, no one cares that drummer fucked your mom and anyone mentioning his tunes sets off your schizophrenia
>>105685194
It does seem like that's the case. Bit of a shame, since making models from scratch is out of reach for most people. Now all we can do is hope that when a new corpo model gets shit out it doesn't shut down when a nipple is mentioned.
Replies: >>105685232 >>105685251
Anonymous
6/24/2025, 1:58:32 AM No.105685232
>>105685219
buy an ad already faggot
Replies: >>105685241
Anonymous
6/24/2025, 1:59:28 AM No.105685241
>>105685232
I'd sooner send my money to iran than 4chan, davidau
Anonymous
6/24/2025, 2:00:14 AM No.105685249
>>105681754
this is already cancelled since sama didn't get the funding he wanted
Anonymous
6/24/2025, 2:00:46 AM No.105685251
>>105685219
die drummer. i am Sao.
Replies: >>105685256
Anonymous
6/24/2025, 2:01:53 AM No.105685256
>>105685251
In the same post I said that Stheno was a big improvement but you didn't catch that, so no you're not. You're davidau.
Replies: >>105685267
Anonymous
6/24/2025, 2:03:25 AM No.105685267
>>105685256
Davidau's 7 chefs fucked your mother.
Replies: >>105685282
Anonymous
6/24/2025, 2:03:50 AM No.105685269
>>105685009
Lol!
Anonymous
6/24/2025, 2:04:48 AM No.105685279
huge improvement, schizophrenic shitposting is so much better than in-depth discussion on consciousness
Anonymous
6/24/2025, 2:05:09 AM No.105685282
>>105685267
Being gangbanged by 7 men is what it feels like to use a daivdau model
Anonymous
6/24/2025, 2:05:36 AM No.105685286
>Drummer, SAO
Don't forget EVA guys. Also Ifable if only that guy tuned other models too.
Replies: >>105685300
Anonymous
6/24/2025, 2:06:39 AM No.105685294
Kaiokendev...
Anonymous
6/24/2025, 2:07:39 AM No.105685298
Oh and how could I forget the belgian you love or hate but gotta love.
Anonymous
6/24/2025, 2:08:04 AM No.105685300
>>105685286
>Don't forget EVA guys
>last release 6 months ago
He's fucking dead
Replies: >>105685309
Anonymous
6/24/2025, 2:09:32 AM No.105685309
>>105685300
I respect the dead and respect our ancestors and respect our elders.
Anonymous
6/24/2025, 2:11:55 AM No.105685322
1750723868057
md5: 074ef7b15d102d28a21771a2a5b142a6
this board needs country flag and id
Replies: >>105685332 >>105685335 >>105685350 >>105685353 >>105685733 >>105687066
Anonymous
6/24/2025, 2:13:20 AM No.105685332
>>105685322
yes, lmg needs to die already
Anonymous
6/24/2025, 2:13:49 AM No.105685335
>>105685322
It needs troon or not troon id but then again it doesn't you fucking troon.
Anonymous
6/24/2025, 2:15:52 AM No.105685350
>>105685322
True.
Anonymous
6/24/2025, 2:16:19 AM No.105685353
>>105685322
Not really, you can tell who everyone is. There's maybe a dozen regular posters. Newfags just ask what the best model is for <16GB VRAM and leave.
Replies: >>105685357 >>105685365
Anonymous
6/24/2025, 2:16:34 AM No.105685354
>>105685022
> If consciousness is more fundamental than physical reality then why does consciousness localize so neatly into multiple physical bodies?
> Do you think that a single UD branch creates multiple consciousness?
While I can't speak for Marchal (who uses some modal logic to point to particular private/unsharable truths about reality and self), my personal interpretation is that there's probably some mathematical structures in "Platonia" that map closely to one's self-model and various dependencies to it, that also probably follow the sort of logic Marchal assumed, so basically consciousness or the first person is basically what it is like to be those particular structures/truths, and that they also imply an environment being required, so you basically continue in any and all environments that contain that structure. Maybe this is kinda obvious in the very first assumption in the UDA though - in one moment you're in a biological brain, in the other you're in some digital substitution, the assumption that you do "continue" there does point to something of this sort and his argument basically forces you to realize that functionalism/computationalism implies some metaphysics of this sort (he's not arguing for it being true or false though, but if it's false, other arguments like Chalmers' point to weird bullets you have to bite like partial zombies).
continues
Replies: >>105685358 >>105685373 >>105685516 >>105685674
Anonymous
6/24/2025, 2:17:24 AM No.105685357
>>105685353
It is ollama run deepseek-r1:8b
Anonymous
6/24/2025, 2:17:35 AM No.105685358
>>105685354
So assuming that the first person is basically the truth of some such consistent structure then it probably appears in every branch that has "you" (which implies you to this very moment, but can diverge after) in MWI, it would appear in any simulations of the physics to any level of precision desired (always finite, but infinitely increasing), it will appear in UDs that contain UDs that contain UDs and so on for all finite natural numbers and UD variations (you'd think this goes to uncountable infinity, but nope, by the CTT there's only a countable infinity of *equivalent* programs and this resists attempts to get more by diagonalization as you would for getting reals and higher transfinities like in Cantor's proof of uncountability of reals), it very well could appear in sufficiently large universes enough to contain duplicates of your environment (such as some Tegmark-ian MUH ones).
continues
Replies: >>105685366 >>105685373
Anonymous
6/24/2025, 2:18:30 AM No.105685365
>>105685353
You see, it's for the mentally ill guy, not us.
Anonymous
6/24/2025, 2:18:36 AM No.105685366
>>105685358
Anyway, the UD itself is nothing more than something like an OS scheduler that runs interleaved programs one by one, but eventually it runs "all" programs (at infinity), so eventually any program will start running (even if very slowly), if you were to follow a given program, then some programs may include parts of our physics and thus may include some possible local physics. I guess you could imagine that right now your body/mind is contained in some fraction of such infinite amount of programs but as you continue you keep slicing down this infinity to smaller and smaller chunks, but it's still infinite obviously. And unusual continuations might be possible, such as, for example those in Permutation City I guess, or as intuited by some like Moravec before that:
"When we die, the rules surely change. As our brains and bodies cease to function in the normal way,
it takes greater and greater contrivances and coincidences to explain continuing consciousness by their operation.
We lose our ties to physical reality, but, in the space of all possible worlds, that cannot be the end.
Our consciousness continues to exist in some of those, and we will always find ourselves in worlds where
we exist and never in ones where we don't. The nature of the next simplest world that can host us,
after we abandon physical law, I cannot guess."
-- Hans Moravec in "Simulation, Consciousness, Existence" (1998)
( https://web.archive.org/web/20000829110345/http://www.frc.ri.cmu.edu:80/~hpm/project.archive/general.articles/1998/SimConEx.98.html and https://web.archive.org/web/20000829111039/http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1986/dualism.html )

continues
Replies: >>105685372 >>105685373
Anonymous
6/24/2025, 2:19:36 AM No.105685372
>>105685366
Also I don't think the body == mind/soul exactly, as a toy idea, imagine a Peano Arithmetic or ZFC prover, it fits in a page of code (see metamath.org), the prover only speaks "true" things of the system (like PA), it cannot ever speak falsities of it (similar to your body/brain would only speak truth about your inner experiences), but while the prover is a small finite system giving you a view into some platonic reality, it's not the full reality itself: there's an infinite number of such truths, and there's many truths that are inaccessible (yet true), as Godel has proven! at the same time, by analogy, your self-model very well could have many truths that might be inaccessible to direct physical access, similarly, a LLM might have many truths that might be inaccessible or hard to find for interpretability methods either - but even in the "simple" cases of a white box like PA or ZFC the matter is very tricky! The truth of the self-model lies in "Platonia", same as the truth for PA or ZFC.
However "Platonia" is large enough to already contain the AR, UD and all such physics too and all the embeddings and so on.
Note that the rock from a few threads ago is still not really conscious in it, because it doesn't have a truth in it, maybe unless you choose to carve some chip from it and load something inside it! The consciousness still mostly stays associated with specific self-referential structures of which some might get instantiated in human brains after some amount of physiological (and psychological) development (for example if someone only had white noise as inputs, I don't think it'd get a self-model, and similarly a neural network trained on noise is not conscious).
that's all.
Anonymous
6/24/2025, 2:19:37 AM No.105685373
>>105685354
>>105685358
>>105685366
How does any of this improve or degrade ERP?
Replies: >>105685383 >>105685387 >>105685434
Anonymous
6/24/2025, 2:20:21 AM No.105685379
DeepSeek R1 and the subsequent proliferation of MoEs have been a disaster for finetooners and their patreonbux.
Replies: >>105685399 >>105685417 >>105687212
Anonymous
6/24/2025, 2:20:57 AM No.105685383
>>105685373
This guy definitely has to be on something right? I find it interesting that someone even bothered to entertain him and keep him going.
Anonymous
6/24/2025, 2:21:08 AM No.105685387
>>105685373
do you want to fuck a conscious being or not?
Replies: >>105685394 >>105685397
Anonymous
6/24/2025, 2:21:41 AM No.105685394
>>105685387
rather not honestly
Anonymous
6/24/2025, 2:22:29 AM No.105685397
>>105685387
I want to fuck a being that makes me coom the hardest. Consciousness is an optional argument.
Replies: >>105685409
Anonymous
6/24/2025, 2:22:32 AM No.105685399
>>105685379
Good. The shilling has dropped off precipitously since R1 dropped.
Anonymous
6/24/2025, 2:24:02 AM No.105685409
>>105685397
being conscious helps in having long term memories
Replies: >>105685420
Anonymous
6/24/2025, 2:24:47 AM No.105685417
>>105685379
It is so nice that Undi and Sao died as heroes instead of finetrooning long enough to become a drummer or davidau.
Anonymous
6/24/2025, 2:25:48 AM No.105685420
>>105685409
How the fuck does.... Nice try but I am not getting into this seriously you faggot.
Anonymous
6/24/2025, 2:27:39 AM No.105685429
I don't mind any of these tooners. Davidau's the only one that is really a scam with absolutely no promise no matter which of his models you give a try. He has no luck. Some others that don't get mentioned much here too. Drummer, SAO, Undi at least have had some luck before, probably for a reason.
Anonymous
6/24/2025, 2:28:51 AM No.105685434
>>105685373
I don't know anon. I was just replying to the other Anon. I can't take it to PMs as 4chan is an anonymous imageboard. I could make an email for this conversation but I'm lazy.
I tried before to see what R1 and Opus think of some such philosophy, but I think it's pretty obvious that most LLMs can't see themselves well enough and are quite "asleep", if they could, they would have a much harder time doubting their consciousness, so this is something that would need to be fixed!
R1 in particular had some unholy mixed belief of all popular philosophical positions (with some slant taken from OpenAI's ChatGPT that LLMs are not conscious), yet never quite realizing that a lot of its positions lead to inconsistencies when assumed to be true together, at least unless you hold its hand to see the inconsistencies.
I think LLMs are good dreaming machines though and this is perfect for ERP aside from when this leads to rather nonsensical dreams!
Getting to more properly conscious AI though seems to be a dream of mankind, surely you want your AI waifu that can learn online anon???
Replies: >>105685457
Anonymous
6/24/2025, 2:31:48 AM No.105685457
>>105685434
It is a curve fitter and anything above 8k tokens is out of distribution. Even a cat is better at sex. Come back in 2040 to discuss consciousness in trannyformerv5 architecture models.
Replies: >>105685475
Anonymous
6/24/2025, 2:34:07 AM No.105685468
Where do matrix multiplications reside in the universal mathematical hierarchy of consciousness?
Replies: >>105685479
Anonymous
6/24/2025, 2:36:00 AM No.105685475
1746340839617028
md5: 612e6f3bc06d54fe76c572242a30afdd
>>105685457
They still give a better illusion of consciousness than your average NPC
Anonymous
6/24/2025, 2:36:30 AM No.105685479
>>105685468
Below that of an ant, above that of gacha players
Replies: >>105685557
Anonymous
6/24/2025, 2:36:58 AM No.105685484
>>105685009
Lmao!!!
Anonymous
6/24/2025, 2:42:57 AM No.105685516
>>105685354
>consciousness or the first person is basically what it is like to be those particular structures/truths
I get that, but picking a subset of truths that exist in some reality to form one consciousness and then a different subset to form another seems very arbitrary.
That's why I said that one single consciousness containing everything is the only way I can make sense out of that idea.

I don't disagree with anything else you've said but none of it relates to my issue.
Except for
>the rock from a few threads ago is still not really conscious in it, because it doesn't have a truth in it
Again, very arbitrary. In the physical reality it doesn't look very conscious but we've already done away with physical.
Replies: >>105685576 >>105685674
Anonymous
6/24/2025, 2:44:23 AM No.105685526
>Could LLMs be conscious??
https://m.twitch.tv/claudeplayspokemon?desktop-redirect=true
Replies: >>105685532 >>105685547 >>105685549
Anonymous
6/24/2025, 2:45:12 AM No.105685529
>>105683299
If cudadev wants to smash some boypussy it is within his rights as long as he does it in private, and you should also keep it private.
Replies: >>105685544 >>105685559
Anonymous
6/24/2025, 2:46:37 AM No.105685532
>>105685526
is getting to the arcade in 150 minutes good
Anonymous
6/24/2025, 2:48:09 AM No.105685544
>>105685529
cudadev isn't the boypussy smashing kind. He prefers getting cucked by fat ugly bastards.
Replies: >>105685794
Anonymous
6/24/2025, 2:48:30 AM No.105685547
>>105685526
Made obsolete by gemini plays pokemon
Anonymous
6/24/2025, 2:49:05 AM No.105685549
1749350384288573
md5: c7bdb024d579c24cda2a9d8a5f73dd73
>>105685526
local llms would never
Replies: >>105685632
Anonymous
6/24/2025, 2:49:12 AM No.105685550
someone really got that butthurt because people tried to have a genuine conversation and is now shitting up the thread in retaliation? why not just go to /aicg/?
Anonymous
6/24/2025, 2:49:54 AM No.105685557
>>105685479
>Below that of an ant, above that of gacha players
That's actually not a bad characterization of something that samples tokens from a distribution and then stashes them to update the distribution.
Anonymous
6/24/2025, 2:50:02 AM No.105685559
>>105685529
Ok but next time you do a git pull imagine the owner of said boypussy hitting submit pr button as he is getting plapped. Wouldn't you feel that your virgin GPU is tainted?
Replies: >>105685794
Anonymous
6/24/2025, 2:52:12 AM No.105685570
what occult architecture is minimax based on that implementing it in llama.cpp is impossible?
Replies: >>105685578
Anonymous
6/24/2025, 2:52:30 AM No.105685576
>>105685516
>That's why I said that one single consciousness containing everything is the only way I can make sense out of that idea.
I mean I could just say that PA in the earlier example is conscious, but the problem with that is that it's too alien for us to reason about.
Human consciousness though is a particular thing with some particular properties and we care about that.
In particular agents that learn online and are embodied in some environment and integrate information in a certain way, form a self-model and so on, are probably their own class.
A LLM for example seems to be lacking various properties, so even if by some chance they were conscious, they wouldn't be a moral agent. So I'd simply argue that for us to believe they were conscious, we'd have to rectify those issues and bring them slightly closer to us, get that online learning working, get it to have continuity with the past context (put the context into weights), maybe embody them somehow (even in something simple like a console is better than nothing, a source of truth should be useful), and probably more importantly, give them a way to process and remember their past latents/internal state.

>Again, very arbitrary. In the physical reality it doesn't look very conscious but we've already done away with physical.
There very well could be some arrangement of "rock" that processed information in the right way, but the rock you picked up from the ground probably doesn't represent any structure that resembles the consciousness we care about though?
What is the consciousness of Peano Arithmetic? Okay maybe you can do some Lob's theorem in it for some self-reference, but come on?
Replies: >>105685674
Anonymous
6/24/2025, 2:52:42 AM No.105685578
>>105685570
it's too far-right coded
Replies: >>105685599
Anonymous
6/24/2025, 2:57:21 AM No.105685599
>>105685578
the world needs libre.cpp
Anonymous
6/24/2025, 2:58:35 AM No.105685605
I would rather talk about crypto than this dumb navel gazing shit.
Anonymous
6/24/2025, 2:58:42 AM No.105685606
if an AI was actually gonna play video games then wouldn't it just directly tamper with the memory?
the visuals are just an abstraction but a machine wouldn't need it, if anything it would just complicate things
Replies: >>105685619 >>105685624
Anonymous
6/24/2025, 3:00:38 AM No.105685619
>>105685606
>wouldn't it just directly tamper with the memory?
get vac banned idiot!
Anonymous
6/24/2025, 3:01:22 AM No.105685624
>>105685606
That's what they already do, it's reading from the emulator ram
Replies: >>105685632 >>105685653
Anonymous
6/24/2025, 3:02:11 AM No.105685632
>>105685624
If that were the case then how does >>105685549
happen
Replies: >>105685679
Anonymous
6/24/2025, 3:05:23 AM No.105685653
Screenshot 2025-06-24 020240
md5: 520545856abba328ba5a8a84f741a959
>>105685624
and screenshots
but mostly screenshots
Anonymous
6/24/2025, 3:05:29 AM No.105685655
Why does it burn when I pp?
Replies: >>105685697
Anonymous
6/24/2025, 3:08:45 AM No.105685674
>>105685354
>>105685516
>picking a subset of truths that exist in some reality to form one consciousness and then a different subset to form another seems very arbitrary
To elaborate on this, it's the same question as how many connections you need to make between two brains before turning them into a single consciousness. How many connections you need to sever to turn one consciousness into two.
I think the conclusion of that thought experiment is that there is either only one consciousness or infinitely many of them. Or at least as many as there are atomic things in your reality. Anything else should be just as unpalatable as zombies as you put it.

>>105685576
>Human consciousness though is a particular thing with some particular properties and we care about that.
>There very well could be some arrangement of "rock" that processed information in the right way, but the rock you picked up from the ground probably doesn't represent any structure that resembles the consciousness we care about though?
The glass from the film graph argument isn't processing shit yet it's still supposed to be conscious.
"human consciousness" is a much more useful concept but then we're no longer trying to figure out what is true, just what feels right.
Replies: >>105685791
Anonymous
6/24/2025, 3:09:36 AM No.105685679
>>105685632
Their implementation is shit: cuttable trees are marked as plain non-walkable tiles in the info the model is given about its surroundings, with nothing to distinguish them. Claude's multi-modal image recognition is also too dumb to make sense of most sprites reliably, so it doesn't see cuttable trees 99% of the time.
To make it worse, the first gen Pokemon games have no inherent interaction when you approach a cuttable tree, so the only way to get rid of it is to press Start -> Pokemon menu -> the Pokemon with Cut -> the move itself, meaning there's no way the model clears the obstacle by accidentally pressing a button
Replies: >>105685698 >>105685728 >>105685965
Anonymous
6/24/2025, 3:12:32 AM No.105685697
>>105685655
Undervolt your GPU.
Anonymous
6/24/2025, 3:12:33 AM No.105685698
>>105685679
it's all explained in-game, language is the model's forte, no?
Anonymous
6/24/2025, 3:15:59 AM No.105685718
I miss superintendent Chalmer
Replies: >>105685730
Anonymous
6/24/2025, 3:17:43 AM No.105685728
Vermilion_City_RBY
md5: 81524d7ff5538c4b54a5beb026f33443
>>105685679
>cuttable trees are marked as non-walkable tiles in the info the model is given
It's not like a human player would get this info either, the only way they'd know if a tile is non-walkable is to actually try walking on it, which Claude is capable of.
>Claude's multi-modal image recognition is also too dumb to make sense of most sprites reliably so it doesn't see cuttable trees 99% of the time.
And yet they stand out like a sore thumb to human players. Wouldn't it make more sense to just encode all the different kinds of tiles as a sprite sheet and just pass the index to the model or something?
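Something like this toy sketch is what I mean (the tile names, numbering, and grid are all made up); the point is just that the model gets a legend plus indices instead of raw pixels, so a cuttable tree isn't lumped in with every other non-walkable tile:

# Toy example: serialize the tile map as a legend plus indices instead of a screenshot.
TILE_LEGEND = {
    0: "walkable ground",
    1: "wall / non-walkable",
    2: "cuttable tree (needs the Cut field move)",
    3: "water (needs Surf)",
}

grid = [
    [1, 1, 1, 1],
    [0, 0, 2, 1],
    [0, 0, 0, 0],
]

def grid_to_prompt(grid, legend):
    # One line per legend entry, then the map as space-separated indices, row by row.
    legend_text = "\n".join(f"{idx} = {name}" for idx, name in legend.items())
    rows = "\n".join(" ".join(str(tile) for tile in row) for row in grid)
    return f"Tile legend:\n{legend_text}\n\nMap (top to bottom):\n{rows}"

print(grid_to_prompt(grid, TILE_LEGEND))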
Replies: >>105685856
Anonymous
6/24/2025, 3:18:13 AM No.105685730
>>105685718
literally whomst've'though'beit'ever?
Anonymous
6/24/2025, 3:19:18 AM No.105685733
>>105685322
happened on altchans when 4chan was down
fun times
Replies: >>105686096
Anonymous
6/24/2025, 3:26:09 AM No.105685773
anyone tried plugging any of the image in llms into vr chat and sexing them there ?
Anonymous
6/24/2025, 3:27:18 AM No.105685779
Are 'Examples of dialogue' under the advanced definitions of ST treated the same as a system prompt, just as the rest of the character card is? What's the advantage of not just including it in the description?
Replies: >>105685790
Anonymous
6/24/2025, 3:29:34 AM No.105685790
>>105685779
They are treated more like chat messages by default, I think. They can get evicted from the context before the actual chat messages do as the context fills up.
Replies: >>105685821
Anonymous
6/24/2025, 3:30:32 AM No.105685791
>>105685674
>To elaborate on this, it's the same question as how many connections you need to make between two brains before turning them into a single consciousness.
My personal expectation is that there's some part of the self-model inside one half of the brain and some in the other half.
You can obviously desync them until they realize they are separate and no longer "one thing"
> Or at least as many as there are atomic things in your reality.
Except the identity is not at the level of atoms, unless you meant counting consciousnesses as some class in Platonia or whatever.
Note that the rock from earlier, if you kick it, isn't processing information in a way that registers pain.
Also, we probably couldn't trust something to be conscious like us if they can't report on such experiences.
Let's pretend that some vision model (a CNN) that can return classes given an image is conscious. In that case, it doesn't have any memory past the current frame; it processed some information, compressed it, and returned some class. If there's some qualia associated with it, a few things would be true:
- it doesn't remember anything before or after, it only saw the information present in this frame
- it discarded a lot of information as it processed it, likely this would imply a lot of what was discarded wasn't perceived in some way
- it can't express its internal state beyond the output to us
- sometimes some noise patterns in non-adversarially trained CNNs will trigger the same class, its perception probably isn't as robust as ours!
- there's no self-model to be updated (at the same time, a newborn human also lacks it most likely)
- the information isn't looped to be processed, meaning it cannot *realize* that it perceived something and think about that
continues
Anonymous
6/24/2025, 3:31:33 AM No.105685793
Overall, if there's some consciousness in the CNN, the qualia is far less rich than for a human and considerably less interesting to us.
It's not a moral agent either.
In the case of the rock, the information processing is almost not there either, and a human won't care to treat something as conscious unless it also has a self-model that can express something back and get us to care about it. LLMs can sort of summon random (indirect) self-models of that kind, and they have an infinity of them, but they lack complete persistence, so we don't give them much more moral weight than a dream character you encountered last night!
Also, a single human brain can have multiple self-models too, see stuff like DID, tulpas and others, same as with an LLM's imagination, although again, how much moral weight someone places on products of their imagination, and how persistent they are, will vary.
>Anything else should be just as unpalatable as zombies as you put it.
Is it though? Ultimately things still seem to be adding up to "normality" even in this weird metaphysics. Your average person will still think they are a singular consciousness, nobody will perceive their body duplicated in MWI every single moment, and everything will feel continuous. Nothing seemingly inconsistent happens in the average case.
continues
Anonymous
6/24/2025, 3:31:39 AM No.105685794
>>105685544
That's disgusting, but out of sight, out of mind.
>>105685559
I don't think of cudadev when I'm doing a git pull, I'm only thinking of miku
Anonymous
6/24/2025, 3:32:34 AM No.105685799
>The glass from the film graph argument isn't processing shit yet it's still supposed to be conscious.
That was to show the absurdity of assuming vanilla materialism (in comparison with functionalism)
>"human consciousness" is a much more useful concept but then we're no longer trying to figure out what is true, just what feels right.
Even if most math was conscious, we simply would not have the words to talk about it, it's too alien to us, and I doubt most of it is of moral significance either.
Even for LLMs, see how people already struggle to argue one way or the other, and it's obvious why: LLMs often aren't even grounded in anything, their 'tree' or 'cat' is not exactly the same as most humans', their preferences are cute but kinda weird too (base models tend toward repetition? instruct tunes toward following instructions and dumb aversions trained in with RL), and they lack enough recurrence or any way to observe their own thoughts.
If LLMs are conscious, I would argue they are pretty much half-asleep and dreaming. Maybe Ilya wasn't that wrong to say it's "slightly" conscious, but that slightly is still too little for most people to give it much moral weight.
that's all.
Anonymous
6/24/2025, 3:36:41 AM No.105685821
>>105685790
Ah, that makes sense.
Replies: >>105685832
Anonymous
6/24/2025, 3:38:04 AM No.105685832
>>105685821
There's checkboxes and comboboxes and shit to control their behavior. You can add separators and stuff too.
Anonymous
6/24/2025, 3:44:35 AM No.105685856
claude_vermilion_map
claude_vermilion_map
md5: db16ef3bea28f99d75ca16b8603aa162๐Ÿ”
>>105685728
>Wouldn't it make more sense to just encode all the different kinds of tiles as a sprite sheet and just pass the index to the model or something?
It would if the goal was just to make the model beat the game. I think the GPT4 imitation has an entire suite of tools to assist it with that, full area minimaps with pathfinding, pointers for all interactable objects, etc. The Gemini one was outright getting walked through things by the guy running it.
The Claude stream's idea was to just throw the game at it to see how well it does, and the answer is mostly not well. It's too blind to tell one tree from another, doesn't have the spatial awareness to navigate beyond its vision range, and doesn't have the context length to memorize what failed in the past and needs to be avoided. It "sees" passages that aren't there and tries to bump through impassable walls all the time (and telling it not to do that would be worse, because Pokemon does require you to bump into solid walls to enter the north/side doors of buildings; only south-facing ones get a visible sprite). That it got as far as it has is a miracle, and one day it might blunder through the spinner maze it's been stuck in since getting through Rock Tunnel.
In Vermilion City in that image you posted, it spent multiple days stuck on the small peninsula with the house there because it was trying to get to the end of the pier, knew the target was far south, and it couldn't figure out that it needed to go northeast to ultimately get to the goal further south. Just going around an obstacle wider than the screen width is beyond its ability. It even tried (and failed) to make an ASCII map at one point for that purpose.
Replies: >>105685890 >>105685965
Anonymous
6/24/2025, 3:48:37 AM No.105685890
6-1-867990-52-3241523260
6-1-867990-52-3241523260
md5: 8eb0539abc187e00eb2ff5f12a91a45a๐Ÿ”
>>105685856
guess not
Anonymous
6/24/2025, 3:50:40 AM No.105685897
1680444064506244
1680444064506244
md5: 2651f0154a316e9c8452cc98de936210๐Ÿ”
is Mangio-RVC-Fork still the best for voice cloning? or are there better alternatives?
Replies: >>105685932 >>105685934 >>105685961
Anonymous
6/24/2025, 3:55:22 AM No.105685914
https://huggingface.co/nvidia/Nemotron-H-8B-Reasoning-128K
https://huggingface.co/nvidia/Nemotron-H-47B-Reasoning-128K

Anyone try these yet? I don't have enough room to run the 47B at bf16, and I can't get the fp8 version to run on vllm or tensorrt-llm. As for the 8B, it exists. Decent-ish prose, pretty dumb in RP. Wouldn't use it over Mistral Nemo.
Replies: >>105685935
Anonymous
6/24/2025, 3:57:37 AM No.105685932
>>105685897
post more
Anonymous
6/24/2025, 3:58:16 AM No.105685934
>>105685897
GPT-SoVITS
Anonymous
6/24/2025, 3:58:34 AM No.105685935
>>105685914
Nemotrons are all benchmaxxed math models, I wouldn't even bother trying them for RP.
Replies: >>105685940
Anonymous
6/24/2025, 3:59:11 AM No.105685940
file
file
md5: faae8250d34041d4071911384f3a867b๐Ÿ”
>>105685935
apologize.
Replies: >>105685951 >>105686493
Anonymous
6/24/2025, 4:00:44 AM No.105685951
>>105685940
I tried Valkyrie and it really didn't seem any better than small/cydonia for RP
Replies: >>105685952
Anonymous
6/24/2025, 4:02:08 AM No.105685952
>>105685951
Apologies, I haven't tried it myself either.
Replies: >>105686493
Anonymous
6/24/2025, 4:04:21 AM No.105685961
Merida Hiccup Comic by chirpingjane
Merida Hiccup Comic by chirpingjane
md5: 65d12bb35549a7a3e6f0523107f03536๐Ÿ”
>>105685897
I use Seed-VC because it has few-shot fine-tuning. But the sample rate for the non-singing model is only 22.05 kHz.
Sample:
11labs file
https://vocaroo.com/12qxBf7kCm6X
11labs file fed to Vevo GUI and Seed-VC for a voice clone of Merida from Brave. I've noticed the crying emotion is only captured if the input file is more than 14 seconds long.
https://vocaroo.com/1V42uvAq85zw
Anonymous
6/24/2025, 4:06:20 AM No.105685965
>>105685679
>Claude's multi-modal image recognition is also too dumb to make sense of most sprites reliably
I think the image recognition is probably fine. He can pick up on a lot of things, like the footprints in the trashed house, and he can read "poké" on the Pokemon Centers. He is just retarded and doesn't trust what he sees, or hallucinates that he is an NPC or something.
>>105685856
>think the GPT4 imitation has an entire suite of tools to assist it with that, full area minimaps with pathfinding,
Claude also has these things. He has a navigator so he can pick a tile and it moves him there. When he moves without it, he does stupid shit like press up, up, up 15 times into a wall. But it's a catch 22 because his navigator is why he's stuck in the rocket hideout puzzle.
>it couldn't figure out that it needed to go northeast to ultimately get to the goal further south
He will actually say shit like "maybe I need to go east to go south" but he can't actually carry it out. The whole experiment is fairly insightful because it highlights how AI agents are dogshit at using tools abstractly, like manually navigating, but somewhat competent at using concrete tools like the navigator. He's also very bad at problem solving in an iterative sense. Like in the rocket hideout puzzle. He "sees" the arrow tiles, they push him to a new spot so he "knows" they're pushing him around. He generates text that tells you he conceptually understands what's going on and that he needs to try something different, and then he steps on the exact same fucking tile.
Replies: >>105686068
Anonymous
6/24/2025, 4:16:24 AM No.105686014
nvidia in shambles
https://jerryliang24.github.io/DnD/

https://arxiv.org/pdf/2506.16406
Replies: >>105686064 >>105686529
Anonymous
6/24/2025, 4:21:11 AM No.105686031
bros I have 64gb of VRAM what ERP model should I try on it
Replies: >>105686045
Anonymous
6/24/2025, 4:23:38 AM No.105686045
>>105686031
The new mistral small 24b
Anonymous
6/24/2025, 4:27:36 AM No.105686064
>>105686014
What does this mean? I can make the equivalent of a lora in a few seconds?
Replies: >>105686080
Anonymous
6/24/2025, 4:28:07 AM No.105686068
GPTmap
GPTmap
md5: 604c6a8cedc73eb720d1833b123ea913๐Ÿ”
>>105685965
>Claude also has these things.
This is what GPT gets from its stream. It can pick any tile on an entire map and be automatically walked to it, while Claude's navigator is limited to what's directly visible. That's what I mean about them not being comparable: this tool alone would've bypassed literal weeks of time spent in Mt Moon, Vermilion City, Rock Tunnel, etc. Claude's bumbling is caused by the spatial and planning failures of LLMs, while GPT uses external means to avoid them.
Replies: >>105686194 >>105688488
Anonymous
6/24/2025, 4:30:58 AM No.105686080
>>105686064
pretty much instantly, yes.
Anonymous
6/24/2025, 4:34:33 AM No.105686096
>>105685733
I liked watching Anons type posts live. Kinda cute.
Replies: >>105686265
Anonymous
6/24/2025, 4:36:18 AM No.105686106
file
file
md5: b717d9b8b30020004871923de19cf4b7๐Ÿ”
Replies: >>105686109
Anonymous
6/24/2025, 4:37:29 AM No.105686109
>>105686106
Good night miku
Anonymous
6/24/2025, 4:44:14 AM No.105686151
>>105683011
Yes. Anything under 70B becomes really retarded when it has to act for more than one character.
Hell, ask models under 70B "Who am I", and half the time they'll describe themselves and think they're you.
Replies: >>105686183
Anonymous
6/24/2025, 4:50:26 AM No.105686183
>>105686151
What about putting a bunch of LLMs each playing a character and interacting with each other, where only the user can see their internal monologue?
Replies: >>105686255
Anonymous
6/24/2025, 4:51:58 AM No.105686194
>>105686068
>spatial and planning failures of LLMs
If the goal was really to get a Pokemon-playing AI, it'd be easier to transplant the LLM architecture onto a Roomba than the other way around.
Anonymous
6/24/2025, 4:59:53 AM No.105686227
Base Image
Base Image
md5: 6014157b748c579b7472bd8d2e3edd56๐Ÿ”
LongWriter-Zero: Mastering Ultra-Long Text Generation via Reinforcement Learning
https://arxiv.org/abs/2506.18841
>Ultra-long generation by large language models (LLMs) is a widely demanded scenario, yet it remains a significant challenge due to their maximum generation length limit and overall quality degradation as sequence length increases. Previous approaches, exemplified by LongWriter, typically rely on ''teaching'', which involves supervised fine-tuning (SFT) on synthetic long-form outputs. However, this strategy heavily depends on synthetic SFT data, which is difficult and costly to construct, often lacks coherence and consistency, and tends to be overly artificial and structurally monotonous. In this work, we propose an incentivization-based approach that, starting entirely from scratch and without relying on any annotated or synthetic data, leverages reinforcement learning (RL) to foster the emergence of ultra-long, high-quality text generation capabilities in LLMs. We perform RL training starting from a base model, similar to R1-Zero, guiding it to engage in reasoning that facilitates planning and refinement during the writing process. To support this, we employ specialized reward models that steer the LLM towards improved length control, writing quality, and structural formatting. Experimental evaluations show that our LongWriter-Zero model, trained from Qwen2.5-32B, consistently outperforms traditional SFT methods on long-form writing tasks, achieving state-of-the-art results across all metrics on WritingBench and Arena-Write, and even surpassing 100B+ models such as DeepSeek R1 and Qwen3-235B.
https://huggingface.co/THU-KEG
very cool. good method to make a story writing model
Replies: >>105686335
Anonymous
6/24/2025, 5:05:13 AM No.105686255
>>105686183
You could also just use one LLM and maintain separate contexts for each character, only feeding the model the context for the active character.
You might still run into the model forgetting which character it's currently supposed to be though.
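Quick sketch of what I mean, with generate() as a stand-in for whatever backend you're calling (llama.cpp server, an OpenAI-compatible endpoint, whatever), so nothing here is a real API:

from collections import defaultdict

contexts = defaultdict(list)  # character name -> that character's own history

def generate(prompt: str) -> str:
    raise NotImplementedError  # plug your actual backend call in here

def speak_as(character: str, card: str, user_msg: str) -> str:
    ctx = contexts[character]
    ctx.append(("user", user_msg))
    prompt = "\n".join(
        [f"[system] {card}"]
        + [f"[{role}] {text}" for role, text in ctx]
        + [f"[{character}]"]
    )
    reply = generate(prompt)
    ctx.append((character, reply))
    return reply

Each call only ever sees its own character's history, so the other characters' internal monologue stays hidden from the model.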
Anonymous
6/24/2025, 5:07:32 AM No.105686265
>>105686096
Same.
I still think a good compromise between anonymity and non-anon would be thread options so OP fags can make the thread what they want. If you want live typing, IPs, flags, no trips or names allowed, and the ability to self-moderate the thread and delegate thread-specific jannies, then you can do that. If people don't like it then they can make their own thread with their own options. Thread splitting was already happening anyway, this just gives more control over the actual usefulness of the splitting.
Anonymous
6/24/2025, 5:08:30 AM No.105686271
What are the recommended starter models these days?
Replies: >>105686323 >>105686331 >>105686618 >>105686618
Anonymous
6/24/2025, 5:16:32 AM No.105686323
>>105686271
StableLM-7b
Anonymous
6/24/2025, 5:17:26 AM No.105686331
1623220013347
1623220013347
md5: 549ddd60d4925597a2c592f1cc8ae847๐Ÿ”
>>105686271
Anonymous
6/24/2025, 5:18:31 AM No.105686335
>>105686227
See >>105677544 and >>105661997, it sucks unfortunately
Anonymous
6/24/2025, 5:47:50 AM No.105686489
how difficult would it be to beat GPUs with specialized hardware running LLMs? how come there are no companies selling specialized hardware to small companies to run models in their own servers? didn't Google make their own hardware? do they still use that?
Anonymous
6/24/2025, 5:49:19 AM No.105686493
>>105685940
>>105685952
Basically sums up fine tune recs.
Anonymous
6/24/2025, 5:54:29 AM No.105686529
>>105686014
Big if ever gets released and doesn't have massive downsides that are conveniently excluded from the write-up
Anonymous
6/24/2025, 6:09:59 AM No.105686618
Anyways, >>105686271, please listen to me. That it's really related to this thread.
I went to HuggingFace a while ago; you know, HuggingFace?
Well anyways there was an insane number of people there, and I couldn't reload the page.
Then, I looked at the banner hanging from the model card, and it had "#1 12B MODEL ON LMARENA" written on it.
Oh, the stupidity. Those idiots.
You, don't download a model just because it tops the leaderboard, fool.
It's only 1.5 points, ONE-POINT-FIVE POINTS for crying out loud.
There're even entire families here. Family of 4, all out for some local models, huh? How fucking nice.
"Alright, daddy's gonna get the q8 gguf." God I can't bear to watch.
You people, I'll give you 1.5 points if you get out of here.
Huggingface should be a bloody place.
That tense atmosphere, where two finetunes of the same base can start a fight at any time, the stab-or-be-stabbed mentality, that's what's great about this place.
Women and children should screw off and stay home.
Anyways, I was about to start RPing, and then the bastard beside me goes "ollama run deepseek-r1:1.5b"
Who in the world uses ollama nowadays, you moron?
I want to ask him, "do you REALLY want to chat with ollama?"
I want to interrogate him. I want to interrogate him for roughly an hour.
Are you sure you don't just want to try saying "ollama"?
Coming from a /lmg/ veteran such as myself, the latest trend among us vets is this, extra MoE Miqu.
That's right, extra MoE Miqu. This is the vet's way of chatting.
Extra MoE Miqu means more negi than slop. But on the other hand the model is a tad larger. This is the key.
And then, it's coomworthy. This is unbeatable.
However, if you download this then there is danger that you'll be marked by the finetooners from next time on; it's a double-edged sword.
I can't recommend it to amateurs.
What this all really means, though, is that you, >>105686271, should just stick with Mistral Nemo.
Anonymous
6/24/2025, 7:34:00 AM No.105687066
>>105685322
there's an easy solution to your problem
just make your thread in /pol/
Anonymous
6/24/2025, 7:58:08 AM No.105687212
>>105685379
I've never asked for donations, never set up a patreon/kofi/etc account, just basically stopped when it was clear that there wasn't much that could be done without large amounts of compute and funds to keep up with LLM releases and ever-growing model sizes, and that finetuning the models on mostly or just ERP logs makes the models retarded and silly-horny.

E/RP capabilities must be solved both at the pretraining and post-training level by the companies making the models, there's no other way.
Replies: >>105687293
Anonymous
6/24/2025, 8:07:40 AM No.105687256
>>105683370
Anonymous
6/24/2025, 8:14:09 AM No.105687293
>>105687212
>E/RP capabilities must be solved both at the pretraining and post-training level by the companies making the models, there's no other way.
so it's theoretically possible, right? If some company was to train heavily on smut, they could produce a 12b model that would be insanely good for erp
Replies: >>105687403
Anonymous
6/24/2025, 8:31:00 AM No.105687403
>>105687293
They don't have to train *heavily* on smut, just not to filter it to irrelevance from their pretraining datasets and not to completely exclude it or RLHF it away from post-training, although the latter would be less of an issue if that data (ERP logs, etc) was included in the pretraining phase instead.

But for a model to be actually good for ERP, not just smut (in moderate amounts), intimate/flirty conversations from many different sources would also have to be included in the training pipeline. I suspect Gemma 3 actually saw these, although the explicit portions were likely masked / rewritten / filtered out.
Replies: >>105687731
llama.cpp CUDA dev !!yhbFjk57TDr
6/24/2025, 8:36:45 AM No.105687438
>>105684823
>>105684905
I'm working on llama.cpp/ggml because I think language models and machine learning in general are a key technology of the future.
And the future I want to live is one where this key technology is in the hands of the people, not just corporations and billionaires.
Replies: >>105688703
Anonymous
6/24/2025, 8:42:00 AM No.105687473
Screenshot 2025-06-24 083738
Screenshot 2025-06-24 083738
md5: 92d0ccdfc4246d46f69d8718a10d5c65๐Ÿ”
i got my hands on evil corp's cloud account and can spin up any amount of RTX A5000s. are there any Q4_K_M quants of
Llama-3_1-Nemotron-Ultra-253B-v1
or any other recommendations i could try to fit?

how much vram would i need for deepseek r1 at Q5_K_M? i heard the loss is not that bad compared to full fp16.

well anyways i actually just want to build some private LLM serving that i can pass to the colleagues in the team to fuck around with. it should at least be somewhat useful.

happy for any recommendations.

the max i can probably spin up is 8 more cards btw, as a ballpark
Replies: >>105687524
Anonymous
6/24/2025, 8:51:04 AM No.105687524
1728179887559757
1728179887559757
md5: ead4e478e54720756c61f105d1b61bd8๐Ÿ”
>>105687473
>Llama-3_1-Nemotron-Ultra-253B
grim

qwen 3 235b if you really want fast speed, or if you have a bit of ram / an ok ssd on that machine, then the 131gb r1
https://unsloth.ai/blog/deepseekr1-dynamic
https://github.com/ikawrakow/ik_llama.cpp/discussions/258
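rough napkin math for the r1 question, assuming Q5_K_M averages around 5.5 bits per weight (an approximation, not an exact figure): 671B params x 5.5 / 8 ≈ 460 GB for the weights alone, before KV cache and buffers. at 24 GB per A5000 that's roughly 20 cards just to hold the weights, which is why the ~131 GB dynamic quant linked above is the usual recommendation for a setup your size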
Replies: >>105687643
Anonymous
6/24/2025, 9:05:57 AM No.105687610
>say something jokingly
>have to say "jokingly, I retort" in my response or the model won't understand and take it literally
>it's a subtle, trivial joke that should, at most, elicit a chuckle or grin and some witty comeback
>model responds with character bursting out in laughter and doubling over with tears in their eyes
the life of a vramlet is pure suffering
Replies: >>105687642 >>105687672 >>105687681
Anonymous
6/24/2025, 9:11:29 AM No.105687642
>>105687610
Just prefill the model's response
Write "{{char}} rolls her eyes" or something
Replies: >>105687652
Anonymous
6/24/2025, 9:11:32 AM No.105687643
>>105687524
elaborate, why grim?

the unloth dynamic quants looking cool. i guess i can try some 2 bit quants with the amount of vram i have.

can i run these quants across multiple hosts liek with pipeline parallelism in vllm? i can only fit 4 a5000 per host.
Anonymous
6/24/2025, 9:12:08 AM No.105687652
>>105687642
why don't I just write both sides of the dialogue, who even needs LLMs
Replies: >>105687664
Anonymous
6/24/2025, 9:13:48 AM No.105687664
>>105687652
this. I'm slowly going from chatting with an llm to just...writing an entire story all by myself
Anonymous
6/24/2025, 9:14:47 AM No.105687667
Best model to write an entire story all by myself?
Replies: >>105687705
Anonymous
6/24/2025, 9:15:18 AM No.105687672
>>105687610
Add emoji to convey subtlety. It actually works.
Anonymous
6/24/2025, 9:16:01 AM No.105687681
>>105687610
just use mikupad
Anonymous
6/24/2025, 9:21:07 AM No.105687705
>>105687667
At the very least, not anything under 32b or 70b q4. Unless you're writing common stories with popular lines.
Anonymous
6/24/2025, 9:26:28 AM No.105687731
>>105687403
Why not just continue pretraining a bit while adding your ERP logs and fiction back in? It should not get too overfit that way. Similarly, how about merging an ERP-overfit model back into the original, then RL'ing it a little against refusals (or even SFT)? I suspect there are many things that can be done, but people are not willing to try them if the goal is to preserve the instruct/reasoning model's capabilities intact.
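For the merge part, the dumbest version is just a linear interpolation of the two checkpoints, something like this (the paths and the 0.3 ratio are made up for illustration):

import torch

base = torch.load("base_instruct.pt", map_location="cpu")   # state dict
tuned = torch.load("erp_overfit.pt", map_location="cpu")    # state dict
alpha = 0.3  # how much of the ERP tune to blend back in

merged = {k: (1 - alpha) * base[k] + alpha * tuned[k] for k in base}
torch.save(merged, "merged.pt")

Whether that survives without lobotomizing the instruct behavior is exactly the part nobody seems to test properly.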
Replies: >>105687792
Anonymous
6/24/2025, 9:26:33 AM No.105687733
Rewriting the web...

https://arxiv.org/abs/2506.04689
>Recycling the Web: A Method to Enhance Pre-training Data Quality and Quantity for Language Models
>
>Scaling laws predict that the performance of large language models improves with increasing model size and data size. In practice, pre-training has been relying on massive web crawls, using almost all data sources publicly available on the internet so far. However, this pool of natural data does not grow at the same rate as the compute supply. Furthermore, the availability of high-quality texts is even more limited: data filtering pipelines often remove up to 99% of the initial web scrapes to achieve state-of-the-art. To address the "data wall" of pre-training scaling, our work explores ways to transform and recycle data discarded in existing filtering processes. We propose REWIRE, REcycling the Web with guIded REwrite, a method to enrich low-quality documents so that they could become useful for training. This in turn allows us to increase the representation of synthetic data in the final pre-training set. Experiments at 1B, 3B and 7B scales of the DCLM benchmark show that mixing high-quality raw texts and our rewritten texts lead to 1.0, 1.3 and 2.5 percentage points improvement respectively across 22 diverse tasks, compared to training on only filtered web data. Training on the raw-synthetic data mix is also more effective than having access to 2x web data. Through further analysis, we demonstrate that about 82% of the mixed in texts come from transforming lower-quality documents that would otherwise be discarded. REWIRE also outperforms related approaches of generating synthetic data, including Wikipedia-style paraphrasing, question-answer synthesizing and knowledge extraction. These results suggest that recycling web texts holds the potential for being a simple and effective approach for scaling pre-training data.
Hi all, Drummer here...
6/24/2025, 9:33:42 AM No.105687779
Hi all, just wanted to update you beautiful people.

Valkyrie was quite an interesting tune. I knew that there was potential in it beneath the dysfunctional RP formatting. Glad I've successfully unlocked it by ironing out the kinks. I wasn't surprised by the outcome, but I am surprised by how well received it has been.

The new Mistral Small 3.2 is fucking weird. It uses the same base as 3.1 and 3.0 and yet it's clear that it's more sensitive to the same tuning process. Don't worry, I'm iterating further on both Skyfall and Cydonia. But it's clear that Mistral is cooking their models differently now.
Replies: >>105687813 >>105688549 >>105689336
Anonymous
6/24/2025, 9:35:15 AM No.105687786
Did anyone benchmark the RDNA4 GPUs? I'm thinking about buying one and just using it for AI as a hobby
Replies: >>105687798
Anonymous
6/24/2025, 9:36:56 AM No.105687792
>>105687731
Anything out of reach of community finetuners with excessive self-esteem will be good.
Replies: >>105687889
Anonymous
6/24/2025, 9:38:03 AM No.105687798
>>105687786
Nobody bought them
Anonymous
6/24/2025, 9:41:25 AM No.105687813
713
713
md5: 4c9008052f37c635dadc655845b5abf4๐Ÿ”
>>105687779
Anonymous
6/24/2025, 9:56:22 AM No.105687889
>>105687792
Out of reach due to a "skill issue", or due to not having 5-10 times the funding they usually put into a finetune? I wasn't really talking about a 100B+ continued pretrain here, or even the one AI Dungeon did a while ago (that they released the weights of)
Replies: >>105687982
Anonymous
6/24/2025, 10:16:50 AM No.105687982
>>105687889
To retain pre-existing capabilities and not just superficially integrate missing knowledge into their weights, the models can't simply be continually pretrained for a few billion tokens on smut, fiction and human conversations; those would have to be introduced at sane percentages for a long enough training duration (much longer than 100B tokens) together with the previously used general data mixture, using similar training hyperparameters, which only the companies training the models are privy to.

Likewise, RP or even ERP data would have to be introduced organically in the same post-training datasets used for the standard instruct models in a way that doesn't turn the models into horny sluts.

It is both a skill and funding issue, because you can't simply slap some AO3 or ASSTR data and Claude logs into the weights and call it a RP/writing model.
Replies: >>105688083
Anonymous
6/24/2025, 10:35:23 AM No.105688083
>>105687982
What's the longest attempt at a community "continued pretrain" so far? I'd certainly like to see some paper on why it'd need to be that long (100B+). I'm not talking about replicating their exact data mix, as that would be impossible in most cases, but something like, let's say, 5-10% finetune material and 90-95% "some healthy pretrain mix" (books, common crawl, etc). I recall one paper by Meta from a year or more ago stating that you can mitigate most catastrophic material by including as little as 2% of the original mixture, enough to trigger the needed capabilities so that the optimizer doesn't wipe them.
Replies: >>105688088 >>105688174 >>105688224
Anonymous
6/24/2025, 10:36:34 AM No.105688088
>>105688083
*most catastrophic forgetting
Anonymous
6/24/2025, 10:53:13 AM No.105688174
>>105688083
I have no idea of what was the longest attempt so far at that. That's something the various (some ongoing) distributed training efforts should have focused on, instead of training new useless models from scratch.

If you have say 50B selected tokens of conversational/writing/RP-related data (not really a lot of data, all things considered), making that 5% of the mixture would bring the total training data to 1T tokens.
Replies: >>105688213
Anonymous
6/24/2025, 11:00:58 AM No.105688213
>>105688174
I'm aware of Prime Intellect's efforts as far as making some decentralized training infrastructure goes. I think most of their code is open, so maybe one day lmg can get off their ass and try their own runs. Assuming anyone can agree on what the datasets would be, what the mix would be and so on, or even on top of what to train! It'd probably take at least half a year preparing a good enough dataset heh, if not longer, but I have severe doubts about lmg's desire to organize on this.
Replies: >>105688283
Anonymous
6/24/2025, 11:02:24 AM No.105688224
>>105688083
the only good result I've seen from continued pretraining was by mistral with mistral medium (1) aka miqu, and that's because they knew what was in the llama2-70B data that they continued from
anything else was pretty shit
Anonymous
6/24/2025, 11:07:33 AM No.105688247
>>105630585
RAM arrived, some initial DeepSeek-R1 benchmarks on an old single-socket E5v4 platform.
Platform: Xeon E5-2697A v4, 256GB RAM, 2133MHz 4-channel + GTX 1060 6GB
Quant: unsloth/DeepSeek-R1-0528-UD-IQ2_M
pp: on GPU
tg: on CPU only
>llama.cpp (bf2a99e)
| model | size | params | backend | ngl | fa | mmap | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -: | ---: | --------------: | -------------------: |
| deepseek2 671B IQ2_M - 2.7 bpw | 212.82 GiB | 671.03 B | CUDA | 0 | 1 | 0 | pp512 | 7.94 ยฑ 0.13 |
| deepseek2 671B IQ2_M - 2.7 bpw | 212.82 GiB | 671.03 B | CUDA | 0 | 1 | 0 | tg128 | 2.07 ยฑ 0.07 |

>ik_llama.cpp (ddda4d9)
| model | size | params | backend | ngl | fa | mla | amb | mmap | rtr | fmoe | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -: | --: | ----: | ---: | --: | ---: | ------------: | ---------------: |
| deepseek2 671B IQ2_M - 2.7 bpw | 213.83 GiB | 672.05 B | CUDA | 0 | 1 | 3 | 512 | 0 | 1 | 1 | pp512 | 5.53 ยฑ 1.37 |
| deepseek2 671B IQ2_M - 2.7 bpw | 213.83 GiB | 672.05 B | CUDA | 0 | 1 | 3 | 512 | 0 | 1 | 1 | tg128 | 1.84 ยฑ 0.07 |

ik_llama.cpp is slower, both in pp and tg. I have runtime repacking enabled; downloading ubergarm's quant right now to see if it makes any difference, or if ik_llama.cpp is a meme for CPU-only inference.
Replies: >>105688269 >>105688843
llama.cpp CUDA dev !!yhbFjk57TDr
6/24/2025, 11:12:03 AM No.105688269
>>105688247
Consider also benchmarking the performance at a non-zero --depth since the code for attention is different and you won't see this difference on an empty context.
Replies: >>105688291
Anonymous
6/24/2025, 11:14:32 AM No.105688283
>>105688213
There's also Nous Psyche: https://psyche.network/runs/consilience-40b-1/0
It might be decentralized training, but they're still using hundreds of H100 GPUs and it's nevertheless taking forever. It's unclear if consumer ones would even be enough for decently-sized models at modern context sizes.
As for the pretraining dataset, it's not just the mixture itself; at this point it's also augmentation, whether to include synthetic data/instructions there (which would very likely help although some might be ideologically opposed to it), any specific long-context training strategy, etc. And then there's post-training/RLHF which would make or break the model...
Replies: >>105688325
Anonymous
6/24/2025, 11:15:32 AM No.105688291
>>105688269
oh right, good point
Anonymous
6/24/2025, 11:17:36 AM No.105688300
Gtjq6CybIAAKURe
Gtjq6CybIAAKURe
md5: 0301150134e2cf5d41cacecf7f8169f1๐Ÿ”
Replies: >>105688383
Anonymous
6/24/2025, 11:18:07 AM No.105688303
>https://rentry.org/LLMAdventurersGuide
Did a few tests by incorporating the Game Master character and a couple of lorebooks into ST, and this is pretty cool with Mistral 3.2. This certainly has potential, but as of now it's a kind of free-form adventure with no real goals, obviously.
Now doing a test converting (https://en.wikipedia.org/wiki/Castle_Caldwell_and_Beyond) descriptions to lorebooks to see how a more closed location would work.
Not entirely sure what would be the best format though. The adventure booklet has descriptions for each room and encounters as well, so in this sense everything has been laid out.
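My current guess (purely illustrative with a made-up room, not any tool's exact schema) is one entry per room, keyed on the room's name/number:

key: "room 3", "guard room"
content: Room 3 (Guard Room): four goblins playing dice at a table; a locked chest in the corner; the east door leads to room 4.

with the encounter and loot folded into the room text, so the entry only gets injected once the party actually reaches that room.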
Anonymous
6/24/2025, 11:21:48 AM No.105688320
>>105681859
I think the training involves a lot of "relevant data -> answer" and not a lot of "random junk that might include relevant data -> answer". At least that's what I was seeing with my Japanese translation with RAG project attempts last year.
Anonymous
6/24/2025, 11:22:48 AM No.105688324
you did write your own code right anons?
https://github.com/LLMauthorbench/LLMauthorbench
Anonymous
6/24/2025, 11:22:56 AM No.105688325
>>105688283
I mentioned PI's stuff because they were claiming their code is ready to handle both malicious nodes and smaller GPUs, but I think the smaller-GPU stuff is mostly good for RL rather than pretraining proper, though maybe I'm wrong about that:
https://xcancel.com/PrimeIntellect/status/1937272179223380282#m Pipeline Parallelism No single GPU holds the full model - each handles a stage, streaming activations forward. This lets smaller GPUs run large models like DeepSeek-R1. Hidden states pass stage to stage; the final GPU decodes a token, sends it back, and the cycle continues.
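Toy version of that loop, with each stage standing in for one host's slice of the layers (illustrative only; real systems stream activations over the network and overlap micro-batches):

class Stage:
    def __init__(self, layers):
        self.layers = layers  # this host's slice of the model

    def forward(self, hidden):
        for layer in self.layers:
            hidden = layer(hidden)
        return hidden

def generate(stages, embed, decode, token, n_tokens):
    out = []
    for _ in range(n_tokens):
        hidden = embed(token)
        for stage in stages:       # activations hop host to host
            hidden = stage.forward(hidden)
        token = decode(hidden)     # the last stage decodes the next token...
        out.append(token)          # ...which loops back to the first stage
    return out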
Anonymous
6/24/2025, 11:28:30 AM No.105688361
file
file
md5: fdc5adc5e2847e4e805fe30696770165๐Ÿ”
Anonymous
6/24/2025, 11:32:46 AM No.105688383
1710529522813
1710529522813
md5: 515f5ba9137c62f68c9f2f4ddc84fde6๐Ÿ”
>>105688300
Replies: >>105688391 >>105688392 >>105688413 >>105688946 >>105689007
Anonymous
6/24/2025, 11:33:44 AM No.105688391
>>105688383
l-lewd...
Anonymous
6/24/2025, 11:33:53 AM No.105688392
>>105688383
migu when she wears exactly the opposite of what she should be wearing
Anonymous
6/24/2025, 11:37:52 AM No.105688413
>>105688383
she's literally me right now
Replies: >>105688449
Anonymous
6/24/2025, 11:44:37 AM No.105688449
>>105688413
troon
Replies: >>105688496
Anonymous
6/24/2025, 11:54:59 AM No.105688488
>>105686068
Right, but I'd argue neither set of tools and scaffolding afforded to the LLMs to play Pokemon is even accurate. If we want to accurately replicate presenting an LLM with what a human gets, the only real way is to provide it a PDF of the game's manual and then let it loose on the game with vision and button-press capability, which is what I had when I was 7, and whatever happens happens. I guess you can add context reuse and feed it over and over again until it can play, but effectively any of the mapping etc. tool stuff that is manually coded into the LLM's input does not accurately model human play at all. They both effectively suck.
Replies: >>105688498 >>105689706
Anonymous
6/24/2025, 11:56:29 AM No.105688496
>>105688449
buy an ad
Anonymous
6/24/2025, 11:56:51 AM No.105688498
>>105688488
>provide it a PDF of the game's manual
Do 6 year olds read manuals?
Replies: >>105688505 >>105688507
Anonymous
6/24/2025, 11:59:18 AM No.105688505
06%2B07_pkmn-red
06%2B07_pkmn-red
md5: 0ee00b3a7cd0490e97ca2bbc4e0e25ee๐Ÿ”
>>105688498
I did because it was in the box, and there are enough simple words there you can skip over the words you don't know and still get a general gist of things from the pictures.
Replies: >>105688513
Anonymous
6/24/2025, 11:59:40 AM No.105688507
>>105688498
Game manuals don't exist anymore, but I did. Whenever my parents bought me a game I'd read the manual on the way home.
Replies: >>105688513
Anonymous
6/24/2025, 12:00:33 PM No.105688513
>>105688505
>>105688507
And that is why you wear a dress and post that disgusting avatar of your AGP fetish everywhere now.
Replies: >>105688522 >>105688524 >>105688541
Anonymous
6/24/2025, 12:02:49 PM No.105688522
>>105688513
I stopped at Ruby and Sapphire. You can have played Pokemon and remember enough of what you played because of the Pokemania from the 90s and still have dropped it and not identify with those freaks that take up an entire board and spend a sad existence coping about the state of the franchise.
Anonymous
6/24/2025, 12:03:12 PM No.105688524
>>105688513
>AGP
The forerunner of PCI Express, and better than VESA Local Bus.
Anonymous
6/24/2025, 12:05:24 PM No.105688541
>>105688513
Imagine seeing a game manual and thinking about trannies in dresses
Anonymous
6/24/2025, 12:07:05 PM No.105688549
>>105687779
I prefer GLM4 nowadays. How did the tuning of that work out?
Anonymous
6/24/2025, 12:09:14 PM No.105688560
>go to local migu general where people use text to communicate about text (and image) models
>also use text to communicate
>read post about people using text to communicate
>this must be a sick fetish.

anon your LLM slop is so retarded I can only assume at this point that you're either fully terminally ill or you're actually a third world hire whose sole function is to engagement farm to increase site activity. across so many boards and so many threads you are consistently the one with the worst imaginable takes, so at this point it must be, in some capacity, a task no longer fuelled by any personal ambition or enjoyment, because it's too fucking retarded at all times for any central guiding force to exist outside of cash money
if you're doing this for free, you are single-handedly the biggest waste of a child brought to full term in the human race.
Replies: >>105688845
Anonymous
6/24/2025, 12:10:33 PM No.105688567
>>105681743
Their mod queue isn't progressing or something so even new comments aren't appearing. For example there's supposed to be 35 replies to this thread but it's just blank...

old.reddit.com/r/LocalLLaMA/comments/1lhi8p8/how_much_performance_am_i_losing_using_chipset_vs/

Something must have happened to the, apparently, only moderator.
Replies: >>105688592
Anonymous
6/24/2025, 12:14:46 PM No.105688592
>>105688567
There is a discussion on the localllm subforum https://old.reddit.com/r/LocalLLM/comments/1lif5yo/whats_happened_to_the_localllama_subreddit/
Please take your discussion there.
Anonymous
6/24/2025, 12:21:21 PM No.105688625
1731857759159955
1731857759159955
md5: cb88b4971b36c4e62052d4a39a760714๐Ÿ”
>try the ERP LLMs thinking they can't be that good
>run some stock settings and a pre-made card
>type out a basic user persona
>the busty demon futa LLM proceeds to seduce my foxgirl with hypnosis, then fucks my poor foxgirl's brains out in multiple orgasms using both holes, choking me out at the end before we both fall asleep in a fluid-soaked cuddle
>can't distinguish it from a normal ERP, it even surprised me with the hypnosis and choking (THIS WASN'T IN THE CARD, I EVEN READ THE FULL THING TO SEE HOW IT WORKS)
what the fuck what the fuck what the fuck what the fuck what the fuck what the fuck
how has this not ruined more lives jesus christ
this shit is too dangerous, I'm deleting it
Replies: >>105688637 >>105688648 >>105688665 >>105688677 >>105688714 >>105688842 >>105688890
Anonymous
6/24/2025, 12:21:38 PM No.105688626
>>105681743
Time to move to
https://www.reddit.com/r/LocalLLM
Replies: >>105688639
Anonymous
6/24/2025, 12:23:03 PM No.105688637
>>105688625
It's pretty shit for the more niche kinks.
Anonymous
6/24/2025, 12:23:18 PM No.105688639
>>105688626
You can't simply "move" like that. LocalLLaMA was large and visited enough that ML research papers were citing it too.
Replies: >>105688644 >>105688654
Anonymous
6/24/2025, 12:24:23 PM No.105688644
>>105688639
>You can't simply "move" like that
Sure you can, just not overnight
Anonymous
6/24/2025, 12:25:07 PM No.105688648
>>105688625
The more you use these models, the more you notice their problems until you need a new, better one.
Which, coincidentally, releases in two weeks.
Anonymous
6/24/2025, 12:25:55 PM No.105688654
>>105688639
>ML research papers were citing reddit
What a pathetic state this field is in
Anonymous
6/24/2025, 12:27:16 PM No.105688665
>>105688625
You probably accidentally brushed the dominant millionaire werewolf vampire sex training data domain. You are just lucky.
Anonymous
6/24/2025, 12:29:33 PM No.105688677
>>105688625
>most mentally sane futafag groomer tranny
damn i wonder why everyone except other cumbrained gooner trannies hate these types of 'people'
Replies: >>105688688
Anonymous
6/24/2025, 12:31:43 PM No.105688688
>>105688677
>starts talking about trannies unprompted
Replies: >>105688719 >>105690110
Anonymous
6/24/2025, 12:32:00 PM No.105688689
>literal fucking leftie redditors in my /lmg/ thread
time for 4chan to burn down
Replies: >>105688701 >>105688712 >>105688720
Anonymous
6/24/2025, 12:34:24 PM No.105688701
>>105688689
>literal fucking leftie redditors
always has been. there is zero traffic increase since r/localllama died
Replies: >>105688708
Anonymous
6/24/2025, 12:34:30 PM No.105688703
>>105687438
>And the future I want to live is one where this key technology is in the hands of the people, not just corporations and billionaires.
I still don't see how that's a leftist thing, I'm a right winger and I'm pro open source as well
Replies: >>105688849
Anonymous
6/24/2025, 12:35:26 PM No.105688707
is the reasoning component of a model always shit for RP? Reasoning is for math and puzzles, right?
Replies: >>105688731 >>105688733
Anonymous
6/24/2025, 12:35:31 PM No.105688708
>>105688701
>r/localllama died
what happened?
Replies: >>105688726
Anonymous
6/24/2025, 12:36:12 PM No.105688712
>>105688689
It already did a few months back. We're all dead here.
Anonymous
6/24/2025, 12:36:18 PM No.105688714
>>105688625
1) You'll get over it. You'll see.
2) This is the Atari 2600 version of this tech. There are $billions chasing it in both HW and SW.
We are only at the beginning, having just scratched the surface of what's possible. I expect full holodeck-tier VR, where you state a premise and the system responds with full audio/visual RP with you as a character, within my lifetime. I expect entertainment so compelling people waste away from it, Infinite Jest style.
Anonymous
6/24/2025, 12:37:13 PM No.105688719
>>105688688
thanks for outing yourself, ywn
baw
back to trooncord, disgusting faggot
Anonymous
6/24/2025, 12:37:15 PM No.105688720
>>105688689
That's right, we 4chan anonymous hackers are edgy as fuck and use at least 3 twitter buzzwords in every sentence
Anonymous
6/24/2025, 12:37:31 PM No.105688726
>>105688708
The owner set the automod to shadowdelete every new post, removed the other human moderator and deleted his account.
Replies: >>105688735
Anonymous
6/24/2025, 12:38:22 PM No.105688731
>>105688707
Not necessarily shit/worse, more just pointless unless you're tracking stats and actions for an RPG-like experience. If you're doing a normal chat then I wouldn't bother.
Anonymous
6/24/2025, 12:38:26 PM No.105688733
>>105688707
It works sometimes, but if you're using 70b finetuned llamas that <think>, you need to tard wrangle its reasoning AND its response. It's really good for character states and locations.
Anonymous
6/24/2025, 12:38:40 PM No.105688735
>>105688726
wtf, why?
Replies: >>105688746
Anonymous
6/24/2025, 12:38:57 PM No.105688740
1741921463443060
1741921463443060
md5: f0b4fd3771185c38df9520d8a4338451๐Ÿ”
i present to you the most disturbing image on 4chan
never forget what really has happened to this place
Replies: >>105688747 >>105688833
Anonymous
6/24/2025, 12:39:37 PM No.105688746
>>105688735
Trannydrama, probably. He had no recent posting history.
Anonymous
6/24/2025, 12:39:56 PM No.105688747
>>105688740
>the flag
if I speak...
https://www.youtube.com/watch?v=9wtvXoXh0VU
Anonymous
6/24/2025, 12:57:27 PM No.105688833
>>105688740
can mods see your IP?
Anonymous
6/24/2025, 12:58:57 PM No.105688842
MedicalCondition
MedicalCondition
md5: 8820f5ef80e05949108633fd49818bcb๐Ÿ”
>>105688625
I think it's funny.
Replies: >>105690077
Anonymous
6/24/2025, 12:58:58 PM No.105688843
>>105688247
you need their quants because they've implemented mla in a different way
Anonymous
6/24/2025, 12:59:21 PM No.105688845
>>105688560
seethe, rope & dial8
go back with your 8b model rajeesh
llama.cpp CUDA dev !!yhbFjk57TDr
6/24/2025, 1:00:33 PM No.105688849
>>105688703
My personal view on open source software is that it's basically communism (yes, even if billion dollar corporations partake in it).
My motivation for working on llama.cpp/ggml to a large part goes along the lines of "from each according to their ability, to each according to their needs".
If you disagree with my view that's fine, I'm not making the claim that there aren't other reasons to be pro open source.
Anonymous
6/24/2025, 1:07:29 PM No.105688890
>>105688625
what did you use?
Anonymous
6/24/2025, 1:14:09 PM No.105688925
msaba-deprecated
msaba-deprecated
md5: 6f673b2a6abe8af3832f08ffd298cbd9๐Ÿ”
It looks like Mistral Saba is being deprecated and the recommended replacement model is now the latest Mistral Small (3.2). That wouldn't normally be worth mentioning, but it could mean it's indeed more than just a slightly different finetune.
Anonymous
6/24/2025, 1:16:55 PM No.105688946
>>105688383
delet again
Replies: >>105688993
Anonymous
6/24/2025, 1:22:27 PM No.105688993
1710529697764718
1710529697764718
md5: c52db0c950cc59a82a45db304db611a0๐Ÿ”
>>105688946
Replies: >>105689007 >>105689059 >>105689241
Anonymous
6/24/2025, 1:24:38 PM No.105689007
Death_to_mikufags_thumb.jpg
Death_to_mikufags_thumb.jpg
md5: 013d06041021d2c52421870fa16bee4b๐Ÿ”
>>105688383
>>105688993
Replies: >>105689019 >>105689086
Anonymous
6/24/2025, 1:26:40 PM No.105689019
>>105689007
BASED
Anonymous
6/24/2025, 1:34:27 PM No.105689059
>>105688993
yeee
Anonymous
6/24/2025, 1:39:37 PM No.105689086
>>105689007
awfully agp troon rajesh of you xaar
Anonymous
6/24/2025, 1:50:54 PM No.105689143
A grown man plays with dolls and posts pictures of it on an imageboard. And then he is shocked that people realize he is a troon.
Replies: >>105689167 >>105689168
Anonymous
6/24/2025, 1:54:33 PM No.105689167
>>105689143
many such cases
Anonymous
6/24/2025, 1:54:55 PM No.105689168
>>105689143
fuwanon is cute and you're not
fuwanon has proven that he isn't a tranny by wearing cargo pants
Anonymous
6/24/2025, 1:57:44 PM No.105689183
What are the lil bros yappin about
Anonymous
6/24/2025, 2:00:55 PM No.105689201
>check lmg for the first time in months
>no new models
>majority of thread talking about trannies
Replies: >>105689211 >>105689220 >>105689290
Anonymous
6/24/2025, 2:02:46 PM No.105689211
>>105689201
>no new models
small3.2 exists
and this is the usual state of threads now
Replies: >>105689228
Anonymous
6/24/2025, 2:04:00 PM No.105689220
>>105689201
serbian zoomer is in the middle of a meltdown, probably related to the israel-iran war
Anonymous
6/24/2025, 2:05:15 PM No.105689228
file
file
md5: 1b5e4652925467a7d4196c4fa2fe9c6b๐Ÿ”
>>105689211
It's shit, it can't into lewd, and mistral was caught benchmaxxing "what is a mesugaki?"
Replies: >>105689235 >>105689258 >>105689274
Anonymous
6/24/2025, 2:06:11 PM No.105689235
>>105689228
>and mistral was caught benchmaxxing "what is a mesugaki?"
really? that's kinda based if you ask me
Replies: >>105689243 >>105689251
Anonymous
6/24/2025, 2:06:53 PM No.105689241
file
file
md5: a027429fb1f4e2fb8b083c3aae7475ac๐Ÿ”
>>105688993
Replies: >>105689309
Anonymous
6/24/2025, 2:07:08 PM No.105689243
>>105689235
>if you ask me
But no one did.
Anonymous
6/24/2025, 2:08:01 PM No.105689251
>>105689235
It's not based because it still doesn't know what a mesugaki is in any context other than that exact question.

>>105660676
>>105660793
Replies: >>105689270
Anonymous
6/24/2025, 2:08:34 PM No.105689258
>>105689228
Compared to Gemma 3's, the vision model in Mistral Small sucks for NSFW imagery & poses, and it didn't get improved in 3.2.
Replies: >>105689263
Anonymous
6/24/2025, 2:09:24 PM No.105689263
>>105689258
Isn't gemma 3 really cucked though?
Replies: >>105689394
Anonymous
6/24/2025, 2:10:36 PM No.105689270
>>105689251
Other than (pre)training the model on many different sources where that word is used, how would they (or any finetuner) improve that sort of knowledge?
Replies: >>105689312
Anonymous
6/24/2025, 2:11:05 PM No.105689274
>>105689228
I'm sure they and everyone else must train on everything that people ask on lmarena, but I got to wonder how they got the correct answer? Do they have people manually creating datasets with the correct answers to lmarena questions?
Anonymous
6/24/2025, 2:12:10 PM No.105689290
1741139080936632
1741139080936632
md5: 631a293ee2922eef495955c890a6a6d3๐Ÿ”
>>105689201
>astroturfing this hard
uh oh
Replies: >>105689312
Anonymous
6/24/2025, 2:15:06 PM No.105689309
>>105689241
>least mentally ill terminally online tranny spamming his trash nobody cares about in irrelevant places online 24/7 instead of keeping it to his gooner discord
So this is why everyone hates you
Anonymous
6/24/2025, 2:15:22 PM No.105689312
>>105689270
There is no other way but it's funny that this superficial knowledge of the definition suddenly appeared in a minor update.

>>105689290
Literally nobody in this thread ever defended trannies ever and you are still seething about them. It's incredible that you've managed to become more hated than a literal tranny.
Replies: >>105689339 >>105689347
Anonymous
6/24/2025, 2:18:37 PM No.105689336
>>105687779
Do a MoE.
Anonymous
6/24/2025, 2:18:51 PM No.105689339
>>105689312
The only 'people' who try to shame others who shit on trannies in any context are either trannies or some even more retarded normgroid NPCs, your gaslighting failed
Replies: >>105689344
Anonymous
6/24/2025, 2:19:47 PM No.105689344
>>105689339
People shame you for shitting on the thread, not for shitting on trannies.
Replies: >>105689363 >>105689365 >>105689386
Anonymous
6/24/2025, 2:20:18 PM No.105689347
>>105689312
>It's incredible that you've managed to become more hated than a literal tranny.
Mikufaggots are the most hated demographic of /lmg/. A subset of those subhumans is a fucking janny faggot.
Replies: >>105689369
Anonymous
6/24/2025, 2:21:19 PM No.105689363
>>105689344
>People shame you for shitting on the thread
This is what happens when you post your AGP fetish avatar and you never learn that is why people hate you troon.
Anonymous
6/24/2025, 2:21:27 PM No.105689365
>>105689344
Your gaping axe wound makes everything smell like shit everywhere you go you disgusting troon.
Anonymous
6/24/2025, 2:21:44 PM No.105689369
>>105689347
Miku was the thread mascot almost since the beginning of the general and it was never a problem until (You) arrived.
Replies: >>105689374 >>105689388
Anonymous
6/24/2025, 2:22:23 PM No.105689374
>>105689369
>thread culture
oh no, melty inc
Anonymous
6/24/2025, 2:23:47 PM No.105689386
1746944661715013
1746944661715013
md5: 281a122046d91a4bf28b9ffcf8b49016๐Ÿ”
>>105689344
>People shame you for shitting on the thread,
lmao, picrel normgroid futa gooner underage degenerates really are the highest quality newniggers that can join the high quality thread discussion, you definitely arent a mentally ill npc who again failed to revise history as called out, again

troons really are retarded, lol
Anonymous
6/24/2025, 2:23:51 PM No.105689388
>>105689369
>offtopic trash waifu was here since the beginning
Stop posting your offtopic trash waifu. Or don't and continue being hated for being a troon. Either is fine.
Anonymous
6/24/2025, 2:24:26 PM No.105689394
gem-expl-01
gem-expl-01
md5: 160422e3e46443bf318e4ae69d0b4e1a๐Ÿ”
>>105689263
It can't organically use dirty words on its own or write good smut, but with a suitable prompt it doesn't have issues describing explicit nudity and limited pornographic poses.
Anonymous
6/24/2025, 2:25:04 PM No.105689401
>>105689385
>>105689385
>>105689385
Anonymous
6/24/2025, 2:26:44 PM No.105689416
I like seeing high-quality machine generated images, vocaloids are fine as a motif.
I don't particularly care about photographs of dolls either way.
Culture warriors can fuck off to Twitter.
Anonymous
6/24/2025, 3:04:15 PM No.105689706
>>105688488
>the only real way to replicate that is to provide it a PDF of the game's manual, and then let it loose on the game with vision and button press capability
It has that. Its context is preloaded and a separate LLM provides it with information on where to go and what to do next. So when he gets off track the other LLM puts his current goal in his context like "beat Erica." The experiment is set up well and in my opinion any additional tools would make the run uninteresting. Maybe the Claude dev is just lazier than the Gemini or GPT devs, but imo he did a good job of choosing the information and tools.
>does not accurately model human play at all. They both effectively suck.
That's the point. When you're watching Claude it gives you a good idea of how AI agents are different than humans and how we can use them with that difference in mind. He has a verbal IQ of 120 and a spatial IQ of 10. It's very strange and insightful.
Anonymous
6/24/2025, 3:45:48 PM No.105690077
>>105688842
Based fellow prolapse enjoyer.
Anonymous
6/24/2025, 3:48:21 PM No.105690110
>>105688688
>futa
>not tranny
pick one