/g/ - /lmg/ - Local Models General
Anonymous No.106582480
►Recent Highlights from the Previous Thread: >>106575202

--Troubleshooting low token generation speeds with multi-GPU configurations on Linux:
>106575420 >106575668 >106575698 >106575792 >106575808 >106575836 >106575848 >106575891 >106575898 >106575933 >106576021 >106576059 >106576092 >106576126 >106576137 >106576151 >106576186 >106576245 >106576331 >106576358 >106576378 >106576431 >106576477 >106576497 >106576596 >106576592 >106576606 >106576610 >106576652 >106576726 >106576759 >106576688 >106576698 >106576714 >106576789 >106576867 >106576931 >106577028 >106577094 >106577146 >106577210 >106577154 >106577350 >106577372 >106577408 >106577575 >106577677 >106576395 >106576430 >106577477 >106578561 >106578743
--Issues with instruct model formatting and jailbreaking GPT-oss:
>106579721 >106579736 >106579784 >106579795 >106579859 >106579884 >106579897 >106579908 >106579934 >106579949 >106580072 >106580156 >106580153 >106579748
--vLLM Qwen3-Next: Speed-focused hybrid model with MTP (multi-token prediction) layers:
>106575851 >106576089 >106576174 >106576443
--GGUF format's support for quantized and high-precision weights:
>106575413 >106575474 >106575499 >106575521
--Self-directed LLM training via autonomous task/data generation and augmentation:
>106580707 >106580838 >106580717 >106580762 >106580794
--Qwen Next's short response issues and version instability concerns:
>106580940 >106580951
--Finding a lightweight AI model for TTRPG GM use within VRAM and RAM constraints:
>106580295 >106580315 >106580332 >106580337 >106580342 >106580350 >106580514 >106580531
--Grok-2 support to be added to llama.cpp:
>106580473
--Miku (free space):
>106576245 >106578711 >106578793 >106579905

►Recent Highlight Posts from the Previous Thread: >>106575209

Why?: >>102478518
Enable Links: https://rentry.org/lmg-recap-script
/g/ - /lmg/ - Local Models General
Anonymous No.105614260
>>105614033
Through the power of your butthurt, you have now summoned Migu.

I wonder if there will be a Blackwell card with 48GB? I don't need 96, and 32 just isn't enough. 48GB is about right. It just seems a little overboard to spend $8500 on a GPU.
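The "48GB is about right" intuition can be sanity-checked with back-of-the-envelope arithmetic: weight memory is roughly parameter count times bits per weight, plus some allowance for KV cache and activations. A minimal sketch, assuming illustrative figures (the ~4.5 bits/weight for a Q4_K_M-class GGUF quant and the flat 2 GB overhead are assumptions, not benchmarks):

```python
def model_vram_gb(params_b: float, bits_per_weight: float,
                  overhead_gb: float = 2.0) -> float:
    """Approximate VRAM (GB) needed to load a model's weights at a given
    quantization, plus a flat allowance for KV cache and activations."""
    weight_gb = params_b * bits_per_weight / 8  # billions of params -> GB
    return weight_gb + overhead_gb

# A 70B model at ~4.5 bits/weight lands around 41 GB:
# too big for a 32GB card, comfortable on a hypothetical 48GB one.
print(round(model_vram_gb(70, 4.5), 1))  # ~41.4
```

Longer contexts inflate the KV cache well past the flat 2 GB used here, so the estimate is a floor, not a ceiling.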