
/g/ - /lmg/ - Local Models General
Anonymous No.106230528
►Recent Highlights from the Previous Thread: >>106225432

--Offloading models to RAM with bandwidth limits and token processing overhead (sketch below):
>106227504 >106227523 >106227573 >106227654 >106227559 >106227563 >106227657 >106227689 >106227832 >106228279 >106227740 >106227717 >106227773 >106227780 >106227804
--Performance and optimization trade-offs for Wan 2.2 inference with fp8 and GGUF:
>106228751 >106228795 >106228931 >106229005 >106229191 >106229209
--Potential of LLMs with cleaner training and scalable complexity:
>106225921 >106225945 >106225973
--Flawed few-shot prompting design using system role for dialogue examples (sketch below):
>106227634 >106227691 >106227771
--Links to alleged Meta leaks, accompanied by takedown request data:
>106225655 >106225678 >106225874
--Using embeddings to measure diversity in model rerolls (sketch below):
>106225727 >106225809 >106228011
--llama.cpp draft PR for GLM-style multi-token prediction support:
>106228890
--GLM-4.5-FP8 roleplay with puzzle-solving diligence and self-checking behavior:
>106227431
--Runtime Jinja chat template support enables flexible prompt formatting in inference (sketch below):
>106228166 >106228228 >106228326 >106228358
--Chat completion outperforms text completion due to prompt formatting differences (sketch below):
>106225954 >106226037 >106226060 >106226111
--Ollama's GGUF incompatibility due to forked ggml engine:
>106227545
--Allegations of unethical model distillation at MistralAI amid internal fallout:
>106226988 >106227085 >106228025 >106228229 >106228265 >106227149 >106227181 >106227208 >106227218 >106227328 >106227393 >106227358 >106227409 >106228967 >106229013
--Satirical clash over semantic compression and self-aggrandizing pseudoscience in AI:
>106226533 >106226789 >106226823 >106226905 >106226912 >106227329 >106227345 >106227470 >106227488 >106227614
--Miku (free space):
>106225641 >106225762 >106225965 >106227670
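
A quick sketch of the bandwidth math behind the RAM-offloading item above: decode speed is roughly "stream every active weight once per token," so the slow RAM-resident fraction dominates. All numbers here are illustrative assumptions, not measurements.

# Back-of-envelope decode speed for a partially RAM-offloaded model.
# Every input below is an illustrative assumption, not a benchmark.
def decode_tps(active_params_b: float, bytes_per_weight: float,
               vram_frac: float, vram_bw_gbs: float, ram_bw_gbs: float,
               overhead_ms: float = 0.5) -> float:
    """Tokens/s ~= 1 / (time to stream all weights once + per-token overhead)."""
    total_gb = active_params_b * bytes_per_weight       # GB read per token
    t_vram = total_gb * vram_frac / vram_bw_gbs         # GPU-side reads (s)
    t_ram = total_gb * (1 - vram_frac) / ram_bw_gbs     # system-RAM reads (s)
    return 1.0 / (t_vram + t_ram + overhead_ms / 1000)  # plus fixed overhead

# e.g. 12B active params at ~4.5 bpw (0.56 bytes/weight), 60% in 1000 GB/s
# VRAM, the rest in 80 GB/s dual-channel DDR5:
print(f"{decode_tps(12, 0.56, 0.60, 1000, 80):.1f} t/s")  # ~26 t/s

Note how the RAM term dwarfs the VRAM term even at 60% GPU residency, which is why a small offloaded fraction costs disproportionate throughput.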
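
On the flawed few-shot design above: dialogue examples stuffed into the system role get rendered as one instruction blob, so the model treats them as rules rather than turns to imitate. A minimal sketch of the usual fix, alternating user/assistant example turns; the OpenAI-style message schema is assumed and all content is placeholder text.

# Few-shot examples as real conversation turns, not a system-role dump.
# OpenAI-style message schema assumed; all content is placeholder text.
system = {"role": "system", "content": "You answer in terse pirate speak."}

few_shot = [
    {"role": "user", "content": "What's the weather like?"},
    {"role": "assistant", "content": "Grey skies, matey. Keep yer coat."},
    {"role": "user", "content": "Recommend a book."},
    {"role": "assistant", "content": "Treasure Island, and no backtalk."},
]

messages = [system, *few_shot,
            {"role": "user", "content": "How do I boil an egg?"}]
# Sent to a chat-completion endpoint, the template renders each example as a
# genuine turn, so the model imitates the assistant turns instead of parsing
# them as instructions.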
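
For the reroll-diversity idea above, a minimal sketch: embed each reroll and report mean pairwise cosine similarity (lower = more diverse). sentence-transformers and the model name are assumptions; any embedder returning unit vectors fits.

# Diversity across N rerolls as mean pairwise cosine similarity.
# sentence-transformers + model name are assumptions; swap in any embedder.
import numpy as np
from sentence_transformers import SentenceTransformer

def mean_pairwise_similarity(texts: list[str]) -> float:
    model = SentenceTransformer("all-MiniLM-L6-v2")
    emb = model.encode(texts, normalize_embeddings=True)  # unit vectors
    sims = emb @ emb.T                                    # cosine matrix
    pairs = sims[np.triu_indices(len(texts), k=1)]        # unique pairs only
    return float(pairs.mean())  # ~1.0 = near-duplicates; lower = diverse

print(mean_pairwise_similarity(["reroll A ...", "reroll B ...", "reroll C ..."]))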
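
The two template items above come down to the same mechanism: chat completion wins over hand-rolled text completion because the server renders the model's own Jinja chat template, special tokens included. A minimal sketch with jinja2; the ChatML-style template string is illustrative, not any specific model's.

# Render a ChatML-style chat template at runtime with jinja2.
# Template string is illustrative; real models ship theirs in
# tokenizer_config.json.
from jinja2 import Template

CHATML = Template(
    "{% for m in messages %}"
    "<|im_start|>{{ m.role }}\n{{ m.content }}<|im_end|>\n"
    "{% endfor %}"
    "<|im_start|>assistant\n"  # generation prompt
)

messages = [
    {"role": "system", "content": "Be brief."},
    {"role": "user", "content": "Name one moon of Mars."},
]

print(CHATML.render(messages=messages))
# A hand-built "System: ...\nUser: ..." string sent to a raw completion
# endpoint lacks these special tokens, so the model sees an
# out-of-distribution prompt; that gap is the usual culprit when text
# completion underperforms chat completion.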

►Recent Highlight Posts from the Previous Thread: >>106225438

Why?: 9 reply limit >>102478518
Fix: https://rentry.org/lmg-recap-script
/g/ - /lmg/ - Local Models General
Anonymous No.106149759
►Recent Highlights from the Previous Thread: >>106142968

(1/2)

--Paper: MicroMix: Efficient mixed-precision quantization for NVIDIA Blackwell GPUs:
>106145429 >106145591 >106146975
--Papers:
>106145497
--Running LLMs on legacy NVIDIA GPUs is painful but possible thanks to long-standing CUDA support:
>106143243 >106143281 >106143312 >106143415 >106143485 >106145204
--Efficient git LFS-based model fetching and export script for local LLM setup (sketch below):
>106148989
--Optimizing Florence-2 image tagging speed with ONNX and pre-indexing (sketch below):
>106147992 >106148102 >106148152 >106148188 >106148216 >106148239 >106148244 >106148248 >106148263 >106148260 >106148352 >106148443
--GLM 4.5 Air IQ4_KSS shows strong local performance with character knowledge and high throughput:
>106146562 >106146602
--CUDA 13.0 shows performance regression compared to 12.9 in inference benchmarks:
>106143933
--Qwen-Image criticized as bloated model lacking true multimodal capabilities:
>106143040 >106143057 >106143115 >106143131 >106143158 >106143070 >106143087 >106143097 >106143121 >106143449 >106143453 >106143537 >106143490 >106143540 >106143568 >106144440 >106144456 >106146851 >106143237 >106143313 >106143443 >106143488 >106143527 >106143548 >106143462
--GLM-4.5-Air chat template bugs prevent reliable thinking mode and tool use:
>106146877 >106146941 >106146967 >106146972 >106147950 >106147968 >106148069 >106148100 >106148121 >106148205 >106148253
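
For the git-LFS fetch item above: the standard trick is cloning with smudge disabled so no blobs download, then pulling only the files you want. A minimal Python sketch shelling out to git; the repo URL and glob are placeholders, while GIT_LFS_SKIP_SMUDGE and `git lfs pull --include` are standard git-lfs features.

# Clone without LFS blobs, then fetch only the wanted quant.
# Repo URL and pattern are placeholders.
import os
import subprocess

REPO = "https://huggingface.co/some-org/some-model-GGUF"  # placeholder
DEST = "some-model-GGUF"
WANT = "*Q4_K_M.gguf"                                     # placeholder glob

env = {**os.environ, "GIT_LFS_SKIP_SMUDGE": "1"}          # pointers only
subprocess.run(["git", "clone", REPO, DEST], env=env, check=True)
subprocess.run(["git", "-C", DEST, "lfs", "pull", f"--include={WANT}"],
               check=True)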
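
For the Florence-2 tagging item above, the transferable part is the pre-indexing: key a cache on file-content hashes so the ONNX session only runs on new images. A minimal sketch; onnxruntime.InferenceSession is the real API, but the model path, cache layout, and the run_tagger stub are assumptions standing in for Florence-2's actual preprocessing and decoding.

# Cache tags by content hash; only cache misses hit the ONNX model.
# Model path and run_tagger() body are stand-ins.
import hashlib
import json
import pathlib
import onnxruntime as ort

CACHE = pathlib.Path("tags.json")
cache = json.loads(CACHE.read_text()) if CACHE.exists() else {}

sess = ort.InferenceSession("florence2.onnx",             # placeholder path
                            providers=["CPUExecutionProvider"])

def run_tagger(session: ort.InferenceSession, path: pathlib.Path) -> str:
    # Real code: preprocess the image, call session.run(), decode tokens.
    return "placeholder tags"

for img in sorted(pathlib.Path("images").glob("*.jpg")):
    key = hashlib.md5(img.read_bytes()).hexdigest()       # content hash
    if key not in cache:                                  # pre-indexed: skip
        cache[key] = run_tagger(sess, img)

CACHE.write_text(json.dumps(cache, indent=2))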

►Recent Highlight Posts from the Previous Thread: >>106142972

Why?: 9 reply limit >>102478518
Fix: https://rentry.org/lmg-recap-script