Search Results
7/25/2025, 6:29:08 AM
>>106016612
You read the data on the cassette into RAM and then execute it.
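A minimal sketch of the same idea on a modern system (not from the thread): pull raw machine code from storage into an executable buffer in RAM, then jump to it. The file name "program.bin" and the assumption that it holds position-independent code for the host CPU are hypothetical.

/* load a raw code image into RAM and execute it (POSIX, illustrative only) */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void) {
    FILE *f = fopen("program.bin", "rb");   /* hypothetical code image */
    if (!f) { perror("fopen"); return 1; }

    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);

    /* allocate RAM that is writable now and executable afterwards */
    void *buf = mmap(NULL, (size_t)size, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); fclose(f); return 1; }

    fread(buf, 1, (size_t)size, f);         /* "read the data into RAM" */
    fclose(f);

    ((void (*)(void))buf)();                /* "...and then execute" */

    munmap(buf, (size_t)size);
    return 0;
}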
7/9/2025, 5:03:16 AM
►Recent Highlights from the Previous Thread: >>105832690
--Papers:
>105834135 >105834182
--Experimenting with local model training on consumer GPUs despite known limitations:
>105839772 >105839805 >105839821 >105839838 >105840910 >105841022 >105841129 >105839881 >105841661 >105841824 >105841905 >105841992 >105842071 >105842166 >105842302 >105842422 >105842624 >105842704 >105842731 >105842816 >105842358 >105842418 >105842616 >105842654 >105842763 >105842902 >105842986 >105843186 >105842891
--Risks and challenges of building a CBT therapy bot with LLMs on consumer hardware:
>105836762 >105836830 >105836900 >105837945 >105840397 >105840554 >105840693 >105840730 >105839512 >105839539 >105839652 >105839663 >105839943 >105841327
--Memory and performance issues loading Q4_K_L 32B model on CPU with llama.cpp:
>105840103 >105840117 >105840145 >105840159 >105840191 >105840201 >105840255 >105840265 >105840295 >105840355 >105840407 >105840301 >105840315
--Evaluating 70b model viability for creative writing on consumer GPU hardware:
>105836307 >105836366 >105836374 >105836489 >105836484 >105836778 >105840476 >105841179
--Challenges in building self-learning LLM pipelines with fact-checking and uncertainty modeling:
>105832730 >105832900 >105833650 >105833767 >105833783 >105834035 >105836437
--Concerns over incomplete Hunyuan MoE implementation affecting model performance in llama.cpp:
>105837520 >105837645 >105837903
--Skepticism toward transformers' long-term viability and corporate overhyping of LLM capabilities:
>105832744 >105832757 >105832807 >105835160 >105835202 >105835366 >105839406 >105839863
--Hunyuan MoE integration sparks creative writing data criticism:
>105835909 >105836075 >105836085
--Links:
>105839096 >105839175 >105840055
--Miku (free space):
>105832744 >105832988 >105832992 >105833638 >105840752
►Recent Highlight Posts from the Previous Thread: >>105832694
Why?: 9 reply limit >>102478518
Fix: https://rentry.org/lmg-recap-script