7/25/2025, 12:30:12 PM
This is one of the many times a Jew's lust for gold blinds him. I'm already taking an interest in crypto (Monero and Bitcoin) and trying to figure out how to use it properly, like A LOT of other people currently. In the long run this'll just make Visa/Mastercard lose more market share. It's like Google, an ad company, killing off adblockers in Chrome. Okay? I'll just use LibreWolf. The fuck do I care? Or the recent age restrictions forcing people into using VPNs.
>>106015967
>>106017808
But unlike Visa the Monero payment will go through.
7/24/2025, 5:34:35 AM
►Recent Highlights from the Previous Thread: >>106001651
--Papers:
>106004422
--Local AI waifu development challenges and skepticism toward visual-first implementations:
>106001822 >106002467 >106002483 >106002530 >106002750 >106002507 >106002544 >106002576 >106002539 >106002564 >106002591 >106002687 >106002661 >106002811 >106002725
--Qwen3-235B-A22B-2507 offers efficient inference with competitive performance:
>106003167 >106003190 >106003843
--Local TTS tools approach but don't match ElevenLabs quality yet:
>106001910 >106001955 >106001965
--Frustration with LLM dominance and lack of architectural innovation in corporate AGI efforts:
>106003383
--Difficulty suppressing newlines via logit bias due to tokenization and model behavior quirks:
>106003868 >106003898 >106003910
--AI analysis identifies ls_3 binary as malicious with backdoor and privilege escalation capabilities:
>106002258
--Debate over uncensored models and flawed censorship testing methodologies:
>106002782 >106002806 >106002856 >106003665 >106004021 >106004049
--Struggling to improve inference speed by offloading Qwen3 MoE experts to smaller GPU:
>106001836 >106001948 >106002046
--Qwen3-235B shows improved freedom but still suffers from overfitting:
>106002119 >106002161
--AMD's AI hardware is competitive but held back by software:
>106001704 >106001724
--Stop token configuration tradeoffs in local LLM chat scripting:
>106004068
--Miku (free space):
>106001717 >106001732 >106001923 >106001981 >106002168 >106002491 >106002541 >106002659 >106002722 >106003620 >106004006 >106005098 >106005225 >106005408
►Recent Highlight Posts from the Previous Thread: >>106002148
Why?: 9 reply limit >>102478518
Fix: https://rentry.org/lmg-recap-script