>>106560314 >an 80B model requires ~160GB of VRAM. A 3-bit version could potentially run in under 40GB of VRAM, making it feasible to run on a single high-end GPU like an NVIDIA RTX 4090
This is Gemini? The peak of LLMs right now? With web access?
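The quant math, for anyone wondering why this is funny (rough sketch, weights only; ignores KV cache, activations, and the extra fraction of a bit quant formats spend on scales/zeros):

```python
# Dense weight memory = param count * bits per weight / 8 bytes.
def weight_gb(params_b: float, bits: float) -> float:
    return params_b * bits / 8  # billions of params -> GB

for bits in (16, 8, 4, 3):
    gb = weight_gb(80, bits)
    fits = "fits" if gb <= 24 else "does NOT fit"
    print(f"80B @ {bits}-bit: {gb:.0f} GB -> {fits} on a 24GB 4090")
```

Even at 3-bit the weights alone are ~30GB, which already blows past a 4090's 24GB before you count KV cache. "Under 40GB" and "single RTX 4090" can't both be true.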
>>714644286
This. The industry will look silly when JEPA becomes AGI and achieves cat-like intelligence. Meanwhile LLMfags will keep training their models on the next simple logic test they failed while masturbating over +1.2% on memebench #392334.