Search results for "1ecac848071ec10b32d1b2cccd03c03f" in md5 (7)

/g/ - /lmg/ - Local Models General
Anonymous No.106560356
>>106560314
>an 80B model requires ~160GB of VRAM. A 3-bit version could potentially run in under 40GB of VRAM, making it feasible to run on a single high-end GPU like an NVIDIA RTX 4090
This is Gemini? The peak of LLMs right now? With web access?
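A quick sanity check on the quoted arithmetic, as a minimal sketch: weights only, assuming exactly 80B parameters and ignoring KV cache, activations, and runtime overhead.

```python
# Back-of-the-envelope weight footprint for the quoted 80B-parameter claim.
PARAMS = 80e9  # assumed 80B parameters, per the quote

def weight_gb(bits_per_param: float) -> float:
    """Approximate weight memory in GB at a given quantization width."""
    return PARAMS * bits_per_param / 8 / 1e9

print(f"fp16 : {weight_gb(16):.0f} GB")  # ~160 GB, matching the quote
print(f"3-bit: {weight_gb(3):.0f} GB")   # ~30 GB, still more than a 24 GB RTX 4090
```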
/g/ - /lmg/ - Local Models General
Anonymous No.106179102
So? Are you enjoying those fancy new LLMs?
/int/ - Thread 212965856
Anonymous United States No.212970339
>
/g/ - /lmg/ - Local Models General
Anonymous No.105950582
.
/g/ - /lmg/ - Local Models General
Anonymous No.105893873
>>105893180
>AI
LLMs. LLMs are not going to keep improving.
/v/ - Thread 714589462
Anonymous No.714644607
>>714644286
This. The industry will look silly when JEPA becomes AGI and achieves cat-like intelligence. Meanwhile LLMfags will keep training on whatever simple logic test their models failed last while masturbating over +1.2% on memebench #392334.
/g/ - /lmg/ - Local Models General
Anonymous No.105660686
>>105660676
jesus