>>536516140
Yeah, it's a shame this problem is only going to get worse, Deepseek or otherwise. The current "meta" in LLM development is expansion regardless of cost: labs keep making advancements, but at the expense of bloating requirements, while Nvidia cums their pants at the thought of selling more overpriced high-end cards. The result is models taking huge leaps in hardware requirements while the hardware itself rapidly balloons in price and energy draw. The industry really needs more people focused on making models more efficient and cheaper to run, and if Nvidia's competitors could start putting out cheaper, more energy-efficient hardware, that'd be a massive boon too. At least Anlatan's making an effort on the former, even if it's taking quite a bit to materialise.