>>106380483 (OP)
AI datacenters use water cooling. The water consumption comes from evaporative cooling: the cheapest way to dump a huge heat load is to let the water evaporate, because the phase change absorbs far more energy than just warming the water up does. There are better fluids for transferring heat, so when they use water it's because water is cheap and they specifically want it to evaporate.
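Quick back-of-envelope on why evaporation is the workhorse: water's latent heat of vaporization is about 2.26 MJ/kg, so every liter boiled off carries away roughly 2.26 MJ. The 100 MW facility below is a made-up example, and it assumes ALL heat is rejected by evaporation (real cooling towers recirculate and only partly evaporate, so treat this as an upper bound):

```python
# Back-of-envelope: water evaporated to carry away datacenter heat.
# Assumes the ENTIRE heat load is rejected via evaporation (upper bound;
# real plants recirculate water and only evaporate a fraction of it).
LATENT_HEAT_J_PER_KG = 2.26e6  # latent heat of vaporization of water, ~2.26 MJ/kg

def water_evaporated_kg_per_s(heat_load_w: float) -> float:
    """kg of water evaporated per second to reject heat_load_w watts."""
    return heat_load_w / LATENT_HEAT_J_PER_KG

if __name__ == "__main__":
    load_mw = 100  # hypothetical 100 MW facility, illustrative only
    kg_s = water_evaporated_kg_per_s(load_mw * 1e6)
    liters_per_day = kg_s * 86_400  # 1 kg of water is ~1 liter
    print(f"{kg_s:.1f} kg/s ~= {liters_per_day / 1e6:.1f} million L/day")
```

That works out to roughly 44 kg/s, i.e. a few million liters per day for a 100 MW load — which is why nothing beats cheap water for this job.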
On the other hand, AI programs are getting cheaper and more efficient every day. A lot of the companies offering free services like ChatGPT most likely serve heavily distilled models for most requests, comparable to running a local LLM, which is less computationally demanding than an average AAA game. The reason costs keep rising is reasoning models, which usually aren't distilled and burn a fuckload of tokens out of your view. Basically, cost per token is dropping extremely fast, but reasoning models keep increasing tokens per prompt. OpenAI's o3 costs ridiculous money, and Grok 4 Heavy is even worse.
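The cost math above is just (visible tokens + hidden reasoning tokens) times price per token. A sketch with made-up numbers — the prices and token counts below are illustrative, not real API pricing:

```python
# Sketch: even as price per token falls, hidden "reasoning" tokens can
# push cost per prompt UP. All numbers here are illustrative, not real
# pricing for any actual model.
def cost_per_prompt(output_tokens: int, hidden_reasoning_tokens: int,
                    usd_per_million_tokens: float) -> float:
    """Total billed cost: you pay for hidden tokens you never see."""
    total_tokens = output_tokens + hidden_reasoning_tokens
    return total_tokens * usd_per_million_tokens / 1e6

# Distilled chat model: cheap tokens, no hidden chain-of-thought.
chat = cost_per_prompt(500, 0, usd_per_million_tokens=0.50)

# Reasoning model: pricier tokens AND tens of thousands of hidden ones.
reasoning = cost_per_prompt(500, 30_000, usd_per_million_tokens=10.0)

print(f"chat: ${chat:.5f} per prompt, reasoning: ${reasoning:.3f} per prompt")
```

With these toy numbers the reasoning prompt comes out over a thousand times more expensive, which is the whole point: per-token price drops can't keep up when tokens-per-prompt explodes.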