Search Results
7/19/2025, 6:21:28 PM
My desktop has a 12900K, 128 GB of RAM, and an RTX 3090, so in total I can devote ~140 GB of memory to the model. I don't care about token speed, and I'm happy to split a model across the CPU and GPU if it's too big to fit in either one alone. What's the best, smartest general-purpose model that will run on my system? This isn't something I'll use every day, so it doesn't have to be fast in a practical sense. I just want to see what the absolute best model I can run on my local system is. I use Arch Linux btw. Thank you for your attention to this matter
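For back-of-envelope sizing, a quantized model's weights take roughly parameter count × bits per weight ÷ 8 bytes in memory. The sketch below assumes this rule of thumb; the parameter counts and quantization levels shown are hypothetical examples, not recommendations for any specific model, and real usage adds several GB more for the KV cache and runtime overhead.

```python
def model_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Rough size of quantized model weights in GB (1 GB = 1e9 bytes).

    Ignores KV cache and runtime overhead, which add several more GB.
    """
    return params_billions * bits_per_weight / 8

# Hypothetical sizing: what fits in ~140 GB of combined RAM + VRAM?
for params, bits in [(70, 8.0), (120, 5.5), (235, 4.5)]:
    print(f"{params}B @ {bits} bits/weight ~ {model_size_gb(params, bits):.0f} GB")
```

By this estimate, a model in the low-hundreds-of-billions of parameters at a ~4-5 bit quantization is about the upper bound for a ~140 GB budget, with some headroom left for context.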