>>107008572
I'm not talking about depending on GPUs. I'm talking about depending on LLMs. I think they can be used as a resource for learning. Anon is expecting his model to spit out an entire inference engine on his behalf. That's not realistic. Not yet, at least.