>>512177049
>I understand what you're saying. You do have to use it a bit to get an intuition for it. There are models like ChatGPT that will write bullshit like "you're absolutely right!" just because of how you worded something. I think that's why I prefer DeepSeek: there are times when it just says no, the problem is actually this, and I've agreed with it. And it's not something subjective, it's an actual problem in the code that it spotted.
exactly
an LLM is like an actor: some actors only play themselves but are extremely good at it, others can play any role but are mediocre across the board, and some truly shine in very different roles consistently
and this process has become my prompt-engineering, which is basically AI-casting: only after that casting do I decide who gets the role
>I see he's even been shitting up /g/:
KEK, professional autism
>What happened to him?
faulty DNA, hands down, kek