>Try locally running an LLM for the first time
>Test a few different models, but the quality of the output is always far below what I'm accustomed to seeing, even from ERP chatbot sites
What can I do if I want to see much, much better output? Download the model with the largest number of parameters? Give the LLM some custom instructions that vastly improve its output? Or just accept the fact that local stuff will never outperform the stuff I can use online?