Anonymous
10/25/2025, 9:10:50 AM
No.107001750
>>107001734
small models are garbage sadly.
You could try nemo instruct, or a recent gemma abliterated.
If you're asking for prompting techniques, then you'll have to play around with samplers: the more randomness you want, the higher the temperature. There are some samplers that help keep the bot coherent at high temp (but I forgot the name, I usually use llms for work and low temp), I'd suggest you ask chatgpt or lmg for this.
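To make the temperature thing concrete: temperature just divides the logits before the softmax, so higher values flatten the token distribution (more random picks) and lower values sharpen it. Here's a rough toy sketch of that, not tied to any particular backend:

```python
import math
import random

def softmax_with_temperature(logits, temperature=1.0):
    # higher temperature flattens the distribution (more randomness),
    # lower temperature sharpens it (more deterministic picks)
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_with_temperature(logits, temperature=1.0):
    # draw one token index from the temperature-scaled distribution
    probs = softmax_with_temperature(logits, temperature)
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1
```

At temperature 0.5 the top token hogs most of the probability mass; at 2.0 the tail tokens get a real chance, which is where the coherence-preserving samplers come in.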
For prompting itself, it usually works better if you give the chatbot a list to choose from (but at that point it would be the same as using wildcard substitution), and the prompting technique GREATLY varies between models, so there's no general way to do it.
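The list-to-choose-from trick is just string building, something like this (the options and wording are made up, adjust for whatever model you're running):

```python
# hypothetical example of constraining a small model by handing it options
options = ["forest", "desert", "ruined city", "space station"]

prompt = (
    "Pick exactly one setting from this list and answer with only that word:\n"
    + "\n".join(f"- {o}" for o in options)
)

def accept_reply(reply, options):
    # small models still wander off-list sometimes, so validate the reply
    # and reroll if it doesn't match any option
    return reply.strip().lower() in {o.lower() for o in options}
```

If you're validating the reply against the list anyway, you can see why it ends up equivalent to wildcard substitution, just with extra inference cost.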