>>106013579
Hmm, after looking further into how thinking is handled in llama.cpp, I believe the tag is hardcoded as <think>, so it won't work with your model. It's quite bad, since most frontends and tools won't handle the output correctly.
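As a stopgap you could rewrite the model's own tags to <think> before the output reaches the frontend. A minimal sketch, assuming the model emits [THINK]...[/THINK] (placeholder tags, swap in whatever your model actually uses):

import re

def normalize_thinking(text: str) -> str:
    # Map the model's custom reasoning tags onto the <think> tags
    # that frontends expecting llama.cpp's format will recognize.
    text = text.replace("[THINK]", "<think>")
    text = text.replace("[/THINK]", "</think>")
    return text

print(normalize_thinking("[THINK]reasoning here[/THINK]final answer"))
# -> <think>reasoning here</think>final answer

You'd run something like this in a small proxy between llama.cpp and your frontend; not a real fix, but it keeps the thinking blocks collapsible until the tag handling is configurable upstream.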