>>107091153
I wish novelty seekers like you would first give models a try (from their official provider chat UIs or APIs) before obsessing over a possible llama.cpp implementation
https://chat.qwen.ai/
for qwen3-next
and then notice that it's actually a really bad model even compared to other qwen models, and there's no reason to want it this badly in your local llama.cpp
you are waiting for garbage
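if the web chat isn't enough for you, any OpenAI-compatible client works against the official API, something like this (the base_url and model id here are my guesses, check whatever your provider actually exposes):

# rough sketch, not official docs: endpoint and model name are assumptions
from openai import OpenAI

client = OpenAI(
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",  # assumed endpoint, check provider docs
    api_key="YOUR_API_KEY",
)

resp = client.chat.completions.create(
    model="qwen3-next-80b-a3b-instruct",  # assumed model id
    messages=[{"role": "user", "content": "write a quicksort in C"}],
)
print(resp.choices[0].message.content)

try it for an afternoon like that and see if it's worth the wait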