If any of you wants to try Hunyuan-A13B, here you go:
https://huggingface.co/FgRegistr/Hunyuan-A13B-Instruct-GGUF
Only works with https://github.com/ggml-org/llama.cpp/pull/14425 applied!
Recommended flags: --flash-attn --cache-type-k q8_0 --cache-type-v q8_0 --temp 0.6 --presence-penalty 0.7 --min-p 0.1
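For reference, a full invocation might look like the sketch below. The model filename and context size are assumptions (pick whichever quant you downloaded from the repo above); the sampling and KV-cache flags are the ones listed, all of which are standard llama.cpp options. This assumes you built llama.cpp from source with the linked PR applied, since mainline won't load the arch without it.

```shell
# Sketch only: run a patched llama.cpp build against a local GGUF.
# Model filename and -c value are placeholders, adjust to your download/VRAM.
./build/bin/llama-server \
  -m ./Hunyuan-A13B-Instruct-Q4_K_M.gguf \
  -c 8192 \
  --flash-attn \
  --cache-type-k q8_0 \
  --cache-type-v q8_0 \
  --temp 0.6 \
  --presence-penalty 0.7 \
  --min-p 0.1
```

The q8_0 K/V cache quantization roughly halves KV memory versus f16, which helps fit longer contexts; note that quantized V cache requires --flash-attn to be enabled.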