Right. Since Ollama doesn't use llama.cpp directly anymore, both the conversion script and ggml will have to account for that to some extent, yeah?