>>106869697
> have you tried vulkan?
no, i've been treating vulkan as a fallback for when i can't get rocm working, but i guess there's no harm in trying it just to be sure
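if i do give it a shot, something like this should be enough to get the vulkan backend built (a rough sketch, assuming a recent llama.cpp tree with the vulkan sdk installed; older trees used LLAMA_VULKAN instead of GGML_VULKAN):
cmake -B build-vulkan -DGGML_VULKAN=ON
cmake --build build-vulkan --config Release -j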
> llama-bench
forgot to set the batch sizes, my bad
will rerun with batch size 4096 and drop the second repetition since the numbers don't seem to vary much between runs anyway
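roughly the invocation i have in mind, model path is just a placeholder:
llama-bench -m model.gguf -ngl 99 -b 4096 -ub 4096 -r 1
-b/-ub bump the logical and physical batch sizes to 4096, -r 1 drops the extra repetition since the variance looked negligible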