Why is llama.cpp prompt processing so slow with gpt-oss-20B loaded entirely on the GPU (RTX 3090)? It's almost unusable for long context and/or RAG. Here's the server log from a ~51k-token prompt:

slot update_slots: id 0 | task 0 | new prompt, n_ctx_slot = 93184, n_keep = 0, n_prompt_tokens = 51016
slot update_slots: id 0 | task 0 | kv cache rm [0, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_past = 8192, n_tokens = 8192, progress = 0.160577
slot update_slots: id 0 | task 0 | kv cache rm [8192, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_past = 16384, n_tokens = 8192, progress = 0.321154
slot update_slots: id 0 | task 0 | kv cache rm [16384, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_past = 24576, n_tokens = 8192, progress = 0.481731
slot update_slots: id 0 | task 0 | kv cache rm [24576, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_past = 32768, n_tokens = 8192, progress = 0.642308
slot update_slots: id 0 | task 0 | kv cache rm [32768, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_past = 40960, n_tokens = 8192, progress = 0.802885
slot update_slots: id 0 | task 0 | kv cache rm [40960, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_past = 49152, n_tokens = 8192, progress = 0.963462
slot update_slots: id 0 | task 0 | kv cache rm [49152, end)
slot update_slots: id 0 | task 0 | prompt processing progress, n_past = 51016, n_tokens = 1864, progress = 1.000000
slot update_slots: id 0 | task 0 | prompt done, n_past = 51016, n_tokens = 1864
slot release: id 0 | task 0 | stop processing: n_past = 51497, truncated = 0
slot print_timing: id 0 | task 0 |
prompt eval time = 397190.52 ms / 51016 tokens ( 7.79 ms per token, 128.44 tokens per second)
eval time = 13683.34 ms / 482 tokens ( 28.39 ms per token, 35.23 tokens per second)
total time = 410873.85 ms / 51498 tokens
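For context, the kind of launch I mean is something like the line below (the model path and values are placeholders, not my exact command; flag names are from llama-server --help). -ngl 99 offloads all layers to the GPU, -c matches the n_ctx_slot in the log, and -b/-ub (logical/physical batch size) plus -fa (flash attention; exact syntax can vary across llama.cpp versions) are the knobs I'd expect to matter most for prompt-processing speed:

# illustrative only: placeholder model path, context/batch values taken from the log above
llama-server -m gpt-oss-20b.gguf -ngl 99 -c 93184 -b 8192 -ub 2048 -fa

Even with settings along those lines, prompt eval sits at ~128 tokens/second, which is why I'm asking whether this is expected for this model or whether I'm missing something.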