>>105884972
>See you in a few hours I guess

llama_perf_context_print: prompt eval time = 405410.81 ms / 14822 tokens ( 27.35 ms per token, 36.56 tokens per second)
llama_perf_context_print: eval time = 7667250.75 ms / 27206 runs ( 281.82 ms per token, 3.55 tokens per second)
llama_perf_context_print: total time = 8742602.49 ms / 42028 tokens


Had this been English text, a roughly 3:1 bytes-to-tokens ratio would apply for estimating the token count. In this case of a rather densely packed base64 string, 17000 bytes resulted in ~14800 tokens, barely 1.15 bytes per token.
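
If you want to sanity-check that ratio on your own data, something like this works. Minimal sketch using tiktoken's cl100k_base as a stand-in tokenizer (an assumption; the actual model's tokenizer will give somewhat different counts, but the English-vs-base64 gap shows up the same way):

# pip install tiktoken
import base64
import os

import tiktoken

# Stand-in tokenizer; swap in your model's actual tokenizer for exact counts.
enc = tiktoken.get_encoding("cl100k_base")

english = "The quick brown fox jumps over the lazy dog. " * 200
b64 = base64.b64encode(os.urandom(12750)).decode()  # 4*ceil(12750/3) = exactly 17000 base64 chars

for name, text in (("english", english), ("base64", b64)):
    tokens = len(enc.encode(text))
    print(f"{name}: {len(text)} bytes -> {tokens} tokens "
          f"({len(text) / tokens:.2f} bytes/token)")

English prose lands somewhere around 3-4 bytes per token depending on the tokenizer, while random base64 sits near 1 byte per token, which is why the prompt here blew up to 14.8k tokens.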