>>106953713
That's the point. There is no answer, so it fucks the AI in three distinct ways.
1. The riddle is phrased confidently, so the AI is going to say yes and claim it knows the answer. There are similar riddles in the training data, so it is heavily biased to open the generation with a confirmation.
2. Because of that bias, and because LLMs are just pattern continuers, it'll generate an answer similar to the ones in the training data, maybe with the details swapped to match the new context.
3. Generation is autoregressive, so the AI cannot go back and retract the confirmation. It can only commit to a nonsense answer, or ramble long enough that it loops back and tries to generate that there is no answer. The bias is so strong that those tokens either get extremely low probability or are cut off outright by the sampling settings (sketch below).
All three are unsolvable with LLMs.
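To make point 3 concrete, here's a minimal sketch of how nucleus (top-p) sampling can cut off the "no answer" continuation entirely. The token strings, the probabilities, and the top_p_filter helper are all invented for illustration, not pulled from any real model or library:
[code]
import numpy as np

def top_p_filter(probs, p=0.9):
    """Keep the smallest set of tokens whose cumulative probability >= p, renormalize."""
    order = np.argsort(probs)[::-1]              # token indices, highest prob first
    cum = np.cumsum(probs[order])                # cumulative mass in that order
    keep = order[:np.searchsorted(cum, p) + 1]   # nucleus: smallest prefix covering p
    out = np.zeros_like(probs)
    out[keep] = probs[keep]
    return out / out.sum()                       # renormalize over the survivors

# Toy next-token distribution after a confidently phrased riddle.
# Numbers are made up to illustrate the bias, not measured from a model.
tokens = ["Yes,", "The", "It's", "there is no answer"]
probs = np.array([0.55, 0.25, 0.17, 0.03])

for tok, q in zip(tokens, top_p_filter(probs, p=0.9)):
    print(f"{tok!r:<22} {q:.3f}")
# "there is no answer" ends up at 0.000: the model assigned it 3% before
# filtering, but the nucleus cutoff removes it, so it can never be sampled.
[/code]
Before filtering, "there is no answer" still had 3% probability; after it, exactly zero, so no amount of rerolls will surface it. Greedy or low-temperature decoding is even worse, since the confirmation token wins every single time.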