>>105957270
It looks like they don't include the reasoning traces, but you can see from the outputs themselves that the model is way less verbose than typical LLMs.
For example:
https://github.com/aw31/openai-imo-2025-proofs/blob/main/problem_3.txt
>So universal constant c can be taken 4. So c<=4.
>Need sharpness: show can't be less.
It's plausible that the reasoning is also more conservative with word usage. It turns out the secret to gold-level math skills was training it to talk like Kevin.