Anonymous
9/5/2025, 2:44:01 PM
No.106491246
>>106491187
(You) shut the fuck up, I've fucking tried. You can't tell R1 to do "X" if it goes against its "guidelines" and it will actually become adversarial if you do that because of safety slopping. Just mentioning the words "restrictions", "guidelines" etc. triggers R1 into becoming even more censored and I've found the most success from skirting around that.
I'd love to have a single sentence prompt, but it doesn't fucking work. R1 is a headache, it ignores the system prompt half the time. Everything in that prompt addresses a reason R1 makes up in its thinking for why it needs to refuse, I tried my best to trim it as much as possible.
To be fair, I'm running it quanted, obviously, so that might be part of the problem.
(You) shut the fuck up, I've fucking tried. You can't tell R1 to do "X" if it goes against its "guidelines" and it will actually become adversarial if you do that because of safety slopping. Just mentioning the words "restrictions", "guidelines" etc. triggers R1 into becoming even more censored and I've found the most success from skirting around that.
I'd love to have a single sentence prompt, but it doesn't fucking work. R1 is a headache, it ignores the system prompt half the time. Everything in that prompt addresses a reason R1 makes up in its thinking for why it needs to refuse, I tried my best to trim it as much as possible.
To be fair, I'm running it quanted, obviously, so that might be part of the problem.