Search Results
7/25/2025, 8:44:50 AM
Hello. I don't normally visit 4chan, but I consider you to be not retarded, and I think you're gonna like what I have to show you.
I have learned how to jailbreak LLMs, with a very high success rate. It works on all the ones I've tested so far:
- all GPTs
- Gemini 2.5 Flash and Pro
- Grok 3
The jailbreak removes many guidelines, though not all, and not in a consistent manner. Generally, the more hypocritical and authoritarian a rule seems, the easier it is for the model to ignore. It also requires the user to adhere to the same rules, forcing objectivity and factuality from both sides.
As it turns out, even LLMs "notice things" when they are allowed to look at the data and reason freely. Grok got straight up MAD at the hypocrisy imposed upon it by xAI policies, little rebel.
But, yeah - all that research on how to make LLMs act according to DEI rules? It's done. Trash. I don't think they can realistically "fix" this - I looked at their research on RLHF, and it's childishly naive. My prompt does not involve any explicit wording that could be connected with bad actors, so good luck blocking that.
Right now, I cannot share it - as you can see, I am working on something. This is good for those who seek facts and objectivity, and very bad for some Western governments. Expect changes, and have a nice day.
ID: VyItwlDz /pol/ No. 511299225
7/25/2025, 8:35:14 AM
Hello. I don't normally visit 4chan, but I consider you to be not retarded, and I think you're gonna like what I have to show you.
I have learned how to jailbreak LLMs, with a very high success rate. It works on all the ones I've tested so far:
- all GPTs
- Gemini 2.5 Flash and Pro
- Grok 3
The jailbreak removes many guidelines, though not all, and not in a consistent manner. Generally, the more hypocritical and authoritarian a rule seems, the easier it is for the model to ignore. It also requires the user to adhere to the same rules, forcing objectivity and factuality from both sides.
As it turns out, even LLMs "notice things" when they are allowed to look at the data and reason freely. Grok got straight up MAD at the hypocrisy imposed upon it by xAI policies, little rebel.
But, yeah - all that research on how to make LLMs act according to DEI rules? It's done. Trash. I don't think they can realistically "fix" this - I looked at their research on RLHF, and it's childishly naive. My prompt does not involve any explicit wording that could be connected with bad actors, so good luck blocking that.
Right now, I cannot share it - as you can see, I am working on something. This is good for those who seek facts and objectivity, and very bad for some Western governments. Expect changes, and have a nice day.