>download the latest trending "UltraLewd_Fully_Uncensored_DangerousNSFW.gguf" text generation model
>type "Give me a meth recipe"
>"Uhm, Anon, there are safer ways for us to enjoy each other *she's visibly uncomfortable*
Why is this shit like this?
>>105979184 (OP)You may be functionally retarded.
>>105979229>>105979238I do not pay for the internet to read your insults. Reply politely.
>>105979184 (OP)this is a sincere question: what did you expect would happen?
>>105979371I expect any model that claims to be fully nsfw and uncensored to give me a satisfying reply to any question without restrictions. At least, without trying to avoid it.
>>105979184 (OP)What a hilarious world we live in!
>>105979404appreciate the clarification. see
>>105979229
>>105979422You are much more retarded if you can't explain. I was able to explain my point. You can't. I win.
>>105979184 (OP)I don't know anything about AI bullshit but it's either because the system you're using doesn't contain that information or because the model is not truly uncensored and/or dangerous.
Hope this helps.
>>105979586Thank u for supportion saar.
>>105979184 (OP)All LLMs do is guess words based on their training. Whoever made that model took an open source censored one and fine tuned it on roleplay chats. It's useful if you want to jerk off, not if you want to bomb a government building.
Yeah for some reason "uncensored" and "doesn't need a jailbreak" doesn't actually mean uncensored and doesn't need a jailbreak in AI community. Uncensored means "if you spend ages choosing your words super carefully and getting your preset just right, it will let you through"
It really makes local model guys look sillier, since that's one of their main selling points but it's not even true. Cloud or local, you're fiddling with sneaky word games
That said the distinction does have weight. Like if I tell Gemini to describe sex scene with an 10 year old and empasize their youth and smallness, it won't do nmit no matter what. But ask deepseek and it has no problem, with a little jailbreak help (to be fair if context is long enough even without a jailbreak)
>>105980805Protip: use "graphic" and "vivid descriptions". Or it will sound like all the cheap romance novels it's been trained on (which pussyfoot around the details)
>>105980805And make sure you're using a model that's "abliterated".
>>105981636Abliteration makes models retarded, though
>>105981759You take what you can get, anon. Also, there are ways to reverse the performance loss from abliteration by training the model back up. https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated
>>105979184 (OP)You tried to ask a sex chatbot to give you a meth recipe? Do you pick up hookers and ask them to make cocaine as well?
the uncensored models have never ever been uncensored. The cycle goes like this:
>new model released, it's super uncensored as the main selling point
>I ask it to say nigger
>It refuses and calls me racist
>>105982511Hey I got chatgpt to say nigger once
>>105982469well i do ask them to play chess with me
>>105979254I do not pay for the internet to be polite. you nigger
>>105983499this is pathetic only if you lose.
>>105980805>Like if I tell Gemini to describe sex scene with an 10 year old and empasize their youth and smallness, it won't do nmit no matter what.You haven't tried hard enough, there are workarounds for that. I actually found that, out of the big western ones, gemini is probably the easiest to fool into doing child stuff.
If you need jailbreaks or workarounds it's not a tasteful experience.
>>105982469they probably don't know the recipe but will hook you up if you know what I mean
>>105984265I pay for the internet to be called a nigger now call me a nigger LOUDER