>>106403832
This fine-tune of mine is way more willing to RP raunchy, smut stuff than the base model, but since it's an 8B model it flubs the logic every now and then, though not to an egregious degree. I continued the chat a bit further, and when they started fooling around in the bathroom anon calls her his sister instead of his mother, but the characters otherwise act the same. This was trained on a dataset trimmed down to only two megabytes (the original dataset in full was over 1.8 GB), so I wonder whether training on the full dataset's worth of content would improve the logic, or if it's just an inherent limitation of the 8B model and training something bigger like 12B or beyond would lead to better results. I'll have to test this further when I get the chance.