>>542386530
You guys need to understand that instruct models are utterly autistic.

Despite being a very intelligent model, it lacks even the slightest sense of ambiguity. If you ask it to write in purple prose, it will understand that you want it to be melodramatic and stretch each action over an entire paragraph, at which point it will bombard you with clichés. If you ask it to be funny, it will make Marvel jokes every five seconds. In order to get ambiguity, you need to be ambiguous in what you ask it to do (or, conversely, be overly specific and detailed in what you ask it to do). You have to think, "When I ask for X, what will the AI understand by that?" If you ask for a specific style and author, are you sure the AI will understand the nuances of how that author writes? Or, since it is smart enough to know who the author is, will it focus on replicating what the author writes about rather than how they write about it?

Something I need to test further to see if it works is to provide a large excerpt from an author's text to some AI and ask it to create style rules based on the text. I mean, I've actually already tested this, but with a very small excerpt, so the style rules ended up being too specific and it stuck too much to one single tone and theme. But maybe with a longer text, or a couple of different texts, I'll get better results.
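To make the idea concrete, here's a rough sketch in Python of how you might assemble that kind of prompt. The function name and the exact wording are just my own placeholders, not from any particular frontend, and the point about multiple excerpts is exactly the untested hypothesis above:

```python
# Build a prompt asking a model to derive reusable style rules from
# one or more author excerpts. Joining several excerpts with a
# separator is the (untested) attempt to keep the resulting rules
# from overfitting to one single tone and theme.
def build_style_prompt(excerpts):
    joined = "\n\n---\n\n".join(excerpts)
    return (
        "Below are excerpts from a single author.\n"
        "Derive 5-10 general style rules (sentence rhythm, diction, "
        "pacing, imagery) describing HOW the author writes, "
        "not WHAT they write about. Avoid rules tied to one "
        "specific theme or scene.\n\n"
        f"Excerpts:\n{joined}"
    )

prompt = build_style_prompt(["First long excerpt...", "Second excerpt..."])
```

Then you paste the model's answer into your system prompt as the style guide instead of naming the author directly.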

>>542387997
There you go, at least this is how I like mine. You will still get a lot of slop if you don't have a good prompt first. Using Sage's System Prompt also improves it a lot.