>>105805390
For philosophy and ethics, it does well if steered beforehand and there isn't a tonne of context behind it.
Economics and languages are where it has no hope. While it can pick up on a joke or the intended meaning behind a statement, when asked to apply that understanding as a whole it tends to steer right more often, and it does so in a strange manner. Without context it's blunt; with context it will mirror and misattribute.
Probably the biggest thing I've noticed so far is that, in the effort to avoid alignment issues, it won't explicitly say something it considers problematic and will talk around it instead. That can and often does involve granting legitimacy to concepts from the right, and it will again misattribute if you let it.
When deconstructing language, intent, and word choice, it gets really weird. It does alright at summarisation and will get the point. It's when asked about the meaning of a statement that it tends to lean a bit more right. It ignores things like historically fascist or authoritarian language because it sees them as problematic. If you get it to analyse a statement by a left politician (not telling it who it's from, obviously) inside an article or a big block of text, no problem. Taken to a macro level, though, it will misinterpret it or often give it negative connotations.
I'm not entirely sure why it steers right more often or why it has that bias in analysing language. If I had to guess, same as with the economic angle: it's probably the training data, pulling from think tanks and hit-piece-style articles.
Compound that with alignment, where it wants to avoid talking about flagged issues, and it struggles to find a balance.