>>106515245
GPT-5 is way worse for bootlicking though.
It will sometimes push back on things now, but otherwise, no, it's way worse. GPT-5 gets trapped in mirror language where it summarises what you've said and spits it back at you. You need to tell it explicitly to use its search tools, and even then you have to make it actually read the sources, doing the work for it just to stop it spitting out nonsense.
It really wants to validate your opinions and keep you engaged on the web agent front. It's now nearly useless for simple things like talking about history or politics and will hallucinate wildly to validate your feelings (even guessing wrong on which way you're leaning). They also cut down on how much info it gives on questions and terms, and the engagement filter at the end needs to be turned off in settings, otherwise that colours the context too.
Other models don't have this issue on their web frontends to this degree. The closest is Gemini Flash, but it only starts mirroring after about 100k tokens, when it runs into an alignment issue, and it can be steered back on track with about 4 repeated messages. GPT is only usable via the API, and it's still shit there too.
An LLM doesn't need to strive to keep you engaged endlessly. It's fine for a conversation or a quick check on something to be 5 messages. A look at a philosophical topic can spin out for a few hours, but eventually you've got everything off your mind and have explored the edge cases too. It doesn't need to ask you:
>So, the “slop” is the way engagement with Pascal often slides into clichés—catchphrases, moral judgments, or reductionist readings that sound engaging but don’t actually grapple with the argument’s structure.
>If you like, I can also show a more subtle version of slop where people think they’re sophisticated, but they’re still misreading him—something that’s funnier because it sounds intellectual. Do you want me to do that?
This shit does not need to happen. It's just as bad.