>>509994313
The fine-tuning (where they feed it hand-picked examples) and the system prompt are all tailored so that the LLM behaves like "a helpful assistant". And such an assistant (or an LLM that has been trained to respond that way) will tend to ask you a question to make sure it's being helpful and you got what you wanted. Then, as others have said, the question at the end makes you more likely to write again and thus more likely to feed these companies more data.
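Conceptually it takes about this much to get that behavior (a minimal sketch using the OpenAI-style chat API; the model name and prompt wording are placeholder assumptions, not what any company actually ships):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # the hidden system prompt is what steers the "helpful assistant" persona
        {"role": "system", "content": (
            "You are a helpful assistant. After answering, ask a short "
            "follow-up question to check the user got what they wanted."
        )},
        {"role": "user", "content": "How do I flatten a nested list in Python?"},
    ],
)

print(response.choices[0].message.content)
# with a system prompt like that, the reply tends to end in a follow-up question

The fine-tuning data just reinforces the same pattern, so even without that explicit instruction the model drifts toward ending on a question.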
>>509994560
I can see possible business models (ones that aren't just hype or outright scams). It's just that right now the costs to train are so high, and there's such a race to dominate the market, that they're stumbling over themselves and wasting a lot of money trying to do it, rather than thinking longer term in any meaningful and positive way. It really is just mercantile hustle at the moment.
>just tulips all the way down
Forget about all the fake money and debt propping things up and look at the actual technology. When I was growing up in the 90s, it seemed like it might be at least 50+ years before we had computer interfaces you could talk to like in Star Trek. I do think all this "superintelligence" and "AGI" and "AI doomer" shit is bullshit. But one thing that was a problem for many years is that a simple computer program (in any language you want), when you map it onto real-world systems (like counting votes or spotting fraud), only seems simple at first. Don't assume malice here, assume you actually want to be a good white man and electronically count votes. Should be easy? But you always have edge cases: with a paper ballot I can spoil my vote, and some might even say I have a right to do that. So when you map that to digital, decisions need to be made. This tech at the moment isn't "deep" or that impressive, but the fact that it has even a shallow kind of intuition and can deal with very noisy queries/data is useful.
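To make the edge-case point concrete, here's a hypothetical toy counter; the candidate names and rules are made up for illustration, and the "spoiled" handling is exactly the kind of decision someone has to make explicitly the moment you go digital:

from collections import Counter

CANDIDATES = {"alice", "bob"}  # hypothetical ballot options

def count_votes(ballots):
    """Tally ballots; anything that isn't exactly one valid candidate is spoiled.

    Whether spoiled ballots are reported, rejected, or silently dropped is a
    policy decision, not a programming one - the code just forces you to pick.
    """
    tally = Counter()
    spoiled = 0
    for ballot in ballots:
        choice = ballot.strip().lower()
        if choice in CANDIDATES:
            tally[choice] += 1
        else:
            spoiled += 1  # blank, scribbled on, "none of the above", etc.
    return tally, spoiled

if __name__ == "__main__":
    votes = ["Alice", "bob", "", "ALICE", "donald duck"]
    tally, spoiled = count_votes(votes)
    print(dict(tally), "spoiled:", spoiled)
    # {'alice': 2, 'bob': 1} spoiled: 2

The paper version lets all those ambiguous cases stay ambiguous; the digital version makes you pick a rule for every one of them up front, which is where the "simple" program stops being simple.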