>>16737778
Look at how much pushback there's been against AI censoring "obscene" content or against developers trying to force it to "correct" controversial opinions; people don't like being overtly told what to think or do, and I think any widely adopted system would be under enough scrutiny that people would quickly draw attention to it pushing a particular product or agenda.
>How do they make money? Advertising and analytics, which are dependent on the people using those platforms not noticing or caring. And clearly it works.
True, but I don't think that's a sustainable model for AI. Social media is much, MUCH less resource intensive than AI: users provide the bulk of the content and the bulk of the data, so the only real work on the company's part is storage and security, which makes supporting it through advertising and analytics more viable. AI models being free and open access is not sustainable, not just because of the higher resource costs, but because their very nature makes them extremely difficult to protect as intellectual property unless the company can afford the money and manpower to carefully cultivate its own training data. AI will be a paid product by necessity, and that paid status will make consumers far less willing to tolerate any shenanigans from the provider.