>>105608879
I was looking back at a government policy recently and at the impact it had.
Search results weren't working great, so I used a model to assist. It understood the policy and some of the implications & effects, but then it started inserting incorrect information, and it wasn't a hallucination.
It turned out it was using new reports of government data that obscured figures to make the policy look more successful. When questioned on the effects and that data, it then cited another study, funded pre-policy, to support its claims, though it did recognise that there was "discourse" surrounding it.
OK! Things are getting good, it'll be solved, right?
Err, not quite. Now it was pulling in reports and news articles from businesses that were opposed to the policy.
So at this point it's gone off, pulled in something that was incorrect due to selective data and reporting from the government, and then, to counteract that bias, sucked in a lobbied report plus comments and articles.
Eventually, it found data from charities and the full figures in context with previous years including from before the policy was implemented.
It had to be specifically fed and queried to recognise the bias, and every time it read the biased source it would keep inserting little falsities that are 100% false, like where the funds raised by the policy would be spent, because it kept looking at things from before the policy was implemented and at those business sources criticising where the money was going.
Don't get me wrong, as a tool it was good and could crunch the numbers and analyse, but it still required significant effort to sift through the false data.
For a normalfag looking at that at face value, it's no better than a biased news source. GIGO.