Anonymous
7/14/2025, 11:07:00 PM No.105907177
Has there ever been any actual proof of a narrow AI improving exponentially? People talk about this supposed exponential improvement over a short period once AI surpasses humans in every domain, the so-called Singularity, but as far as I know that has never been demonstrated.
People point at chess engines playing against themselves and very quickly becoming something like 10,000 times better than the best grandmaster, but from reading about what actually happened in that research it does not look like that. They basically made AI A, which won 9/10 games against the best human player, and concluded it was 10x better than the best human. Then they trained AI B against AI A and it won 9/10 games, so it was declared 10x better than AI A, or 100x better than the best human. But is that really the case? They put a static AI against a training one, and the training one learned over a long period how to outplay that one opponent. Does learning how to outplay one specific player really make you multiple times better than that player, or is it just nerds applying DragonBall Z power-level logic to systems playing against themselves and granting only one of them the AI analogue of fluid intelligence?
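For what it's worth, the standard way chess strength is compared (Elo) is additive in log-odds, not multiplicative, so "9/10 wins = 10x better" does not follow from the rating math either. A minimal Python sketch of that point, assuming the textbook Elo expectation formula (the elo_gap helper name is mine, just for illustration):

    import math

    def elo_gap(win_prob: float) -> float:
        # Invert the Elo expectation E = 1 / (1 + 10 ** (-gap / 400))
        # to get the rating gap implied by a head-to-head win probability.
        return 400.0 * math.log10(win_prob / (1.0 - win_prob))

    print(round(elo_gap(0.9)))       # ~382 Elo points for winning 9/10 games
    print(round(2 * elo_gap(0.9)))   # ~764 points for two stacked 9/10 steps

Stacking two "wins 9/10" steps adds roughly 382 + 382 rating points; nothing in the formula licenses calling that "100x better than the best human."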
And once again, this is just an example from games, where scores are easy and objective: you either win or lose. In real-world work environments, success and failure are hard to measure. We already have synthetic data, but we still need people to decide whether it is any good, otherwise we get model collapse; and if we rely on human data, the system only becomes good at mimicking the median internet user. So where do they think the superintelligence will come from? Not to mention that for an AI to know anything beyond what humans know today, it would need to do its own research and experiments, which take time and money.