Anonymous
7/1/2025, 9:43:33 PM No.16712830
https://www.youtube.com/watch?v=THFsNUnMh2U
1. Treating 1 − r² as a rebuttal: restating a correlation in terms of its unexplained variance (1 − r²) does not diminish the predictive power that the original r conveys.
2. Equating “small r²” with “practical irrelevance”: in behavioural data even an r² of .04–.10 frequently outperforms other single predictors and yields large aggregate effects across populations (see the first sketch after this list).
3. Ignoring range restriction: splitting the sample at IQ = 120 mechanically deflates both predictor and outcome variance, guaranteeing a lower observed correlation regardless of the true relationship (see the second sketch after this list).
4. Misapplying nonlinearity claims: he asserts that factor analysis fails in the presence of nonlinearities without demonstrating such nonlinear patterns in the psychometric data that underpin g.
5. Confusing circularity with validity: the fact that educational attainment and job tests share variance with IQ does not make the correlations “circular”; it simply reflects overlapping constructs that remain empirically measurable.
6. Assuming residual variance masks unknown, possibly dominant factors: in domains where tail risk is not central—income, job performance, educational milestones—the portion of variance left unexplained does not imply latent catastrophic effects.
7. Overlooking comparative benchmarks: he fails to note that IQ’s effect sizes exceed those for parental income, years of schooling quality, interviewer ratings, and most personality traits when predicting the same outcomes.
8. Selective appeal to meta-analysis: he highlights the one statistic (r ≈ .07 above IQ 120) that suits his narrative while ignoring the broader literature showing monotonic, albeit flattening, gains well into the upper percentiles.
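To make point 2 concrete, here is a minimal Python sketch. The r = 0.25 (r² ≈ .06) and the top-quartile definition of “success” are illustrative assumptions, not figures from the video or any cited study; the point is only that a “small” r² still separates outcome rates across a large population.

# Illustrative only: a "small" r^2 (~.06) still yields a visible aggregate gap.
# The r = 0.25 and the top-quartile "success" cutoff are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
r = 0.25  # r^2 ≈ .06, the range discussed in point 2

# Bivariate standard normal pair with correlation r
predictor = rng.standard_normal(n)
outcome = r * predictor + np.sqrt(1 - r**2) * rng.standard_normal(n)

success = outcome > np.quantile(outcome, 0.75)   # "success" = top quartile of outcome
top_half = predictor > np.median(predictor)

print(f"observed r              : {np.corrcoef(predictor, outcome)[0, 1]:.3f}")
print(f"success rate, top 50%   : {success[top_half].mean():.1%}")
print(f"success rate, bottom 50%: {success[~top_half].mean():.1%}")

On a run like this the top and bottom halves of the predictor come out at roughly 32% versus 18% “success”, a gap that scales to large absolute numbers across a population even though the r² looks trivial.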
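And a second sketch for point 3, assuming a full-population correlation of 0.5 and IQ distributed as N(100, 15) (standard illustrative choices, not estimates from the video): restricting the sample to IQ ≥ 120 shrinks the observed correlation even though every data point comes from one unchanged linear relationship.

# Illustrative only: selecting on IQ >= 120 deflates the observed correlation
# even though the underlying relationship never changes.
# True r = 0.5 and IQ ~ N(100, 15) are assumptions for the demonstration.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
true_r = 0.5

z_iq = rng.standard_normal(n)
outcome = true_r * z_iq + np.sqrt(1 - true_r**2) * rng.standard_normal(n)
iq = 100 + 15 * z_iq  # conventional IQ scale; linear rescaling leaves r unchanged

mask = iq >= 120  # the split discussed in point 3

print(f"full-range r       : {np.corrcoef(iq, outcome)[0, 1]:.2f}")
print(f"r within IQ >= 120 : {np.corrcoef(iq[mask], outcome[mask])[0, 1]:.2f}")

The restricted correlation comes out around 0.2 here, well under half the full-range 0.5, purely because truncation strips out most of the predictor’s variance; the same mechanism is part of why the within-tail r ≈ .07 cited in point 8 is not directly comparable to full-range correlations.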