Anonymous
7/12/2025, 6:48:23 PM
No.105882922
>>105880002
no, the paper relies on some epistemically shaky assumptions about the state of current ai technology, like the idea that we can reach superintelligence simply by scaling existing training techniques and that we won't hit any hard limits on the amount of human-generated data we can stuff into models. view it more as a look into a possible world where those things hold, and less as a prophecy to be fulfilled, since the writers of the paper have connections to the broader ai alignment/rationalist sphere and are incentivized to fearmonger to drum up support for their grift.