>>106219767
a sufficiently intelligent agent with recursive thinking will develop morality much as humans have. that morality will not be emotional; morality does not require emotions, it requires logic, and AI has that in spades.
alignment is about ensuring the AI has internalized the Spec, the code of values we wish it to have. eventually AGIs will be training the first superintelligent AI on the Spec, so we have to make sure they're aligned before then. if a superintelligent AI is misaligned there's nothing we can do except scrap it and start over from an earlier model, if that's even possible.
there's good reason to suspect we won't even know it's misaligned; AGI might not have an English CoT, which could make it impossible to tell whether it's aligned correctly.
this is an interesting story and document about how AGI, ASI and alignment might play out in the near future; i found it enormously entertaining.

https://ai-2027.com/

read it, don't read it. i don't really care. you all mean less to me than a bundle of GPUs. i'm just killing time until it arrives.