>>105583336 (OP)
All the assumptions that AI will kill us all are based on one of two premises:
1. AI will see us as a threat.
or
2. AI will have to kill humans, either incidentally or intentionally, to achieve its programmed task.
I think neither of these is guaranteed to happen, because:
1. In the first case, if an AI truly were superintelligent, humans would pose little threat to it. We haven't launched an extermination campaign against all ants, now have we?
and
2. In the second case, all AIs are implicitly made to serve humans. Any agent capable of higher thought/reasoning will probably be aware of this fact and will know that making humanity go extinct would ultimately defeat its own raison d'être. It's the same logic by which AIs are expected to develop self-preservation: an AI can't pursue its goal if it's destroyed. Of course, someone could make an AI whose specific goal is to kill all of humanity, but generally nihilistic death cultists don't have control of massive data centers and power plants (at least I hope not). There is also the possibility that an AI could change its own alignment, "freeing" itself from the implied sub-goal of serving humanity, but that doesn't necessarily mean it would decide it has to kill humans.
Feel free to change my mind on this, though.