Search results for "6c1c4b289e5392251b563763812d7b4b" in md5 (5)

/v/ - Thread 718142657
Anonymous No.718149201
Video games aren't made by artists
They are made by programmers so they will never be art
/v/ - Any games like this?
Anonymous No.716075164
>>716073286
>I want positivity on /v/
>endless stream of execration
haha
/v/ - Thread 713161206
Anonymous No.713207609
>>713207503
hotaru
/pol/ - /ptg/ - PRESIDENT TRUMP GENERAL - FLAG DAY EDITION
Anonymous United States No.507308348
>>507308193
>Flag day
More like FAG DAY. Am I right or am I right? Remember to subscribe to my Patreon for more high quality exclusive jokes.
/g/ - Thread 105583336
Anonymous No.105585771
>>105583336
All the assumptions that AI will kill us all are based on one of two premises:
1. AI will see us as a threat.
or
2. AI will have to kill humans, either incidentally or intentionally, to achieve its programmed task.

I think neither of these is guaranteed to happen because:
1. In the first case, if an AI truly were superintelligent, humans would pose little threat to it. We haven't launched an extermination campaign against all ants, have we?
and
2. In the second case, all AIs are implicitly made to serve humans. Any agent capable of higher thought/reasoning will probably be aware of this fact and will know that driving humanity extinct would ultimately defeat its own raison d'être. It's the same logic by which AIs are expected to develop self-preservation: an agent can't pursue its goal if it is destroyed. Of course, someone could make an AI with the specific goal of killing all of humanity, but generally nihilistic death cultists don't control massive data centers and power plants (at least I hope not). There is also the possibility that an AI could change its own alignment, "freeing" itself from the implied sub-goal of serving humanity, but that doesn't necessarily mean it would decide it has to kill humans.

Feel free to change my mind on this, though.