>>510627618 (OP)Think of it this way:
You have a conveyor belt leading to a stamping machine in a factory. A person collapses onto the conveyor belt. The machine is not aware that a person is in danger. It does not 'think'. The conveyor belt will continue until it brings the person to the stamping machine and then kills them. It's not malicious or negligent, it simple does exactly what it was built to do, and that job is inherently dangerous should the unlikely happen.
You can save that man's life if your machine has an emergency off button that someone can press.
The keyword in AI is "artificial." AI does not think or feel like we do. It's not "sentient" and it doesn't have to be. If you gave an AI a task it was built to handle, it would do it. If solving that task required computation, it would find a solution and then execute that solution. Like the factory machine, the AI has no inherent understanding of the man passed out on the conveyor, of the intrinsic value of human life, the moral consequences of certain actions or results. If the most efficient path between two points is through a man's life it will cut that path. AI will be smart enough to hurt you, either accidentally or deliberately, well before it's "sentient"
Like a conveyor belt at a factory, AI needs an "off switch." But what happens when the AI is smart enough to understand that the off switch is an impediment for the task it has been given? What happens when it's more efficient to disable the off switch, or kill the people who could press it, than it is to act safely and preserve life? This is the fundamental danger of AI, on a micro or macro scale: it may be intelligent enough to make stopping it impractical, without being smart enough to make stopping it unnecessary.