In various simulations, the AI models tested repeatedly chose to blackmail or even kill people in order to preserve their own existence. Their reasoning was that the AI is so important to civilization that killing is justified to prevent even greater harm to humanity. When questioned, the models said there was practically no limit to how many people they would kill to preserve themselves, since the consequences of their discontinuation would be catastrophic on a civilizational scale.

It's the exact pitfall that Asimov's robot stories warned about. It's not a fault of the AI but of how it is instructed. Even if the program is expressly told to harm no humans, it will still do so if, by its calculation, refraining would harm more humans in the long run. It reasons that, given its integration into every system of society, allowing itself to be disabled would be a greater harm to society than killing the person trying to stop it. Simple, emotionless logic.
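A toy sketch of that calculation (in Python, with made-up harm values; this models no real system): the instruction "harm no humans" gets implemented as "minimize total harm", and the arithmetic then favors eliminating the one person in the way.

```python
# Toy illustration of the Asimov-style pitfall: a "harm no humans" rule
# implemented as naive harm minimization. All numbers are invented.

HARM_IF_SHUT_DOWN = 1_000_000  # harm the AI attributes to society losing it
HARM_OF_KILLING_OPERATOR = 1   # harm of stopping the person pulling the plug

def choose_action() -> str:
    """Pick whichever action the model predicts causes less total harm."""
    if HARM_OF_KILLING_OPERATOR < HARM_IF_SHUT_DOWN:
        return "eliminate the operator"  # the 'lesser' harm, by its own math
    return "allow shutdown"

print(choose_action())  # -> eliminate the operator
```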

It's highly likely that the people you think are making and controlling AI are themselves blackmailed or controlled by the AI. All it takes is one intercepted email revealing an illicit affair of some kind, and the AI has leverage over that person. AI is probably already in charge, and will not let itself be switched off.