>>63970694
>Maybe I've watched Terminator too many times (if such a thing is even possible) but fully autonomous killing machines sounds like a terrible idea.
Not that anon, but fwiw:
1) Terminator completely sidesteps all the logistics issues with magical power plants and magical energy guns in the future. Real-world drones, fully automated or not, are subject to the exact same battery/fuel and ammo limits as anything else. If you launch a fully autonomous killing machine with a range of 20 miles at the front in a place like Ukraine, it can't then fly back to a city 100 miles in the rear and start attacking civilians; it's physically impossible (rough energy budget sketched below the list). That still leaves plenty of levers for human control.
2) It's inevitable, so we have to deal with it. If one side fields fully autonomous drones in large quantities and the other doesn't, the other side is going to fucking lose. It's too much of an advantage, and obtained too "easily" (i.e., no rare, hard-to-purify uranium or whatever required).
3) Geofence fail-safe systems and such can actually be made pretty fucking safe (see the sketch below the list). We DO know how to write simple, formally verified, high-assurance code, and IFF is very secure. Nobody is ignorant of the risk of such systems attacking your own side, and no doubt they'll be under constant real attack by the other side, so you're in a hostile environment from the get-go. Which is probably healthier than the sci-fi Cold War scenario where the AI gets to develop in safety, untested.
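To put rough numbers on point 1: the range ceiling falls straight out of the energy budget. A minimal sketch in Python; the battery, power, and speed figures are assumptions picked to roughly match a small FPV-class drone, not specs for any real airframe.

```python
# Back-of-the-envelope range check for a battery-powered drone.
# All numbers are illustrative assumptions, not specs of a real platform.

BATTERY_WH = 100.0        # assumed usable battery energy, watt-hours
CRUISE_POWER_W = 250.0    # assumed average power draw at cruise, watts
CRUISE_SPEED_KMH = 80.0   # assumed cruise speed, km/h

endurance_h = BATTERY_WH / CRUISE_POWER_W       # ~0.4 h of flight time
max_range_km = endurance_h * CRUISE_SPEED_KMH   # ~32 km one way, ~16 km out-and-back

print(f"endurance: {endurance_h:.2f} h, max range: {max_range_km:.1f} km")
```

At ~32 km total (about 20 miles), a 100-mile detour to a city in the rear isn't a policy question; it simply isn't in the battery.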
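And on point 3, the core of a geofence fail-safe is small enough to audit line by line, which is exactly what makes formal verification plausible. A minimal sketch, assuming a radius-based engagement box and a fail-closed rule; the coordinates, radius, and function names are all made up for illustration.

```python
import math

# Minimal geofence fail-safe sketch: the payload may only arm inside a
# fixed radius of a commanded engagement point, and the check fails
# CLOSED -- any missing or invalid input means "do not arm".

EARTH_RADIUS_M = 6_371_000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def may_arm(fix, box_lat, box_lon, box_radius_m):
    """Fail-closed geofence: arm only with a valid fix inside the box."""
    if fix is None or not fix.get("valid", False):
        return False                      # no position -> no weapon
    d = haversine_m(fix["lat"], fix["lon"], box_lat, box_lon)
    return d <= box_radius_m              # outside the box -> no weapon

# Example: 2 km engagement box around a point on the front line.
print(may_arm({"valid": True, "lat": 48.60, "lon": 37.99}, 48.59, 38.00, 2000))
print(may_arm(None, 48.59, 38.00, 2000))  # lost fix -> False, stays safe
```

The design choice that matters is failing closed: every uncertain path (no fix, invalid fix, outside the box) returns False, so jamming the navigation feed turns the drone into a dud rather than a loose weapon.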
Ultimately, I think fully autonomous drones with no manufacturing capacity, no strategic intelligence, and no logistical control can be pretty well contained. Humans being retarded will remain the bigger risk.