According to Pentagon policy, a human will always decide when a robot should kill someone. So if you get fired on by a Predator drone, know that it's not some impersonal thing: there is some person sitting somewhere who said, "That guy right there: Kill him."
I guess this doesn't mean you can't program some autonomy into robots. Like they could fly around, and then a message would pop up on the computer screen saying, "I think we should kill this guy." And it would have a "Yes" and a "No" button to press. And this would stop the inevitable "kill all humans" problem that arises with AI, because when the robot says, "I think we should kill all humans," one just has to push the "No" button.
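For the engineers in the audience, the whole scheme above boils down to a human-in-the-loop veto, and it fits in a few lines. This is a playful sketch, not anyone's actual targeting software; the function and variable names are all invented:

```python
# A sketch of the human-in-the-loop veto described above.
# The robot may *propose* a target, but a human callback must
# return True (the "Yes" button) before anything happens.

def propose_strike(target, human_decides):
    """Ask the human operator; act only on an explicit 'Yes'."""
    approved = human_decides(f"I think we should kill {target}.")
    return "engaged" if approved else "stood down"

# The inevitable AI proposal is handled by one press of the "No" button:
always_no = lambda prompt: False
print(propose_strike("all humans", always_no))  # stood down
```

Note that safety here depends entirely on the human actually reading the prompt before clicking.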
That sounds like a perfectly happy solution, and I don't see a problem with it. It does creep me out a little, though, how my Roomba keeps suggesting all these people we should kill. Do any of you have that problem?