Robots Only Kill When We Allow Them

According to Pentagon policy, a human will always decide when a robot should kill someone. So if you get fired on by a Predator drone, know that it’s not some impersonal thing; there is some person sitting somewhere who said, “That guy right there: kill him.”

I guess this doesn’t mean you can’t program some autonomy into robots. Like, they could fly around, and then a message will pop up on the computer screen saying, “I think we should kill this guy.” And it will have a “Yes” and a “No” button to press. And this will stop the inevitable “kill all humans” problem that arises with AI, because when the robot says, “I think we should kill all humans,” one just has to push the “No” button.
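For the engineers in the audience, here is a minimal sketch of how that screen might work. Everything in it, down to the function names and the list of proposals, is made up for the joke; no actual drone exposes anything like this (one hopes):

```python
# A toy sketch of the “No” button. Every name here is invented for
# illustration; this is not any real targeting system's API.

def human_review(proposal: str) -> bool:
    """Show the robot's suggestion and wait for a human to press Yes or No."""
    answer = input(f'Robot: "I think we should kill {proposal}." [yes/no] ')
    return answer.strip().lower() == "yes"

# When the robot inevitably suggests "all humans," the fix is simple:
# somebody pushes the "No" button.
for proposal in ["this guy", "all humans"]:
    if human_review(proposal):
        print(f"Authorized: {proposal}")
    else:
        print(f"Denied: {proposal}")
```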

That sounds like a perfectly happy solution, and I don’t see a problem with it. It does creep me out a little, though, how my Roomba keeps suggesting all these people we should kill. Do any of you have that problem?

11 Comments

  1. “…how my Roomba keeps suggesting all these people we should kill. Do any of you have that problem?”

    No, but my computer is always telling me that this or that program has been “terminated”, or “was terminated unexpectedly”. And there is always this kind of pause, like my computer is letting that sink in, as if to say “you never know what might be terminated next”.

  2. I personally look forward to the incredibly dust-free world our Roomba overlords will provide. I might actually want to go to Detroit once the machines have “cleaned up.” Just be prepared for the power struggle they will face with the leaf blowers of the world!

    The phrases “you suck” and “you blow” will have totally new connotations that could get you killed.

  3. “According to Pentagon policy, a human will always decide when a robot should kill someone.”

    Well sure, but what happens when they release Pentagon Policy 2.0? They could just move it back one level: A human will always decide when a robot will be put in charge of deciding when a robot should kill someone.

  4. How will the robot determine if the target is human?
    Targets that seem human could very well be synthetic replicants, which would not need a yes/no kill order.
    Targets that don’t seem human could very well be cyborgs, which would need a yes/no kill order.

    How will the authorizing agency determine if the yes/no clicker is human?
    Yes/no clickers that seem human could very well be synthetic replicants, which should not be authorized to click yes/no.

    Couldn’t the yes/no clicking be scripted?

    I wonder if No will be the default…

  5. Someone whom I shall not name, to whom I am married, required that we buy a Roomba and a Scooba some years back. Expensive machines that take the labor out of the easiest and fastest cleaning jobs in the house. I vacuum and mop better than they do. They have $80 batteries that die forever if you forget to charge them enough OR if you charge them too much. If these were the robots in question, I wouldn’t be too concerned yet.

