Should David Petraeus Be Replaced With a Computer?

[youtube]http://www.youtube.com/watch?v=YX4A-iSoDiU[/youtube]

Today’s Washington Post brings an update on the Pentagon’s work to develop artificial intelligence to the point that a drone could decide on its own whether to kill.  The article points out that when the CIA makes kill decisions on drone missions, that decision currently falls to the director, a position recently taken over by retired General David Petraeus.  In other words, the project appears to be an effort to develop a computer that can replace David Petraeus in decision-making.

Of course, this prospect raises many issues:

The prospect of machines able to perceive, reason and act in unscripted environments presents a challenge to the current understanding of international humanitarian law. The Geneva Conventions require belligerents to use discrimination and proportionality, standards that would demand that machines distinguish among enemy combatants, surrendering troops and civilians.

More potential problems:

Some experts also worry that hostile states or terrorist organizations could hack robotic systems and redirect them. Malfunctions also are a problem: In South Africa in 2007, a semiautonomous cannon fatally shot nine friendly soldiers.

The article notes that in response to issues surrounding the development of autonomy for weapons systems, a group calling itself the International Committee for Robot Arms Control (ICRAC) has been formed.  On the ICRAC website, we see this mission statement:

Given the rapid pace of development of military robotics and the pressing dangers that these pose to peace and international security and to civilians in war, we call upon the international community to urgently commence a discussion about an arms control regime to reduce the threat posed by these systems.

We propose that this discussion should consider the following:

  • Their potential to lower the threshold of armed conflict;
  • The prohibition of the development, deployment and use of armed autonomous unmanned systems; machines should not be allowed to make the decision to kill people;
  • Limitations on the range and weapons carried by “man in the loop” unmanned systems and on their deployment in postures threatening to other states;
  • A ban on arming unmanned systems with nuclear weapons;
  • The prohibition of the development, deployment and use of robot space weapons.

In the end, the argument comes down to whether one believes that computer technology can be developed to the point at which it can operate in the war theater with autonomy.  The article cites experts on both sides of the issue.  On the positive side is Ronald C. Arkin, whose work is funded by the Army Research Office.  Believing the issues can all be addressed, Arkin is quoted as saying “Lethal autonomy is inevitable.”

On the negative side of the argument is Johann Borenstein, head of the Mobile Robotics Lab at the University of Michigan.  Borenstein notes that commercial and university laboratories have been working on the problem for over 20 years, and yet no real autonomy has been achieved in the field.  He ascribes this deficiency to the inability to put common sense into computers: “Robots don’t have common sense and won’t have common sense in the next 50 years, or however long one might want to guess.”

As HAL said in 2001: A Space Odyssey: “I’m afraid, Dave.”
23 replies
  1. William Ockham says:

    Doesn’t anybody read Asimov any more?

    But, seriously, who would be held liable when the autonomous weapon commits a war crime? The software developers? The nearest military commander?

    Some days I think that the only reason that climate change won’t destroy civilization as we know it is because the Pentagon, in its infinite blindness to the limitations of technology, will do it first. They will develop both the autonomous weapon and the computer virus that combine to take us out. Perhaps people will wake up and realize that the only thing accomplished by developing new ways to kill people is more dead people.

  2. SteveInNC says:

    Unfortunately, I think both Arkin and Borenstein are correct. After all, it’s not only robots that lack common sense…

    I seem to remember a little joke about “military intelligence”.

  3. chetnolian says:

    @William Ockham:

    Doesn’t matter, nobody is convicted now.

    Seriously though, you only have to spend a little time among the people who study this sort of thing to get very frightened indeed.

    Having said that, I still think, and have thought for some time, that the immediate risk is of a UAV, whether autonomous or flown from somewhere inside a nice safe mountain in Nevada or wherever, running down a full 747 or A380.

  4. scribe says:

    I mean, after a whole series of James Cameron/Ah-nalt movies about good and bad robots and computerized killing machines run wild, we’re spending money on making them.

    What could possibly go wrong, especially when the machines decide we’re a threat to them?

  5. Mary says:

    Sideways related – a bunch of online gamers solved the protein structure puzzle that biochemists with computers had been working on for a decade
    http://www.cbsnews.com/8301-504763_162-20108763-10391704.html
    “The non-scientist gamers came up with an accurate model of the so-called protease molecule in three weeks.”

    I guess one of the keys will be when they can program a computer to compute when a terrorist is a freedom fighter and when it becomes a legal ally (shades of Libyan activists who were tortured by the US and are now funded and supported by … the US) vs an illegal terrorist supporter (shades of a family forced at gunpoint into feeding insurgents) and a quantitative formula for killing children (apparently that formula is – if they bring their children to the battlefield, kill them – and obtw, the world is the battlefield).
    It is interesting to line up the rhetoric of worldwide battlefields with the reality of programming.

  6. BoxTurtle says:

    As a computer programmer with decades of experience, I’d call this one of the most foolish ideas since… well, since. I can’t think of anything that compares to it.

    The number of times I’ve found bugs where the comment on the line is something like “We should never get here” or “Just in case, can’t happen” is amazing. The number of times someone has typed LA rather than L in assembler (LA loads the address of an operand; L loads its value) would frighten anyone.
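
    For anyone who doesn’t read mainframe assembler, here’s roughly the same bug sketched in C (a made-up illustration, nothing from the article): you meant to load the value a pointer points at, but you loaded the pointer itself.

        #include <stdio.h>

        int main(void) {
            int target_confirmed = 7;       /* the value we actually care about */
            int *flag = &target_confirmed;

            int value = *flag;              /* "L": load the value at the address */
            size_t address = (size_t)flag;  /* "LA": load the address itself */

            /* Treat the address as if it were the data and you get a number
               that has nothing to do with reality, but may look plausible
               enough to act on. */
            printf("value = %d, address-as-data = %zu\n", value, address);
            return 0;
        }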

    Anybody who thinks this is a good idea should look at the number of bug fixes Micro$loth has issued for Windows 7.

    Boxturtle (We’d likely do as well with a statistical pattern over the entire battlefield)

  7. BoxTurtle says:

    @Jim White: And remember the Pentium bug? Imagine what might happen even if we had PERFECT software, and that happened.

    Boxturtle (Division is futile, you will be approximated)

  8. MadDog says:

    @William Ockham: Or Philip K. Dick – Do Androids Dream of Electric Sheep?

    As a voracious reader of Science Fiction in my youth, I woke up one day in my adult years to find that Science Fiction evidently is passé.

    Is it because it is no longer fiction?

  9. scribe says:

    @Phil Perspective: Let’s not forget RoboCop, which painted an amazingly accurate picture of privatized Detroit and robotic killing machines called “cops”. Which, of course, were programmed to never arrest the officers of Omni Consumer Products.

    And then there’s the whole privatization of the military, too. From the movie:

    Clarence Boddicker: Hey, dickey boy, how’s tricks?
    Dick Jones: That *thing* is still alive…
    Clarence Boddicker: I don’t know what you’re talking about.
    Dick Jones: The police officer who arrested you, the one you spilled your guts to…
    Clarence Boddicker: [gets up close to Jones’ face] Hey… Take a look at my face, *Dick*! He was trying to kill me…
    Dick Jones: He’s a cyborg, you idiot! He recorded every word you said, his memories are admissible as evidence! You *involved* me! You’re gonna have to kill it…
    Clarence Boddicker: Well listen chief… Your company built the fucking thing! Now I gotta deal with it? I don’t have time for this bullshit!
    [Clarence starts heading towards the door]
    Dick Jones: Suit yourself Clarence… But Delta City begins construction in two months. That’s two million workers living in trailers, that means drugs, gambling, prostitution…
    [Clarence backtracks into Jones’ office]
    Dick Jones: Virgin territory for the man who knows how to open up new markets… one man could control it all, Clarence.
    Clarence Boddicker: Well I guess we’re gonna be friends after all, *Richard*.
    Dick Jones: [tosses Robocop’s tracking device to Clarence] Destroy it…
    Clarence Boddicker: Gonna need some major firepower. You got access to military weaponry?
    Dick Jones: We practically are the military.

    http://www.imdb.com/title/tt0093870/quotes

    And the news item inside the movie, where the Star Wars system went haywire and blasted a whole section of California.

    That movie might be even more accurate a prediction of today than the others.

  10. Bob Schacht says:

    @William Ockham:
    “Doesn’t anybody read Asimov any more?
    But, seriously, who would be held liable when the autonomous weapon commits a war crime? The software developers? The nearest military commander?”

    I think this is exactly what the real issue is. It will take at least a generation for the Courts to sort that one out.

    Bob in AZ

Comments are closed.