
Should David Petraeus Be Replaced With a Computer?

Video: http://www.youtube.com/watch?v=YX4A-iSoDiU

Today’s Washington Post brings an update on the Pentagon’s work to develop artificial intelligence to the point that a drone could decide on its own whether to kill. The article points out that when the CIA makes kill decisions on drone missions, that decision currently falls to the director, a position recently taken over by retired General David Petraeus. In other words, the project appears to be an effort to develop a computer that can replace David Petraeus in decision-making.

Of course, this prospect raises many issues:

The prospect of machines able to perceive, reason and act in unscripted environments presents a challenge to the current understanding of international humanitarian law. The Geneva Conventions require belligerents to use discrimination and proportionality, standards that would demand that machines distinguish among enemy combatants, surrendering troops and civilians.

More potential problems:

Some experts also worry that hostile states or terrorist organizations could hack robotic systems and redirect them. Malfunctions also are a problem: In South Africa in 2007, a semiautonomous cannon fatally shot nine friendly soldiers.

The article notes that, in response to the issues raised by the development of autonomous weapons systems, a group calling itself the International Committee for Robot Arms Control (ICRAC) has been formed. On the ICRAC website, we see this mission statement:

Given the rapid pace of development of military robotics and the pressing dangers that these pose to peace and international security and to civilians in war, we call upon the international community to urgently commence a discussion about an arms control regime to reduce the threat posed by these systems.

We propose that this discussion should consider the following:

  • Their potential to lower the threshold of armed conflict;
  • The prohibition of the development, deployment and use of armed autonomous unmanned systems; machines should not be allowed to make the decision to kill people;
  • Limitations on the range and weapons carried by “man in the loop” unmanned systems and on their deployment in postures threatening to other states;
  • A ban on arming unmanned systems with nuclear weapons;
  • The prohibition of the development, deployment and use of robot space weapons.


In the end, the argument comes down to whether one believes that computer technology can be developed to the point at which it can operate autonomously in a theater of war. The article cites experts on both sides of the issue. On the positive side is Ronald C. Arkin, whose work is funded by the Army Research Office. Believing the issues can all be addressed, Arkin is quoted as saying, “Lethal autonomy is inevitable.”


On the negative side of the argument is Johann Borenstein, head of the Mobile Robotics Lab at the University of Michigan. Borenstein notes that commercial and university laboratories have been working on the problem for over 20 years, yet autonomy in the field remains out of reach. He attributes this deficiency to the inability to put common sense into computers: “Robots don’t have common sense and won’t have common sense in the next 50 years, or however long one might want to guess.”


As HAL said in 2001: A Space Odyssey: “I’m afraid, Dave.”