The history of the We Robot conference and its Miami Law founder, Professor A. Michael Froomkin.


MIAMILAW magazine
WINTER 2013

WHO IS TO BLAME WHEN ROBOTS KILL? PROFESSOR A. MICHAEL FROOMKIN’S CONFERENCE PROBES IMPLICATIONS OF VIRTUALLY AUTONOMOUS MACHINES

UNIVERSITY OF MIAMI SCHOOL OF LAW


ROBOTS AT WAR

As Humans Take Back Seat in Combat, Who Is to Blame When Machines Kill?
By Nick Madigan

Photo by Jenny Abreu for Miami Law Magazine


From 25,000 feet in the air, an unmanned MQ-1 Predator drone operated by the Central Intelligence Agency draws a bead on an identified al-Qaeda leader standing with a group of people in a ramshackle village in the mountains of northwest Pakistan. At the touch of a button — pressed by a “pilot” sitting at a control panel more than 6,000 miles away at CIA headquarters in Langley, Va. — a Hellfire missile bursts from the drone and streaks toward the ground, pulverizing the terrorist and everyone around him, including, more than likely, at least a few civilians.

The scenario has been repeated hundreds of times in recent years as the United States and its allies press their war on terrorism with an increasingly sophisticated array of weapons, the most effective of which — robotic aircraft like Predator and Reaper drones — enable their human overseers to remain far from the battlefield and its attendant dangers while machines do the work of killing.

At We Robot: Front center, Kate Darling, MIT Media Lab; seated in second row, from left, Kristen Thomasen, University of Ottawa; A. Michael Froomkin, Miami Law; M. Ryan Calo, University of Washington; Markus Wagner, Miami Law; and Richard M. O’Meara, Rutgers University; standing at rear, Lisa Shay, U.S. Military Academy, West Point; Mary Anne Franks, Miami Law; Amir R. Rahmani, University of Miami College of Engineering; and Ian Kerr, University of Ottawa.



But does sending machines to do the work of soldiers and airmen make war easier to wage, and thus more likely? Does the logistical distance from a scene of carnage lessen any feeling of responsibility for causing it? And, once it is possible to deploy robotic weapons that make decisions for themselves — something that’s bound to happen sooner or later — who will bear the ultimate liability if the robots commit acts of genocide and other crimes of war?

Such were the questions pondered at We Robot, a conference at Miami Law in April that addressed the emerging legal and policy issues raised by robots and their uses, not only in military settings but in the home, hospitals, businesses and public spaces. The conference was the idea of Miami Law Professor A. Michael Froomkin, who said the challenge was to “start a conversation now between people who make the robots and those who make the rules.” He called the conference a “very preliminary” discussion but a “big step toward the creation of an interdisciplinary community, perhaps even an academic field.”

Miami Law Dean Patricia D. White agreed. In a speech to board members of the Law Alumni Association, she said robotics and its ethical implications represent “a whole new career for lawyers.”

The conference — its name inspired by Isaac Asimov’s 1950 book, “I, Robot” — gathered experts on the front lines of robot theory, design and development and those who influence the legal and social structures in which robots operate.

“Robotics is like the genie in the bottle,” said Brigadier General Richard M. O’Meara, a professor of International Law in the Division of Global and Homeland Security Affairs at Rutgers University. He predicted that autonomous robots will soon “be able to pull the trigger” without input or intervention from humans, and that there will be a “temptation to stretch the rules of war just because robotics enable us to do it.”

But robots have “multiple benefits,” Gen. O’Meara wrote in a paper he presented at the conference. “For one thing, they permit militaries to operate with fewer soldiers. As manpower pools for military recruitment shrink, it is expedient to substitute machines for soldiers in order to maintain military advantage. Second, robots are politically convenient. The 21st century, especially in liberal democracies like the United States, exhibits a distaste for large standing armies and casualties. Robots, like private contractors, are not counted in national casualty reports, nor are their wounds the subject of debate or scrutiny. Third, robots cost a good deal less than human combatants.”

Robots are not burdened by human frailty, and cost less.

Furthermore, the general went on, robots “don’t carry with them the baggage of human frailty.” He quoted a member of the Pentagon’s Joint Forces Command as saying that robots not only don’t get hungry, but “they’re not afraid; they don’t forget their orders; they don’t care if the guy next to them has just been shot. Will they do a better job than humans? Yes.”

Already, unmanned systems and machines are in high demand by combatant commanders for their “versatility and persistence,” Gen. O’Meara reported. He wrote in his paper that by performing tasks such as surveillance, signals intelligence, precision target designation, mine detection, and chemical, biological, radiological and nuclear reconnaissance, “unmanned systems have made key contributions to the global war on terror.” Since October 2008, he said, unmanned aircraft have flown approximately 500,000 flight hours in Iraq and Afghanistan, while robotic bomb detectors have found at least 15,000 so-called improvised explosive devices. In addition, remote-controlled vessels clear mines and provide security to ports and ships.

In 2010, more than 6,000 military robots were sold around the world, in addition to almost 140,000 industrial robots and 2.2 million service and domestic robots, according to statistics quoted at the We Robot conference.

“As robotics gets more and more sophisticated, they will take up potentially lethal but non-combat operations like patrolling camp perimeters or no-fly areas, and open fire only when ‘provoked,’” Gen. O’Meara wrote. “The final stage will be when robotic weapons are an integral part of the battlefield, just like ‘normal’ human-controlled machines are today, and make autonomous or near-autonomous decisions.”

But the field of military robotics has grown so fast that there has been little time in which to “consider the legal, ethical, and moral appropriateness” of its use. He suggested “creating new international treaties and practices, amending old ones, and forging new ethics for the use of new weapons.”

Others at the conference supported that view. “There’s no chance of banning autonomous weapons systems, but let’s at least talk about the rules,” said Markus Wagner, an Associate Professor at Miami Law who studies the use of technology in armed conflicts. In a paper that he presented at the conference, he said care should be taken “to ensure, with a reasonable degree of certainty, compliance with international legal rules applicable in armed conflict.”


In a discussion about his paper, Professor Wagner said humans make the wrong decision 10 percent of the time, “so if robots are better, the argument goes, then we should use robots.” When political and military leaders consider whether to wage war, or how to wage it, he said, the decision is easier “when no one dies anymore.”

On occasion, the fact that it is “easier” to pull the trigger if the target is far away can lead to terrible errors. Professor Wagner cited the downing of a civilian Iran Air flight by missiles from the U.S.S. Vincennes in 1988, based solely on relayed information that it was a hostile craft. All 290 people aboard were killed.

Nevertheless, he went on, the increasing use of technologically remote warfare “creates a psychological distance” that is crucial to the Pentagon’s effort to “train out the element of hesitation” inherent in human decision-making.

“As distance increases, it becomes psychologically easier to commit an act which an individual would not otherwise be willing to do,” Professor Wagner wrote in his paper, referring as an example to the firebombing of European cities during World War II. “Humans have a reluctance to kill one another, and the physical as well as psychological distance that long-range weapon systems have brought about, combined with the technological nature of today’s combat operations, where the direct impact of an individual’s action is increasingly less visible, has circumvented this innate reluctance.”

But relying on autonomous systems, he went on, can create a perception that there is little or no risk for those who no longer have to order troops to fight. “It has the potential of lowering the costs for political decision-makers to engage in armed conflict, given that the political calculus would not have to take into account the number of fallen soldiers,” Professor Wagner wrote.

Not only that, but if remotely operated robots cannot be held responsible for war crimes, there will be “strong incentives” for states to use such weapons in battle, said Oren Gross, Director of the Institute for International Legal and Security Studies at the University of Minnesota Law School.

But it may not be so easy to avoid liability for criminal acts even if the weapons are “truly independent,” Professor Gross said in the paper he presented at the We Robot conference. Someone, somewhere, will still have ultimate power over how the robots are used, and “degrees of control will be relevant to determination of criminal liability.” However, if programmed correctly, robotic weapons are far less likely to violate laws than any human being, Professor Gross went on, because the machine would be capable of “assessing the relevant facts with greater speed and accuracy, and is unlikely to be moved by negative emotions such as fear, panic or hatred.”

That point was seconded by Dr. Ian Kerr, Canada Research Chair in Ethics, Law and Technology at the University of Ottawa Faculty of Law. In a paper he presented with Katie Szilagyi, a law student at Ottawa who majored in biosystems engineering at the University of Manitoba, Professor Kerr said so-called robo-soldiers “will outperform human soldiers physically, emotionally and ethically.”

“Robots are not vulnerable to the perils that plague humans on the battlefield: exhaustion, elevated emotions, or the need to seek retribution for the death of a comrade,” Professor Kerr and Szilagyi wrote. They added that “advanced sensory capabilities” will permit robots to cut through the fog of war, reducing confusion, friendly fire incidents and other “erroneous responses.” “With Asimovian aspiration,” they wrote, “we are able to program robo-soldiers to be fundamentally decent. This, we are told, will reinforce and enhance international humanitarianism and reduce injustice in armed conflict.”

Not everyone at the conference was convinced. AJung Moon, a Ph.D. candidate in mechanical engineering at the University of British Columbia, who co-wrote a paper on “open roboethics,” said she found it hard “to implement the ideas of ethics into the design of robots.” Not only that, but Moon admitted being unnerved by the notion of robots “that could automatically fire and hurt people.” She told conference attendees that when she heard that South Korea employs sentry robots along its border with North Korea, “it gave me a sense of unease — it kind of scared me.”

Photo by Catharine Skipp, Miami Law Magazine

Amir R. Rahmani, Assistant Professor of Mechanical and Aerospace Engineering at the University of Miami, displays an ArduCopter, a micro aerial vehicle that uses GPS navigation and has lift capacity for a small payload.




University of Miami School of Law, 1311 Miller Drive, Coral Gables, Florida 33146


UNIVERSITY OF MIAMI SCHOOL OF LAW www.law.miami.edu

