
Threat of Robotic Use in Warfare

Akihiro Eguchi


            The use of robots in warfare has been increasing rapidly around the world. In Iraq, incidents in which unmanned US Predator drones fired missiles and accidentally killed civilians have been frequently reported (Goodman par.1). Robotic technology, once a human dream, has been distorted into a weapon for taking human lives. The problem is no longer negligible; it has become a threat. While the technology offers many advantages, such as increased mobility and reduced danger to soldiers, it creates many problems at the same time. Now is the time for people to stop and reconsider the use of this new technology. People need to address the worldwide epidemic of technology abuse by discussing ethical concerns, creating new standards, and finding alternative uses for the technology.

            Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield, believes automated killer robots “pose a threat to humanity” (Hood par.2). As robotic technology has improved, as many as 12,000 intelligent machines have been dispatched to battlefields by the U.S. alone. Sharkey explains that the technologies have already become so advanced that they can “identify and lock onto targets without human help” (par.3). He also points out that the real intention of military leaders is to create autonomous robots, which would eliminate risk to their own soldiers (par.8). Because there are neither well-developed ethics nor rigid international limitations on the technology, it is becoming a threat to humans.

            Because the technology has developed so rapidly that people cannot keep up, ethics for the new age have not yet been well established; therefore, one solution to this problem is to discuss and create clear ethical guidelines for the matter. According to Robert J. Sawyer in Science, “the notion of killer robots is a mainstay of science fiction,” and he suggests looking for solutions to this issue in works of fiction such as Isaac Asimov's I, Robot (par.2). In Defense Review Magazine, Tony Rogers laments that “[t]he US Army is deploying armed robots in Iraq that are capable of breaking Asimov’s first law that they should not harm a human” (Rogers par.1). Isaac Asimov was a science fiction writer who formulated the Three Laws of Robotics in 1942, which have “become part of the cult and culture of modern robotics” (Angelo 103). Asimov states:

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm. […] 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. […] 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. (Asimov 37)

Sawyer thinks the laws were never applied in reality “because these laws do not equip robots to tackle real-world situations” (par.2). However, he introduces some governments' attempts to draft their own versions of Asimov's laws, such as the Robot Ethics Charter developed by South Korea's government. Although robotics in warfare may seem a brand-new issue, the same questions have been discussed in fiction for a long time, and Sawyer emphasizes that “science fiction may be our guide as we sort out what laws” (par.5). Therefore, it would be beneficial to use science fiction as a template for creating such guidelines.

Similarly, some people, such as Noel Sharkey and P.W. Singer, a Senior Fellow at the Brookings Institution, point out the importance of human responsibility in ethics. Sharkey, cited in the article "Ethics Guidelines Needed for Robots," thinks robot ethics guidelines are needed right away because of a lack of human responsibility (par.1). While many people think about the ethics of the robots themselves, Sharkey notes that robots do not yet think for themselves as they do in fiction, so ethics for humans are needed now (“Ethics guidelines” par.17). Singer likewise emphasizes human responsibility when asked “What happens if a robot commits a massacre?” in his interview with Amy Goodman. He points out the lack of clarity about who would bear responsibility, recounting his own interview with a scientist at the Pentagon: even if a robot kills the wrong person, “it’s just a product recall issue” (Goodman par.27). Singer questions this response; he thinks now is the time for people to discuss human ethics regarding the use of robotic technology, “otherwise, we’re going to make the very same mistake that a past generation did with atomic bombs” (Goodman par.29). Therefore, it is also important to clearly state human responsibility in the ethical guidelines, which would give people a chance to question the acceptability of robotic use in warfare.

            Setting international limits on the use of the technology in warfare can be another good solution to robotic abuses. Singer uses the term ‘robotics revolution’ to describe the problem and asserts that this revolution will change the entire nature of war (Goodman par.51); therefore, he insists on an international agreement on the issue. Singer points to the use of battlefield drones controlled by pilots thousands of miles away, believing that this distance will cause wars to seem unreal. He also tells the story of an 18-year-old high school dropout who, as a big fan of video games, became one of the best drone pilots; Singer criticizes how the war is treated as though it were a game. While Singer finds a similarity between military robots and the atomic bomb of World War II, in that both lacked a clear agreement governing their use, he notes an important difference: “unlike with atomic weapons, robotics isn’t being worked on in secret desert labs that no one knows about” (Goodman par.23). Therefore, he believes it is time to set a new international agreement that rigidly limits the use of robotics in war before it changes the meaning of war.

Moreover, some people, like Sharkey, even suggest banning robotic use in wars, citing the technology's possible threats. Sharkey argues that "we have to say where we want to draw the line and what we want to do” and demands an international agreement (Hood par.27). According to Sharkey, although humans currently make the final decision for robots to attack, there is a real possibility that robots armed with powerful weapons can harm humans (Hood par.3). In 2008, remote-operated SWORDS robots were pulled off the battlefield after a combat bot locked its machine gun on a soldier on its own side and, through an error, nearly fired (Page par.3). This incident shows how machines can become a threat to humans through unpredictable errors. Moreover, Sharkey emphasizes the peril of terrorism through technology spillover: he warns that once a robot is captured by terrorists and reprogrammed, it can easily become an autonomous killing machine (Hood par.14). Therefore, he even says that “the best solution may be an outright ban on autonomous weapons systems” (Hood par.27). His argument seems more reasonable than Singer's because if there are no weapons, nobody can use them; in fact, not creating this kind of robot might be one of the best solutions.

It is undeniable that many scientific technologies have been developed for warfare; therefore, some people might object to limits on robot technology. However, it is important to realize that technological development does not always have to serve military purposes; there are many opportunities for peaceful use as well. Thus, shifting the purpose of creating robots from military to peaceful use can be a very effective solution. The article "Yamaha's RMAX - The Worlds Most Advanced Non-Military UAV (Unmanned Aerial Vehicle)" introduces a utility-use unmanned helicopter, the Yamaha RMAX, “for crop dusting that could help reduce labour and costs in Japan's labour-strapped rice farming industry” and calls for governmental support for non-military use (par.5). The term UAV achieved name recognition through the Iraq war because of governmental support for improving the technology for warfare. According to the Department of Defense's Unmanned Systems Roadmap 2007-2032, cited by Hood, “Washington plans to spend four billion dollars by 2010 on unmanned technology systems, with total spending expected [to] rise to 24 billion” (par.11). This explains why the technology has improved so quickly in recent years, but it also implies different possibilities for its use. While many countries invest in military technology, according to the news website Gizmag, “the RMAX is the finest example of a commercial autonomous aerial vehicle available, with about 2,000 industrial-use unmanned helicopters used for crop-dusting in Japan,” a program supported by an external branch of the Japanese government (par.4). This example shows that people do not have to develop robotic technology for military use; they can pursue peaceful uses instead if they have governmental support. Therefore, it is important for governments to choose the right way.

Also, according to the article "Handicapped May Walk in Robot Suit," Yoshiyuki Sankai, a University of Tsukuba professor, strongly believes that robotic technology should be created to help people. Seeking peaceful uses of highly developed robotics, Sankai created HAL (hybrid assistive limb) to assist physically weak people. In the same article, Sharkey praises these developments, saying they will have wide-ranging benefits for the elderly and others with movement disabilities. Sankai believes the most important thing scientists must not forget is that “technology becomes useful only when it works for people" (par.8). Therefore, he firmly refuses "any possible military use of my robot suits" (par.9). If all people held beliefs similar to his, it would help convert the technology of warfare to peaceful use.

Among rapidly developing technologies, robotic use in warfare is becoming one of the biggest issues. Without deep discussion of the issue, the technology could become a threat to humans. It is time to discuss and mature the ethics of this matter in reality, not only in fiction, and international agreements should be established. Although many technologies were developed for warfare, it is important to understand that they can also be developed for peaceful use, given governmental support and a strong belief in the common good. Nobody wants a "threat to humanity." We need to solve this problem as soon as possible, before it is too late.

Works Cited

  • Angelo, Joseph A. Robotics: A Reference Guide to the New Technology. Colorado: Libraries Unlimited, 2007.
  • Asimov, Isaac. I, Robot. New York: Bantam Books, 1991.
  • "Ethics Guidelines Needed for Robots." COSMOS. 19 Dec. 2008. Luna Media Pty Ltd. 9 Apr. 2009.
  • Goodman, Amy. "Wired for War: The Robotics Revolution and Conflict in the 21st Century (P.W. Singer)." Democracy Now! 6 Feb. 2009. 9 Apr. 2009.
  • "Handicapped May Walk in Robot Suit." The Standard. 21 Oct. 2008. The Standard Newspapers Publishing Ltd. 9 Apr. 2009.
  • Hood, Marlowe. "Killer Robots 'a Threat to Humanity.'" The StarPhoenix. 5 Feb. 2009. Canwest News Service. 9 Apr. 2009.
  • Page, Lewis. "US War Robots in Iraq 'Turned Guns' on Fleshy Comrades." The Register. 11 Apr. 2008. Situation Publishing Ltd. 16 Apr. 2009.
  • Rogers, Tony. "New Military Robots Violate Isaac Asimov's First Law." Defense Review Magazine. 16 Mar. 2006. 15 Apr. 2009.
  • Sawyer, Robert J. "Robot Ethics." Science. Nov. 2007. AAAS. 15 Apr. 2009.
  • "Yamaha's RMAX - The Worlds Most Advanced Non-Military UAV." Gizmag. 19 Nov. 2004. 9 Apr. 2009.