The Future of Just War


CHAPTER SIX
From Smart to Autonomous Weapons

Confounding Territoriality and Moral Agency

Brent J. Steele and Eric A. Heinze

ADVANCES IN MILITARY TECHNOLOGY today are frequently described in terms of the extent to which they remove the soldier from the battlefield and increase the precision of the application of force, thereby reducing the costs and suffering associated with waging war. This capability has been further enhanced by the well-documented use of armed unmanned aerial vehicles (UAVS) by the United States in the “global war on terror.”1 The use of unmanned and increasingly autonomous weapons systems, according to some observers, will inevitably lead to autonomous robots being deployed on the battlefield and entrusted with decisions about target identification and destruction.2

This chapter examines how these advances in military technology confound two fundamental concepts that are critical to making sense of the Just War tradition—those of territoriality (or spatiality) and moral agency—which we argue has implications for important principles within jus ad bellum, jus in bello, and jus post bellum. Regarding territoriality, we observe that the use of remote weapons in general, and specifically the use of armed UAVS, make the notion of Just War highly problematic because they can be deployed quietly and unofficially, anywhere and everywhere, at any time. In terms of moral agency, we argue that the use of increasingly autonomous weapons substantially frustrates the ability to hold an agent responsible for transgressions of jus in bello rules, most notably noncombatant immunity (or discrimination) and proportionality. Furthermore, we contend that the intrinsic ambiguity of morality does not lend itself well to the sorts of predetermined rules that would presumably be the basis for programming autonomous weapons systems to behave “ethically.”

THE CENTRALITY OF TERRITORIALITY AND MORAL AGENCY TO THE JUST WAR TRADITION

Territoriality

Territory has played an important role in conflict and war. We often think of territory as the end goal of conflict, not as a constraint on it, but it can be both. For instance, sovereign “spaces” can be thought of as constraining the actions and activities of warfare. One only needs to reflect on the controversy engendered when conflict spills over—deliberately at times—into neutral sovereign states.3 Thus, sovereignty, especially when connected to territory, serves as an organizing principle of international society.4

Even so, territory’s role within Just War debates has been, as John Williams remarked in a recent study, “inadequately” engaged.5 John H. Herz’s observation, made over a half century ago, about the “demise of the territorial state” provides us a good starting point toward understanding the problematic delinking of territory from authority and conflict.6 Herz discusses how in the era of the territorial state, the practice of war “itself … was of such a nature as to maintain at least the principle of territoriality.”7 Herz identifies two important phenomena derived from territoriality—legitimacy and nationalism—which “permitted the system to become more stable than might otherwise have been the case.”8 Legitimacy “implied that the dynasties ruling the territorial states of old Europe mutually recognized each other as rightful sovereigns. Depriving one sovereign of his rights by force could not but appear to destroy the very principle on which the rights of all of them rested.” Nationalism “personalized” these territories and “made it appear as abhorrent to deprive a sovereign nation of its independence.”9

These phenomena were transformed beginning with, again, the changing practices of warfare, sometime in the nineteenth century and beyond. Among these, Herz notes, the two most important and interrelated were air warfare and atomic warfare. Air warfare’s “effect was due to strategic action in the hinterland rather than to tactical use at the front. It came at least close to defeating one side by direct action against the ‘soft’ interior of the country, by-passing outer defenses and thus foreshadowing the end of the frontier—that is, the demise of the traditional impermeability of even the militarily most powerful states.”10 And with atomic warfare, Herz finds that the transformation is even more radical: whereas even in the interwar period power could be seen as something “radiating from one center and growing less with distance from that center,” by the 1950s “power can destroy power from center to center”; thus, “everything is different.”11

One might read Herz’s declaration on the “demise” of the territorial state as simply a cataloguing of the changes technology makes possible, both in the practice of war and in the understanding of sovereignty—a process defined more by continuity (technological change and practice adaptation) than by jagged discontinuity or “revolutionary” moments. In this reading, UAVS can be seen as just one more technological change in the practice of war that needs to be taken into consideration within the Just War tradition.

However, we prefer to springboard from Herz’s thesis to suggest that the notion of territoriality,12 while not eliminating the nation-state per se, contained a constraining effect on conflict that was itself “destroyed” by the emerging “nonterritorial” practices of warfare. The point we wish to make is that the further we get from the territorial notions of sovereignty—or put another way, the more those notions are radically confounded—the more difficult it will be for international society to come to some interpretive (not to mention legal) agreement on legitimate practices of warfare.13 If there is no official “termination” of war, if it becomes perpetual, then it is difficult to connect such conflict to the original “right intention” (jus ad bellum) of an initiated war. Further, by invoking the right to both fly UAVS and deploy force from them, the offensive parties are tacitly compromising the “just authority” of the sovereign states whose spaces are being violated. We discuss this below.

Moral Agency

Another concept that provides much of the moral substance of Just War thinking is that of moral agency, which we understand to be the idea that actors (normally human beings) are capable of behaving in accordance with the precepts of morality, have the ability to make moral choices autonomously, and are considered responsible for the moral choices they make.14 Moral agency thus entails autonomy, which means that agents act independently of the will of others and that their actions originate in them and reflect their ends. It also entails intention, which means that the actors meant to achieve the ends that came about from their actions. It follows, then, that agents can be held individually morally responsible for the outcomes of their actions, to the extent that the outcome was intended by the agents and was the result of acting autonomously.

Our account of moral agency, then, is associated primarily with individuals, although we recognize that there are cogent and convincing accounts of collective moral agency that assign moral responsibility to, for example, social and political institutions.15 Just War theory certainly recognizes the existence of collective moral agency in the ad bellum and in bello distinction, wherein the former holds institutions (“governments”) responsible and the latter holds individuals responsible. It is also the case that institutions are at least in part constituted by individuals, without which it is questionable whether they could truly be held morally responsible for their conduct. Yet we contend that collective moral agency obscures individual moral responsibility—a feature highlighted in the Nuremberg proceedings and subsequent war crimes tribunals—which is why in bello concerns focus on individuals, so as not to let their individual immoral conduct go unpunished because they were acting as part of a broader collective war effort. Thus, since our concern in this context is one of jus in bello, we proceed with the standard account of moral agency centered on individuals.

Moral agency, thus stated, is required to hold individuals responsible for their conduct in times of war. To the extent that a goal of the Just War tradition is to subject the conduct of war (jus in bello) to moral rules, an important precondition to this is the ability to single out the actions of individuals in war for either moral praise or blame. Thus, the concept of moral agency provides a basis to identify individuals and hold them responsible for potentially having violated Just War principles. As Walzer succinctly puts it, “the theory of justice should point us to the men and women from whom we can rightly demand an accounting, and it should shape and control the judgments we make of the excuses they offer.”16 Indeed, the entire enterprise of Just War theory is undermined if we do not have some conception of moral agency as a basis for assigning responsibility for moral transgressions.

Furthermore, the ability to intend to achieve certain ends from one’s actions is even constitutive of certain Just War principles. The principle of “right intention,” for instance, stipulates that an actor must have the proper subjective intention, or state of mind, for an act to be moral. Thus, an actor’s intentions and state of mind matter in our moral evaluation of his or her action, and this is possible only if the actor in question is a moral agent. In addition, an actor’s intentions are literally what separate war crimes from mere accidents in the context of jus in bello. The Double Effect doctrine presumes that noncombatant casualties are permissible (1) if they are unintended and (2) if reasonable precautions are taken by the offensive party to minimize harm.17 We would generally consider it a far more severe moral transgression if a commander knowingly and deliberately ordered an attack on noncombatants than if the commander genuinely believed that he or she was attacking a legitimate military target. Only the actions of someone with moral agency can be appraised on such a basis.

CONFOUNDING TERRITORY: PERPETUAL “WAR” IN TIME AND SPACE

The difficulties involved in coming to a consensus about what are or are not the legitimate practices of warfare become especially visible when we catalogue the particular “confoundings” of territory that UAVS make possible. Consider that UAVS have been used most frequently, and recently, in the mountainous “AfPak” region of Afghanistan and Pakistan and thus routinely compromise Pakistan’s sovereignty. These uses seem, on their face, both pragmatic and legitimate—pragmatic because of the global, transnational, and “de-territorialized” nature of al Qaeda, and legitimate because of a “hot pursuit” agreement made in January 2003 between Pakistan and the United States.18

And yet two wrinkles emerge more recently with the counterterrorism policies of especially the United States in the region. First, by claiming the right and even necessity of intervening with special forces, missiles, and UAVS, the United States is tacitly asserting that Pakistan is unwilling or unable (because of geography or national politics, or both) to practice its own sovereignty by rooting out members of al Qaeda and the Taliban—and, inversely, the United States is therefore invoking such space as within its own authoritative purview—as more than just a “right authority.” And yet, secondly, even the United States recognizes that in certain cases, and especially if things go wrong in an operation, it is violating Pakistan’s sovereignty by carrying out its attacks, as President Obama recently admitted regarding the “Operation Geronimo” mission that killed Osama bin Laden.19

UAVS have been increasingly deployed to the “soft shell” (to borrow Herz’s term) areas of sovereign countries that are (at least as of this writing) not hostile to the United States. They have, furthermore, reinforced that which covert operations began: the possibility of endless war, temporally and spatially.20 Any point in space is fair game, and at any moment—the operational space for battle is anywhere and everywhere, and at any time. This limitless war is facilitated even further by the increasingly microtized (smaller with respect to both time and space) nature of UAVS and their targets. The targeting of terrorists depends not just on locating the area of the terrorists, nor even their hideout, but on identifying the terrorists themselves. With recognition capacities on the drones, the “space” for UAV targeting transfers from a compound or safe house of terrorists to their faces.

Such targeting thus brings us to a second confounding of space made possible by UAVS and their visual acuity—the removal of the “fog of war” in military operations. In a somewhat ironic way, the pilot of such missions has never been simultaneously further (in space) and closer (visually) to the target.21 This is similar to but also radically different from what Hans Morgenthau, in delineating the end for any chances of an international morality emerging in a context of modern war, called the development of “push-button war.” Morgenthau described “push-button war” as being “anonymously fought by people who have never seen their enemy alive or dead and who will never know whom they have killed.”22 In essence, the visual acuity of the UAV provides the pilot a perspective where, in the words of Lauren Wilcox, “such deaths are less like combat deaths, and more like executions viewed at close range…. The images show people moving around who seem unsuspecting…. The advanced technological killing capabilities that these drones represent can not only bring death in an instant, from an unseen source, but can make this death visible to the operator and an audience of millions. Visibility in [this] instance functions less like a panopticon, and more like a public execution.”23

Thus, in these situations the pilot confronts something much more akin to Walzer’s examples of “naked soldiers,” where the “shooter” faces “deep psychological uneasiness about killing.”24 This confounding of space, then, may explain why some of these pilots, even in their “sterile” environments, “suffer from combat stress that equals, or exceeds, that of pilots in the battlefield.”25

Yet this development illustrates something even more profound that dovetails with what Sebastian Kaempf, in his contribution to the current volume, calls a connection of “humanity” in postmodern, asymmetric warfare. This connection was absent in previous asymmetric engagements, where a technologically superior power could and would battle an inferior in a “risk-free manner,” but also in a way that was free of legal or moral obligations, because such a foe was deemed outside of a group of “civilized nations.” And yet, as Kaempf also demonstrates, and as we develop below, while the postmodern context is defined by more universal legal and moral codes, those codes are “undermined” by the “mode of warfare” represented by UAVS. Kaempf notes that since “asymmetric warfare increasingly enables the U.S. military to kill without facing the risk of death in return, then the U.S. military can no longer draw on existing moral and legal frameworks to justify the killing of enemy soldiers.”26 Put another way, there are costs to the U.S. warrior’s gains in autonomy and safety in UAVS.

CONFOUNDING MORAL AGENCY: DISCRIMINATION, PROPORTIONALITY, AND THE INDETERMINACY OF MORALITY

The arsenal of remote and unmanned weapons systems—both deployed and in development—can be understood on a sliding scale of autonomy from their human operators. On one end of the scale are those systems such as remote-piloted UAVS, wherein a human pilot makes the decision on when and where to deploy deadly violence (even though other aspects of UAV missions, such as taking off, locating enemy targets, landing, etc., are undertaken autonomously). Such systems are relatively unproblematic for assigning responsibility for transgressions of the laws of war, and we would normally locate such responsibility with the pilot in the same way we would with manned aircraft. Further down the scale are those systems wherein targets are identified by a machine, and then the decision on whether to fire is left to the human operator. Such systems include targeting systems, such as the Aegis Combat System, which are capable of identifying enemy targets by their radar or acoustic signatures and presenting this information to a human operator who then decides whether to “trust” the system and fire on the target.27 At the far end of the scale, then, are those systems that would be entrusted to identify, as well as destroy, enemy targets without input from a human operator.
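
The scale can be summarized schematically. The sketch below is purely illustrative: the level names and the responsibility descriptions are shorthand introduced here, not terms of art from the weapons literature; it simply restates where responsibility is conventionally presumed to lie at each point on the scale.

```python
from enum import Enum, auto

class AutonomyLevel(Enum):
    """Illustrative labels for the three points on the sliding scale described above."""
    REMOTE_PILOTED = auto()     # a human pilot decides when and where to deploy deadly violence
    HUMAN_CONFIRMED = auto()    # a machine identifies targets; a human decides whether to fire
    FULLY_AUTONOMOUS = auto()   # a machine identifies and destroys targets without human input

def presumptive_locus_of_responsibility(level: AutonomyLevel) -> str:
    """Where responsibility is conventionally presumed to lie at each level."""
    if level is AutonomyLevel.REMOTE_PILOTED:
        return "the pilot, much as with a manned aircraft"
    if level is AutonomyLevel.HUMAN_CONFIRMED:
        return "the human operator, buffered by trust in the machine's identification"
    return "diffuse: programmers, manufacturers, commanders -- or no identifiable agent"

if __name__ == "__main__":
    for level in AutonomyLevel:
        print(f"{level.name}: {presumptive_locus_of_responsibility(level)}")
```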

These latter two kinds of systems are far more problematic, as the following examples illustrate. First consider Iran Air Flight 655, discussed by P. W. Singer in his book Wired for War, which was an Iranian commercial passenger jet shot down in 1988 by a U.S. naval vessel that used the Aegis targeting computer, which mistakenly identified the passenger jet as an Iranian F-14 fighter.28 In this case, despite other data indicating the aircraft was not a fighter jet, the crew trusted the “judgment” of the computer more than their own and authorized it to fire, thus killing almost three hundred civilians and committing a transgression of the discrimination principle. But who exactly is responsible for this mistake? Normally, we might say the human operators, though surely with the mitigating circumstance of this being a case of mistaken identity, and thus at least partially excusable. But what is interesting about this situation is that it was the targeting computer that misidentified the target, and the mistake the human operators made was to trust the computer’s judgment over their own. Whereas without this system the human operators might have been culpable for not taking reasonable precautions to ensure that their target was an enemy aircraft, by using these systems and increasingly relying upon them to the detriment of their own decisions, the use of this technology becomes itself the “reasonable precaution” and thus provides a moral buffer between the human operators and their actions. This, in turn, allows those who use such systems “to tell themselves that the machine made the decision,”29 thus relieving themselves of feeling morally responsible even though it is they, and not the targeting computer, who are the moral agents in this example.

Yet in considering what reasonable precautions combatants must take to minimize risks to civilians, Walzer reminds us that “[w]hat we look for in such cases is some sign of a positive commitment to save civilian lives.”30 If indeed part of this doctrine assumes, as Walzer suggests, that the attacking party must actually put itself in more danger to reduce the vulnerability of noncombatants, then UAVS are even more problematic. In order to make such a judgment, one must locate and decide whose intentions matter here, for as Jane Mayer notes when it comes to UAVS and their use by the CIA, “there is no visible system of accountability in place,” and “the White House has delegated trigger authority to CIA officials.”31 In essence, the intent (turning an expectation into an action) surrounding a UAV’s use is diffused through a variety of supporting actors ranging from the CIA, the U.S. president, and the head of the Counter-Terrorist Center to, in an even more radical sense, computer programs.32 As Mayer describes, “if a school, hospital, or mosque is within the likely blast radius of a missile,” then the calculations for estimating civilian casualties with a UAV are all “weighed by a computer algorithm before a lethal strike is authorized.”33
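
To make concrete how intent diffuses into parameters fixed long before any particular strike, consider a deliberately simplified sketch of the kind of pre-strike check Mayer describes. Every name, category, and threshold below is hypothetical; nothing here describes an actual targeting system.

```python
from dataclasses import dataclass

@dataclass
class ProtectedSite:
    kind: str          # e.g., "school", "hospital", "mosque"
    distance_m: float  # distance from the aim point, in meters

def strike_flagged_for_review(blast_radius_m: float,
                              nearby_sites: list[ProtectedSite]) -> bool:
    """Return True if any protected site falls inside the likely blast radius.

    The "decision" embodied here was made by whoever chose the radius model and
    the list of protected categories, long before any particular strike.
    """
    return any(site.distance_m <= blast_radius_m for site in nearby_sites)

if __name__ == "__main__":
    sites = [ProtectedSite("school", 140.0), ProtectedSite("mosque", 420.0)]
    print(strike_flagged_for_review(blast_radius_m=150.0, nearby_sites=sites))  # True
```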

The situation with fully autonomous systems becomes even more problematic. Consider if the Iran Air tragedy had occurred with the targeting computer being entrusted to identify and destroy enemy targets without human authorization. To the extent that we consider it important to hold some moral agent(s) responsible for this tragedy, even if it was an “honest” accident, it would seem exceedingly difficult, perhaps impossible, to identify one in this case. If a weapon is truly autonomous, which means that it chooses and destroys its own targets without human input, this implies that its orders do not necessarily determine its actions, even if they obviously influence them.34 This means that the more autonomous a system is, the more it has the capacity to “choose” a course of action that differs from how it was intended to act by both its programmers and those who ordered its use.35 At some point, the designers and manufacturers of such systems, as well as the officers who ordered their deployment, can no longer control or predict the actions of the system, thus making it difficult to hold them morally responsible. Unless we are willing to imbue a machine with moral agency and hold it morally responsible, which would seem to us to be even more problematic (i.e., how would one “punish” a machine), then there may be no moral agent to hold responsible for potential war crimes, and all civilian casualties become ipso facto “excused” as unintended collateral damage.36

Let us be clear: the point here is not that UAVS, when “malfunctioning,” represent a new problem for the ethics and practices within war, although such malfunctions have indeed occurred with UAV use.37 Malfunctioning technologies have been a problem for centuries of warfare. The point is, rather, that the location of “intent” for targeting with increasingly autonomous weapons becomes both diffused and confused. Indeed, the very possibility of intent becomes an almost absurd notion when referring to machines, particularly autonomous ones. Even if it were possible (and desirable) to locate moral responsibility with a machine, ascertaining its “intentions” to determine whether it committed a crime may not be possible. Computers may be said to have “intentions” in terms of their functionality, which is to say that they can act purposefully in that their only “intention” is to carry out what they have been programmed to do, subject to a set of rules.38 But to say that someone (or something) intended to do something is to say that its actions originated with it and reflected its ends, which the agent itself has “chosen” (in some sense) because of its ability to reason and on the basis of past experience.39 However, granting that a machine can be autonomous in the sense that it may interpret the parameters of its programming in unpredictable ways, the source of its intentions (analogously, its “reason” and “past experience”) is human programming. Thus, paradoxically, a weapons system that “chooses” its own targets and then “decides” to destroy them is at the same time an autonomous agent yet lacks true intention. It could, say, mistake a group of civilians for combatants or make a dubious proportionality calculation, but we can never understand its “reasons” for doing so—the essence of intentions—apart from its programming parameters. As Harry Gould suggests in his chapter in this volume, much of the debate over the moral significance of intentions is about competing conceptions of agency.40

These examples suggest issues that may be symptomatic of a larger problem with using autonomous weapons systems in accordance with the precepts of Just War doctrine, which is that moral behavior in warfare may not be achievable through the delineation of rules in a computer program because of the intrinsic ambiguity in morality itself. It has been claimed that autonomous weapons systems are capable of performing more ethically on the battlefield than human soldiers.41 Autonomous robots can integrate more information faster and more accurately than humans, they have high-tech sensors to make observations that humans cannot, and they do not suffer from fatigue or emotions that might impede a human soldier on the battlefield. Thus, the solution, on this view, is simply to program these systems, using extremely precise and clear commands, to behave ethically and in accordance with the laws of war, and to allow the robot’s mechanical determinism to follow these rules, resulting in ethical behavior. Whatever behavioral problems ensue, therefore, would be because of the ambiguity of the prior rules, which could be resolved by a continual refining and more precise specification of, for example, who is a combatant.
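
What this rule-following picture assumes can be made explicit in a short, deliberately naive sketch. The function names, categories, and thresholds below are invented for illustration only; the point is that each rule presupposes exactly the categories ("combatant," acceptable collateral harm) whose ambiguity is discussed next.

```python
def is_combatant(track: dict) -> bool:
    # Assumed classifier output; in practice this is exactly where the perceptual
    # and normative ambiguity discussed below resides.
    return track.get("classified_as") == "combatant"

def proportionate(expected_civilian_harm: int, military_value: int) -> bool:
    # A stand-in rule: engagement is allowed only if estimated harm does not
    # exceed an arbitrary fraction of the target's assigned "value".
    return expected_civilian_harm <= military_value // 2

def may_engage(track: dict) -> bool:
    """Discrimination first, then proportionality, applied as fixed rules."""
    if not is_combatant(track):
        return False
    return proportionate(track.get("expected_civilian_harm", 0),
                         track.get("military_value", 0))

if __name__ == "__main__":
    track = {"classified_as": "combatant", "expected_civilian_harm": 1, "military_value": 4}
    print(may_engage(track))  # True under these arbitrary rules
```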

However, as John Kaag and Whitley Kaufman argue, moral judgment is inherently ambiguous, controversial, and not reducible to a set of rules, and if it were, “it is likely that we would have discovered many or most of these rules long ago.”42 One could even argue that morality is more ambiguous in times of war than in ordinary life. Consider a robot programmed with precise instructions to distinguish between civilians and combatants and only engage the latter. Right away one runs into the problem of specifying “civilianness,” as the laws of war are extremely ambiguous on the concept of combatancy, and there is almost always room for moral choice within these rules. The first problem is perceptual ambiguity. Computer scientist Noel Sharkey has argued that even the most sophisticated robots would not be able to tell, for example, “whether a woman is pregnant or whether she is carrying explosives,” whereas a human soldier would simply use the skill of common sense.43 Another problem, relating to the indeterminacy of morality, is that sometimes moral behavior requires that one make certain exceptions to the rules. For example, forward observers who provide intelligence required to target enemy forces are clearly taking direct part in hostilities and may be legitimately killed. Yet a robot, programmed as such, making this calculation would presumably not make an exception for the possibility that some armed groups force civilians to engage in this practice against their will. Such was the situation during the Iraq War when the Mahdi Militia used a child for such purposes and U.S. forces declined to shoot the child on moral grounds, despite this being perfectly permissible under the laws of war.44 A robot is only capable of distinguishing combatants and civilians in the empirical sense, not the normative sense.

WHITHER JUST WAR?

This chapter points out, via the concepts of territory and moral agency, that UAVS and autonomous weapons systems pose distinctive problems for the Just War tradition. Like the rest of the contributors to this volume, we have been asked to chart a constructive path forward for the Just War tradition in light of the concerns raised in this chapter. While we concede that the practice of UAVS within war (per se) may ultimately be handled within the framework of Just War, we maintain that their use is part of a growing trend of postmodern conflict that makes notions of a Just War increasingly obsolete.

We may consider that UAVS are not the cause but the symptom of a “global war on terror,” and thus their use, while bringing somewhat unique dynamics to war, is not per se a unique problem in the practice of war. Yet the notion of space being challenged by UAVS suggests a further consideration for scholars and practitioners alike. UAV usage in a “global war on terror” forces all of us to come to grips with the de-territorialized and postmodern spatial context of this conflict—a conflict that is focused on the identity of individuals, the recognition of their faces rather than territorial spaces. If this is the case, then it is not just the notion of a Just War that’s problematic; it is the notion that this is “war” whatsoever. Targeted killing, perhaps; assassination, maybe;45 but not a war that can be easily categorized, or spoken to, by the tradition of Just War. John Williams gets to the crux of the problematic discourses, especially regarding terrorism, over the past decade:

The U.S. government in particular has recast transnational terrorist threats within a statist discourse. Labeling states members of an “axis of evil,” ascribing responsibility for combating terrorism to governments—it was governments who were to decide whether they were either “with us … or with the terrorists” … is telling of a stubbornly “Westphalian” world view. But more to the point here is that academic debate about Just War, humanitarian intervention and terrorism, especially when the latter two are connected, quickly does the same thing.46

Our point here is not that we need to return (as if that were possible) to the bygone days of Herz’s territorial state, but rather that it seems mere folly to continue to speak of actions like the use of UAVS purely within the language of a Just War tradition that continues to be shackled to principles that have not been properly debated, especially in the context of a “war on terror.”

A further point regarding the operational “space” of UAVS takes one of their primary benefits—the safety they provide human operators and thus the minimization of casualties for an attacking party—and turns it into a liability. In the words of Singer, “what seems so logical and reasonable to the side using them may strike other societies as weak and contemptible,” leading to assumptions that the UAV-using combatants are “cowardly.”47 In fact, in line with the notions of space discussed above, there is an inverse security relationship that develops with UAVS. While on the one hand those “pilots” flying the UAVS are perfectly safe in their bunkers in Nevada, those parties in the “kill zone” are vulnerable at any time, day or night. And both sets of “combatants” know this.

Regarding the notion of moral agency, we would concede that programming autonomous weapons systems to abide by the principle of discrimination seems relatively straightforward compared to programming them to make proportionality calculations. There is really no precise, quantitative way to compare the “good” accrued from neutralizing a certain military object with the “evil” done as a result of incidental collateral damage. Thus, programming a machine to make such a calculation may be an exercise in futility. In some circumstances, the lives of some noncombatants (children) will count more than those of others (munitions factory workers) in making proportionality calculations. For example, some weapons systems in development can be launched like a conventional missile and then “loiter” over enemy territory in order to select and attack their targets by detecting unique radar and mechanical signatures of enemy forces.48 Will such systems be able to tell if these enemy forces are ensconced in civilian areas and are being protected by human shields (and if so, which humans)? Most importantly, will they be able to calculate whether it is worth killing x number of civilians to take out a single enemy radar? The point is that proportionality calculations epitomize the inherent ambiguity in morality and are extremely context dependent, such that a preprogrammed computer algorithm does not possess the sorts of complex intuitions that humans have about right and wrong behavior that are required to make such calculations.
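
A minimal sketch makes the difficulty concrete. Suppose, purely hypothetically, that a proportionality routine were reduced to a weighted comparison; all of the names and numbers below are invented, and that is precisely the problem, since the weights encode contested moral judgments as arbitrary constants fixed in advance.

```python
CIVILIAN_WEIGHTS = {
    "child": 3.0,                     # counts "more" -- but three times more? why?
    "munitions_factory_worker": 1.0,
    "adult_bystander": 2.0,
}

def strike_permitted(target_value: float, expected_casualties: dict[str, int]) -> bool:
    """Compare weighted expected harm against the assigned value of the target."""
    weighted_harm = sum(CIVILIAN_WEIGHTS.get(kind, 2.0) * n
                        for kind, n in expected_casualties.items())
    return weighted_harm < target_value

if __name__ == "__main__":
    # Is one enemy radar "worth" two children and a bystander? The routine answers
    # instantly; whether the answer means anything is the question raised above.
    print(strike_permitted(target_value=10.0,
                           expected_casualties={"child": 2, "adult_bystander": 1}))
```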

There are thus two main challenges that Just War theorists must confront regarding the confounding of moral agency if the tradition is to continue to be relevant in an age of increasingly autonomous weapons. First, for those systems where a human decision maker is still involved in deciding to attack but trusts computers to identify targets, it seems fairly clear that moral responsibility still lies with the human operator. The problem is the moral buffer created by the fact that a computer identifies which targets are to be destroyed, which relieves the human operator of the important moral burden of deciding whether there is sufficient evidence to conclude that it is legitimate to destroy a particular target. Just War theorists thus need to seriously grapple with the question of whether trusting a supposedly superior computer to make targeting decisions is a sufficiently “reasonable precaution” to minimize risks to civilians, and how this implicates the common Just War precept that combatants are expected to put themselves in harm’s way to do so.

Second, for those weapons systems that will become fully autonomous, the challenge is immense, but we see a couple possible ways to proceed. The most straightforward would be to interpret the Just War tradition as simply forbidding the use of fully autonomous weapons. The basic argument would be that the ability to hold agents of war morally responsible for their actions is a necessary condition for fighting a Just War. Since autonomous weapons systems make this impossible, and since ethical behavior in war does not lend itself to the sort of mechanistic rule following that these systems are programmed to execute, then deploying autonomous weapons systems is unjust. One could also attempt to reformulate, or excise altogether, the concept of moral agency in order to accommodate actors that are autonomous yet lack intention, although this would fundamentally alter the normative substance of Just War principles and profoundly undermine their ability to provide sound moral guidance, thus possibly relegating them to irrelevance.

Yet given that the deployment of such weapons is likely to occur anyway, and given the centrality of moral agency to the Just War tradition, to argue that the use of such systems is consistent with a Just War requires at a minimum that one locate moral responsibility for the actions of these systems. If a machine cannot be held morally responsible, which people are responsible for the actions of these machines if they commit moral transgressions and why? Recent literature has explored some ways we might begin to address this issue through the appropriate designing of such systems, whereby clear responsibility would be allocated for each distinct function of the system, as well as the function of the system as a whole.49 Yet such a responsibility regime would need to be extremely precise and entered into voluntarily by weapons manufacturers and designers so that they know they might be held accountable for a war crime if their system fails.
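
What such a responsibility regime might record can be gestured at with a simple illustration. The structure below is a hypothetical sketch, not a description of any existing regime or of the specific proposals in the literature cited above; it merely shows the kind of explicit, voluntarily accepted mapping from system functions to accountable parties that the argument requires.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ResponsibilityAssignment:
    function: str   # e.g., "target classification", "weapons release authorization"
    party: str      # designer, manufacturer, commander, operator, ...
    accepted: bool  # entered into voluntarily, as the text requires

RESPONSIBILITY_MANIFEST = [
    ResponsibilityAssignment("sensor fusion / target classification", "software designer", True),
    ResponsibilityAssignment("weapons release authorization", "commanding officer", True),
    ResponsibilityAssignment("platform reliability and failure modes", "manufacturer", True),
]

def accountable_party(function: str) -> Optional[str]:
    """Look up who has accepted responsibility for a given function, if anyone."""
    for assignment in RESPONSIBILITY_MANIFEST:
        if assignment.function == function and assignment.accepted:
            return assignment.party
    return None  # the gap the chapter worries about: no agent left to hold responsible
```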

It is one thing to speak of “more moral” outcomes within war because of advances in technology. It is quite another to think that we (as scholars or practitioners) can effectively locate agency in such a diffuse environment. Territory and agency, we assert here, have both been central in discussions on the “morality” of war. Yet in this case, the centralization of moral blame or praise is no longer possible, as judgment scans past the combatants—the “armchair warrior” piloting a UAV from Nevada, the officer who orders the deployment of an autonomous weapon—and is seemingly back-filtered to the manufacturers of these weapons systems themselves. How are we to speak of “responsibility” within war in such a radically diffused environment? UAVS and autonomous weapons may be more accurate weapons of war and may very well lead to the eventual reduction of civilian casualties, but the concerns raised in this chapter no longer sit easily within a “tradition” that is at least a millennium old and that prides itself on providing us the means through which we can discuss a “morality” of war.

Notes

1. See, generally, Peter W. Singer, Wired for War: The Robotics Revolution and Conflict in the Twenty-First Century (New York: Penguin, 2009).

2. Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy 24, no. 1 (2007): 64; Ronald C. Arkin, “The Case for Ethical Autonomy in Unmanned Systems,” Journal of Military Ethics 9, no. 4 (2010): 332.

3. See, for example, the international reaction to Germany’s invasion of Belgium, an at-the-time neutral country, at the beginning of the First World War. Brent J. Steele, Ontological Security in International Relations (London: Routledge, 2008), chapter 5.

4. See Barak Mendelsohn, “Sovereignty under Attack: The International Society Meets the al Qaeda Network,” Review of International Studies 31, no. 1 (2005): 45–68.

5. John Williams, “Space, Scale and Just War: Meeting the Challenge of Humanitarian Intervention and Trans-national Terrorism,” Review of International Studies 34, no. 4 (2008): 581–600.

6. John H. Herz, “Rise and Demise of the Territorial State,” World Politics 9, no. 4 (1957): 473–93.

7. Ibid., 481.

8. Ibid., 483.

9. Ibid.

10. Ibid., 487.

11. Ibid.

12. Carl Schmitt, in his famous Nomos work, asserted in one important passage that “the core of the nomos lay in the division of European soil into state territories with firm borders” leading to the notion that this “land had a special territorial status in international law.” Carl Schmitt, The Nomos of the Earth in the International Law of the Jus Publicum Europaeum, trans. G. L. Ulmen (New York: Telos Press, [1950] 2003), 148. It should come as no surprise that Herz cites Schmitt in his article.

13. We recognize, as John Agnew and Stuart Corbridge observed some years ago, that the practice of territoriality has not led to purely uncontested, mutually exclusive claims over space. Agnew and Corbridge, Mastering Space: Hegemony, Territory, and International Political Economy (New York: Routledge, 1995), especially chapter 4.

14. See Andrew Eshleman, “Moral Responsibility,” The Stanford Encyclopedia of Philosophy (Winter 2009 ed.), ed. Edward N. Zalta, http://plato.stanford.edu/archives/win2009/entries/moral-responsibility/ (accessed April 25, 2013).

15. See, for instance, Toni Erskine, ed., Can Institutions Have Responsibilities? Collective Moral Agency and International Relations (New York: Palgrave Macmillan, 2004).

16. Michael Walzer, Just and Unjust Wars: A Moral Argument with Historical Illustrations, 4th ed. (New York: Basic Books, 2006), 287.

17. Eric A. Heinze and Brent J. Steele, “Introduction: Non-state Actors and the Just War Tradition,” in Ethics, Authority and War: Non-state Actors and the Just War Tradition, ed. Eric A. Heinze and Brent J. Steele (New York: Palgrave, 2009), 6. See also Harry D. Gould, this volume.

18. Cian O’Driscoll, “From Versailles to 9/11: Non-state Actors and Just War in the Twentieth Century,” in Heinze and Steele, eds., Ethics, Authority and War, 21–46, 37; “U.S., Pak Agree to ‘Quiet’ Hot Pursuit,” Indian Express, January 7, 2003, http://www.indianexpress.com/oldStory/16206/ (accessed April 25, 2013).

19. “Obama on Bin Laden: The Full 60 Minutes Interview,” CBS News, May 8, 2011, http://www.cbsnews.com/8301-504803_162-20060530-10391709.html (accessed April 25, 2013).

20. Jane Mayer quotes Mary Dudziak’s statement, “Drones are a technological step that further isolates the American people from military action, undermining political checks on … endless war.” Jane Mayer, “The Predator War,” New Yorker, October 28, 2009, http://www.newyorker.com/reporting/2009/10/26/091026fa_fact_mayer (accessed April 25, 2013).

21. This perspective can lead to a mystified contextualization of “targets” as well. Singer relates one vignette from 2002, when a tall thirty-year-old Afghan man, Daraz Khan, was killed in a drone strike because his height—relative to others with him—resembled that of bin Laden: “The men were wearing robes, were at a suspected terrorist hideout, and, most important, one of them was much taller than the others, as bin Laden was thought to be. As best as could be determined from seven thousand miles away, these were the men whom the Predator was looking for. As Pentagon spokeswoman Victoria Clarke explained, ‘We’re convinced that it was an appropriate target … [although] we do not yet know exactly who it was.’” Singer, Wired for War, 397 (emphasis added).

22. Hans J. Morgenthau, Politics among Nations, 7th ed. (1948; New York: McGraw-Hill, 2006), 250 (emphasis added).

23. Lauren Wilcox, “Compulsory Visibility: Violence, Bodies, and the Visual,” paper presented at the 2010 annual meeting of the International Studies Association, Northeast, Baltimore, November 2010.

24. Walzer describes these situations generally as “a soldier on patrol … catches an enemy soldier unaware, holds him in his gunsights, easy to kill, and then must decide whether to shoot him or let the opportunity pass.” Walzer, Just and Unjust Wars, 138–39.

25. Mayer, “Predator War.”

26. Sebastian Kaempf, this volume.

27. Singer, Wired for War, 124–25.

28. Ibid., 125.

29. Robert Sparrow, “Building a Better WarBot: Issues in the Design of Unmanned Systems for Military Applications,” Science and Engineering Ethics 15, no. 1 (2009): 183. See also Mary L. Cummings, “Automation and Accountability in Decision Support System Interface Design,” Journal of Technical Studies 32, no. 1: 23–31.

30. Walzer, Just and Unjust Wars, 156.

31. Mayer, “Predator War.”

32. Again, Kaempf (this volume) notes in a similar vein that “most servicemen and women are no longer soldiers in a conventional sense. Instead, they have become machine- and technology-assisted agents.”

33. Ibid. Peter Singer quotes one robotics expert, “how do we transition authority for lethal action to the machine?” Singer, Wired for War, 400.

34. Sparrow, “Killer Robots,” 69.

35. Ibid., 70.

36. See Noel Sharkey, “The Ethical Frontiers of Robotics,” Science 322, no. 5909 (December 19, 2008): 1800–1801. See also John P. Sullins, “When Is a Robot a Moral Agent?” International Review of Information Ethics 6 (2006): 23–30.

37. What one 2006 account described as “what is believed to be the world’s first incident in which a civilian has been accidentally killed by a military unmanned air vehicle” involved a UAV operated by the Belgian army in the Congo. The UAV crashed and killed one woman and injured three others on the ground. See “Belgians in Congo to Probe Fatal UAV Accident,” Flightglobal, October 10, 2006, http://www.flightglobal.com/articles/2006/10/10/209752/belgians-in-congo-to-probe-fatal-uav-incident.html (accessed April 25, 2013).

38. Deborah G. Johnson, “Computer Systems: Moral Entities but Not Moral Agents,” Ethics and Information Technology 8, no. 4 (2006): 201.

39. Sparrow, “Killer Robots,” 65. See also Gould, this volume.

40. Gould, this volume.

41. Arkin, “Case for Ethical Autonomy,” 332.

42. John Kaag and Whitley Kaufman, “Military Frameworks: Technological Knowhow and the Legitimization of Warfare,” Cambridge Review of International Affairs 22, no. 4 (2009): 601.

43. Quoted by Nic Fleming, “Robot Wars ‘Will Be a Reality within 10 Years,’” Telegraph, February 27, 2008, http://www.telegraph.co.uk/earth/earthnews/3334341/Robot-wars-will-be-a-reality-within-10-years.html.

44. Kaag and Kaufman, “Military Frameworks,” 600.

45. See Avery Plaw, Targeting Terrorists (London: Ashgate, 2008).

46. Williams, “Space, Scale and Just War,” 595.

47. Singer, Wired for War, 312.

48. Noel Sharkey, “Saying ‘No!’ to Lethal Autonomous Targeting,” Journal of Military Ethics 9, no. 4 (2010): 370.

49. Sparrow, “Building a Better WarBot,” 179.
