
Autonomous Weapon Systems, Quo Vadis?


By Iness Arabi

Abstract

This paper examines the implications arising from states’ use of autonomous weapon systems in armed conflict. The analysis starts by addressing the definitional problems found in the literature on autonomous weapon systems. The primary finding is that the differential feature of autonomous weapon systems is their ability to select among targets and decide to kill without human oversight. The paper then delves into the effect that the increasing use of autonomous weapon systems has on conflict and war and the resulting policy implications for states and the international community as a whole. I conclude by discussing the legal, ethical, and moral implications of the use of weapons that can kill autonomously, which are at the core of the debate.

Keywords: autonomous weapon systems, unmanned weapons, foreign policy, law of armed conflict.

 

  1. Introduction

We find ourselves in a time in which the rapid advances of technology profoundly affect, if not completely revolutionize, how the world operates. From the rise of cyberterrorism to the effects of social media on democracy, international relations have not remained unscathed. Even so, the most Terminator-like concern has been one raised in the last decade: the creation of ‘killer robots’.[1] What could have well been the plot of a science-fiction movie is now the concern of academics and policy-makers alike.

The revolutionary effect of autonomous weapons systems on warfare and state relations has been likened to that of gunpowder, computers, and even electricity.[2] In the face of such sensationalism, we must ask ourselves: why are autonomous weapons set to change the world as we know it? More importantly, how do we ensure that we are two steps ahead of these ‘killer robots’?

This paper will answer these questions by touching upon the following issues. First, in an aim to bring clarity to what autonomous weapon systems are, it will address the definitional problems found in the literature. More specifically, it will analyze what ‘autonomy’ is and where different stakeholders draw the line of autonomy. Second, the paper will address the policy implications of autonomous weapon systems. Finally, it will turn to the ethical, legal, and moral implications of these machines that have been raised in the public debate.

  2. Autonomous Weapon Systems: What is in a name?

The advent of autonomous weapon systems has gained much momentum in public policy and has been closely anticipated and monitored. Part of the reason for this is the widespread belief that these ‘killer robots’ are unique and revolutionary. If this is the case, we must ask ourselves why that is. What exactly makes these weapons so different from their predecessors? It would seem that the answer lies in their ‘autonomy’.

 

  2.1. Existing definitions

The United States Department of Defense has defined autonomous weapon systems as systems that “once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.”[3]

 

Conversely, it defines semi-autonomous weapon systems as systems that “once activated, [are] intended to only engage individual targets or specific target groups that have been selected by a human operator.”[4] The main point is that “human control is retained over the decision to select individual targets and specific target groups for engagement.”[5]

 

Some scholars[6] have rightly pointed out that, in an abstract sense, weapons such as landmines could qualify as autonomous weapon systems under that definition, as they are triggered without a human operator. In other words, there is no human oversight over who the target is. Given this ambiguity, it has been necessary to narrow the function of ‘select’ to ‘select among’ targets. Under this development, ‘selection among’ would entail that there is “a machine-generated targeting decision made; some form of computational cognition, meaning some form of AI or logical reasoning, is inherently part of autonomous weapon systems in the contemporary debate.”[7] Consequently, autonomous weapon systems would possess “some decisional capability to ‘select’ and ‘engage’.”

 

Figure 1[8]

 

  2.2. Drawing the line at “autonomy”

According to these definitions, it would seem that the line of autonomy is drawn at the decision-making level and more specifically in the selection of targets. This distinction has been corroborated by the International Committee of the Red Cross, which has defined autonomous weapons systems as “any weapon system with autonomy in its critical functions—that is, a weapon system that can select (search for, detect, identify, track or select) and attack (use force against, neutralize, damage or destroy) targets without human intervention.”[9]

 

Alternatively, some authors[10] have argued that a dichotomous division is not reflective of the practical reality of these weapons. Instead, the level of autonomy of different weapon systems will depend on the interactions between human operators and machine functions and should be assessed on a case-by-case basis. Others[11] have posited that the term ‘autonomous systems’ creates confusion and ambiguity, as it clusters together systems that are fundamentally different by using ‘autonomy’ as their main label, above all other features and capabilities.[12] These scholars have proposed an alternative nomenclature for these systems: ‘autonomous function in a system’.

 

Figure 2[13]

 

While there may be divergence in the literature regarding the definition of and the nomenclature given to autonomous weapon systems, there is consensus that ahead of us lies an increase in levels of autonomy until the human role is negligibly small. In all likelihood, human intervention will be limited to activating the weapons.[14]

  3. Policy Implications

One of the concerns raised by political scientists and policymakers is how the advent of autonomous weapon systems will impact the likelihood of conflict and war. The main argument here is that the development and use of lethal weapons that “pose little risk to the lives of the operators” removes a potent deterrent for armed conflict[15] and will consequently “revolutionize warfare.”[16] This revolution would come, on the one hand, from the decrease in the operational cost of war, which would “democratize” warfare by increasing the military capabilities of smaller states[17], and, on the other hand, from the disappearance of the transaction cost that comes with sending troops to combat. The latter effectively de-politicizes the question of whether to go to war, as it stops being a high-cost issue for the constituency or a polarizing issue in public opinion. In other words, the concern is: what will warfare look like once it is no longer an issue of public debate?

The first implication, namely the democratization of warfare, could have profound consequences for the global balance of power, similar to, though not on the scale of, those of nuclear weapons. Additionally, many policy papers[18][19] have warned against the effects that these weapons would have on global terrorism. Some even contend that “a new arms race appears inevitable alongside a new set of dangers from terrorism.”[20]

In the face of such possibilities, many have called for a complete ban of autonomous weapon systems. In fact, in 2015, an open letter signed by over three thousand leading AI researchers was presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, calling for a ban on offensive autonomous weapons. Other experts have taken more strategic approaches[21] and have set out strategy plans to ensure their state’s superiority in the field. Others prefer a more laissez-faire approach, claiming that, because autonomous weapon systems are already being used lawfully today, international law already regulates their creation, development, and use.

When it comes to ensuring a successful ban, Rebecca Crootof, an expert on autonomous weapon systems and author of “The Killer Robots Are Here”, has identified the factors that have led to bans of previous weapons and contends that at least one of them needs to apply for the ban of any type of weapon system to be practical and successful[22]: weapons causing superfluous injury or unnecessary suffering, inherently indiscriminate weapons, ineffective weapons, other existing means for accomplishing the same military objective, clear and narrowly tailored prohibitions, prior regulation, public concern and civil society engagement, and sufficient state commitment. Crootof claims that the only factor applicable to a ban of autonomous weapon systems is “public concern and civil society engagement”, particularly because: (i) states already use autonomous weapon systems, and (ii) the most common concerns (which will be addressed later in this paper) are framed in ethical, legal, or moral terms. Crootof draws a parallel with the Mine Ban Convention and the Convention on Cluster Munitions, the success of which has been attributed mainly to the participation of nongovernmental organizations and other civil society representatives.[23]

  4. Other considerations

The debate about autonomous weapon systems, which has spread to the realm of public opinion, has been framed in ethical, legal, and moral terms. Is it ethical for us to allow machines to decide whom to target?[24] Are autonomous weapon systems in breach of the distinction principle of international humanitarian law?[25] These are the questions that one can find in the literature on autonomous weapon systems. This paper will continue by addressing the implications arising from such concerns.

  4.1. Ethical, legal, and moral considerations

 

Of the ethical, legal, and moral dilemmas that autonomous weapon systems pose for a number of scholars, this paper will address the following: (i) do autonomous weapon systems currently fulfill the requirements of the law of armed conflict under international humanitarian law to be lawfully used, and if not, will they ever?; (ii) do autonomous weapon systems hinder or impede accountability in armed conflicts?; and, most importantly, (iii) do human beings have the moral monopoly on killing?

Many scholars[26][27] have addressed the common and popular claim that autonomous weapon systems will never be able to comply with the law of armed conflict. I will proceed by deconstructing this claim.

First and foremost, it would seem that this claim rests on assumptions about how technology, artificial intelligence, and weaponry will evolve in the future, namely that they will never evolve in a way that fulfills the set of requirements imposed by international humanitarian law. It is true that machines and weapon systems may never develop moral and ethical values. However, this should not give way to skeptical and unfounded assumptions about technological evolution. Instead, it should incentivize engineers, policymakers, and legal authorities to develop ways to circumvent this issue.

Second, it rests on assumptions about how international humanitarian law will evolve and, specifically, on its supposed lack of flexibility. While it is true that many of the principles that are the backbone of international humanitarian law today have been in use for decades, if not centuries, the law has also proven to be flexible enough to address the emerging issues it has been faced with over time. If the law remains static while reality is in constant motion and evolution, we will find ourselves operating within an obsolete and outdated framework. Moreover, as a counter-argument to their allegedly inherent “unlawful nature”, it has been pointed out that autonomous weapon systems are currently being employed lawfully.[28]

Within this broader legal debate, much attention has been paid to the principle of distinction, namely the legal precept that differentiates between “military objectives and civilian objects, combatants and civilians, and active combatants and those hors de combat.”[29] Military commanders and actors in conflict must abide by this principle, and by extension, so must autonomous weapon systems. On the one hand, most scholars and experts agree that autonomous weapon systems are incapable of distinguishing between combatants and civilians[30], thus rendering them unlawful under the distinction principle. On the other hand, some have raised doubts about the ability of humans to make such distinctions, especially in the fog of war. The difference, it would seem, between an autonomous weapon system and a human commander, neither of which abides by the distinction principle, is that the human commander can be held accountable for a breach of international humanitarian law, while the machine cannot.

This takes us to the second concern, namely whether the use of autonomous weapon systems can hinder accountability in the realm of armed conflict. The International Committee of the Red Cross has been very categorical in its view on this issue and has stated that “all obligations under international law and accountability for them cannot be transferred to a machine, computer program or weapon system.”[31] Consequently, these weapons “should be banned because machine decision-making undermines, or even removes, the possibility of holding anyone accountable in the way and to the extent that, for example, an individual human soldier might be held accountable for unlawful or even criminal actions.”[32] This argument relies on the weight that individual criminal responsibility carries in international law. While the importance of the emergence of individual criminal responsibility in the last half-century and the impact and contribution of its institutions (the International Criminal Court, the Nuremberg trials, the International Military Tribunal for the Far East, the International Criminal Tribunal for the former Yugoslavia, the International Criminal Tribunal for Rwanda, etc.) is undeniable, it is also true that “effective adherence to the law of armed conflict traditionally has come about through mechanisms of state (or armed party) responsibility.”[33] Thus, the use of autonomous weapon systems would not impede the establishment of responsibility for the party that has unlawfully deployed them.

The last, and perhaps most important, question is, put in simple terms, whether machines can morally decide to kill. This question rests on the underlying premise that human beings have the monopoly on morality and, by extension, on moral killing. Human beings have decided what is moral throughout time and space. More recently, social psychology has introduced the idea of ‘framing’ as the way in which public opinion, and by extension, common notions of morality and ethics, are shaped. The question is no longer whether machines are morally able to kill, but instead, whether machines can kill within the framework of morality created by human beings at a certain point in time and space.

The American roboticist Ronald C. Arkin has addressed this issue by developing the eponymous Arkin test, under which “an unmanned platform fulfills the demands of law and morality (and may therefore be permissibly employed) when it can be shown to comply with legal and moral requirements and constraints as well or better than a human under similar circumstances.”[34] It seems that, nowadays, no machine passes the Arkin test. Currently, the largest effort to reproduce human conscience in a machine is ‘strong AI’, which would replicate human decision-making processes and capabilities in machines. This raises the question: is this a desirable thing for society?

The questionable assumption behind the arguments in favor of ‘strong AI’ and the Arkin test is that because human beings can act morally, they do act morally. Furthermore, it harbors the idea that human capabilities somehow render decisions safer or more reliable, thereby taking human failings and error out of the equation. This assumption ignores the flip side of the coin, which is that any notion of morality inherently carries with it notions of immorality. In other words, if human beings can be moral, they can also be immoral and act immorally. Machines, on the other hand, act and operate outside of the framework of morality. They, like animals, are amoral. So far, the amorality of machines has been implicitly equated with the immorality of humans, but these are profoundly distinct. As some scholars have pointed out, the fact that machines do not pass the Arkin test and may never pass it can also be cause for celebration, as it gives us the reassurance that unmanned systems could not emulate any undesirable human reactions[35], which until now have been behind many military catastrophes. This is because machines “do not care, they have no interests, intentions, or self-regard, they harbor no ambitions or hatred, and they are utterly incapable of the ‘interiority’ characteristic of self-consciousness.”[36] And so, we reach the conclusion that not only is it impossible for robots to be human but, for the time being, we do not wish them to be.

  5. Conclusion

Autonomous weapon systems have made headlines in recent decades, causing equal amounts of outrage and praise in civil society and in academic debate. This is mainly due to their differential feature: autonomy. Put simply, autonomous weapon systems have the ability to select among targets and decide to kill without any human intervention or oversight.

The arguments against the use of autonomous weapon systems are political, legal, and moral. Politically, it would seem that these weapons may incentivize states and non-state actors, such as terrorist groups, to turn to armed conflict. Legally, the lack of human oversight over decisive actions in conflict may impede the establishment of individual criminal responsibility. Morally, it would seem that giving machines the power to decide on the life of a human being is wrong.

Proponents, on the other hand, refute these arguments and find that autonomous weapon systems may make conflict less costly and more efficient. Politically, the deployment of troops, and with it the risk of casualties, is reduced or even eliminated. Legally, the use of autonomous weapon systems does not affect the establishment of responsibility for each party in an armed conflict. Finally, morally, autonomous weapon systems substitute algorithms and lines of code for human emotions and interests, thus eliminating human error from the decision to kill. Outside of this debate, the reality is that states currently deploy autonomous weapon systems in combat. Civil society, however, remains strongly against their use and calls for a complete ban of these weapons. Only time will tell whether the people’s voices will be loud enough to be heard.

 

Bibliography

Crootof, Rebecca. “The killer robots are here: legal and policy implications.” Cardozo L. Rev. 36 (2014): 1837.

United States, Department of Defense, Executive Service Directorate. “Department of Defense Directive 3000.09.” Department of Defense Directive, ser. 3000.09, 2012.

Anderson, Kenneth, and Matthew C. Waxman. “Debating Autonomous Weapon Systems, their Ethics, and their Regulation under international law.” (2017).

Williams, Andrew. “Defining Autonomy in Systems: Challenges and Solutions.” Issues for Defence Policymakers (2015): 27.

Davison, Neil. “A legal perspective: Autonomous weapon systems under international humanitarian law.” Perspectives on lethal autonomous weapon systems (2017): 5-18.

Scott, Ben, Stefan Heumann, and Philippe Lorenz. “Artificial Intelligence and Foreign Policy.” Stiftung Neue Verantwortung Policy Brief (2018).

Lucas Jr, George R. “Automated Warfare.” Stan. L. & Pol’y Rev. 25 (2014): 317.


 

Endnotes

[1] Crootof, Rebecca. “The killer robots are here: legal and policy implications.” Cardozo L. Rev. 36 (2014): 1837.

[2] Naval Research Committee: Autonomous and Unmanned Systems in the Department of the Navy

[3] United States, Department of Defense, Executive Service Directorate. “Department of Defense Directive 3000.09.” Department of Defense Directive, ser. 3000.09, 2012.

[4] Ibid.

[5] Ibid.

[6] Anderson, Kenneth, and Matthew C. Waxman. “Debating Autonomous Weapon Systems, their Ethics, and their Regulation under international law.” (2017).

[7]  Anderson, Kenneth, and Matthew C. Waxman. “Debating Autonomous Weapon Systems, their Ethics, and their Regulation under international law.” (2017).

[8] Figure 1: Crootof, Rebecca. “The killer robots are here: legal and policy implications.” Cardozo L. Rev. 36 (2014): 1837.

 

[9] Davison, Neil. “A legal perspective: Autonomous weapon systems under international humanitarian law.” Perspectives on lethal autonomous weapon systems (2017): 5-18.

[10] Anderson, Kenneth, and Matthew C. Waxman. “Debating Autonomous Weapon Systems, their Ethics, and their Regulation under international law.” (2017).

[11] Williams, Andrew. “Defining Autonomy in Systems: Challenges and Solutions.” Issues for Defence Policymakers (2015): 27.

[12] Ibid.

[13] Figure 2: Williams, Andrew. “Defining Autonomy in Systems: Challenges and Solutions.” Issues for Defence Policymakers (2015): 27.

[14] United States, Department of Defense, Executive Service Directorate. “Department of Defense Directive 3000.09” Department of Defense Directive , ser. 3000.09, 2012.

[15] Scott, Ben, Stefan Heumann, and Philippe Lorenz. “Artificial Intelligence and Foreign Policy.” Stiftung Neue Verantwortung Policy Brief (2018).

[16] Ibid.

[17] Allen, Greg, and Taniel Chan. Artificial Intelligence and National Security. A study on behalf of Dr. Jason Matheny, Director of the U.S. Intelligence Advanced Research Projects Activity (IARPA).

[18] Ibid.

[19] Williams, Andrew. “Defining Autonomy in Systems: Challenges and Solutions.” Issues for Defence Policymakers (2015): 27.

[20] Scott, Ben, Stefan Heumann, and Philippe Lorenz. “Artificial Intelligence and Foreign Policy.” Stiftung Neue Verantwortung Policy Brief (2018).

[21] Allen, Greg, and Taniel Chan. Artificial Intelligence and National Security. A study on behalf of Dr. Jason Matheny, Director of the U.S. Intelligence Advanced Research Projects Activity (IARPA).

[22] Crootof, Rebecca. “The killer robots are here: legal and policy implications.” Cardozo L. Rev. 36 (2014): 1837.

[23] Ibid.

[24] Crootof, Rebecca. “The killer robots are here: legal and policy implications.” Cardozo L. Rev. 36 (2014): 1837.

[25] Crootof, Rebecca. “The killer robots are here: legal and policy implications.” Cardozo L. Rev. 36 (2014): 1837.

[26] Crootof, Rebecca. “The killer robots are here: legal and policy implications.” Cardozo L. Rev. 36 (2014): 1837.

[27] Anderson, Kenneth, and Matthew C. Waxman. “Debating Autonomous Weapon Systems, their Ethics, and their Regulation under international law.” (2017).

 

[28] Crootof, Rebecca. “The killer robots are here: legal and policy implications.” Cardozo L. Rev. 36 (2014): 1837.

[29] Davison, Neil. “A legal perspective: Autonomous weapon systems under international humanitarian law.” Perspectives on lethal autonomous weapon systems (2017): 5-18.

[30] Crootof, Rebecca. “The killer robots are here: legal and policy implications.” Cardozo L. Rev. 36 (2014): 1837.

[31] Davison, Neil. “A legal perspective: Autonomous weapon systems under international humanitarian law.” Perspectives on lethal autonomous weapon systems (2017): 5-18.

[32] Anderson, Kenneth, and Matthew C. Waxman. “Debating Autonomous Weapon Systems, their Ethics, and their Regulation under international law.” (2017).

[33] Anderson, Kenneth, and Matthew C. Waxman. “Debating Autonomous Weapon Systems, their Ethics, and their Regulation under international law.” (2017).

[34] Lucas Jr, George R. “Automated Warfare.” Stan. L. & Pol’y Rev. 25 (2014): 317.

[35] Ibid.

[36] Ibid.

 
