Archives for posts with tag: lethal autonomous weapons systems

There is nothing “new” in this “new report” apart from yet another synonym for “killer robot” to add to an already over-long list that includes lethal autonomous robot, lethal autonomous weapons system, unmanned weapons system, autonomous weapons system and autonomous weapon. There are myriad others. We now have “fully autonomous weapon” to add as well.

I’ll stick to the term lethal autonomous weapons system (LAWS) mainly because that is what the diplomats attending the Expert Meeting on the Convention on Certain Conventional Weapons used last year. And that is the term they are using this year.

LAWS is a sensible term that is neither “emotive” (Heyns, 2013) nor an “insidious rhetorical trick” (Lokhorst & van den Hoven, 2011). It covers complex distributed weapons systems with multiple integrated components that are actually fielded, systems that are likely to evolve into “off the loop” LAWS and, in the absence of regulation, from that point into “beyond the loop” weapons systems that might have “machine learning” and “genetic algorithms” that “evolve” and “adapt” and indeed might turn into Skynet in due course.

Walking, talking, human-scale, titanium-skulled killer robots with beady red eyes are not actually fielded by anybody yet except for James Cameron in his Terminator flicks. But they are more scary and the hope of the Scare Campaign is that fright will make right.

Indeed, this kind of tabloid trash “argument” might get a headline, but to persuade an audience of diplomats, who are very bright and very sharp, the calibre of the argument needs to be far better than the vague and recycled confusions of Mind the Gap.

The report makes various points about “the lack of accountability for killer robots,” all of which have been made before. The two-word solution to the “problem” of “killer robot accountability” would be “strict liability,” as suggested by the Swedish delegation (among others) last year.

Scare campaigners please put that in your draft Protocol VI of the CCW.

Better still, how about actually drafting a Protocol VI and putting it out for discussion?

Clarify what exactly it is that you want.

Mind the Gap does have some mildly original confusion about the meaning of “autonomous” and some spectacular question begging to accompany the well-worn rhetorical tricks.

Line 1:

Fully autonomous weapons, also known as “killer robots,” raise serious moral and legal concerns because they would possess the ability to select and engage their targets without meaningful human control.


So we open with the customary “emotive” and “insidious” tabloid language “killer robots,” we use this recycled and as yet undefined term “meaningful human control” and we blithely assert that fully autonomous weapons (whatever that means) do not have meaningful human control (whatever that means). We beg and blur the decisive question right from the start.

Later in the paper “fully autonomous weapons” are defined as human “off the loop” as distinct from “in the loop” and “on the loop” weapons. This assumes that a strictly causal, human-programmed artefact making delegated decisions on the basis of objective sensor data according to human defined policy norms is not in any sense under “meaningful human control.”
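The point can be made concrete with a toy sketch (all names, classes and thresholds here are hypothetical, invented purely for illustration, and bear no relation to any fielded system): an “off the loop” artefact only ever evaluates rules a human wrote, against sensor data, and cannot engage when those human-authored rules do not authorize it.

```python
# Toy illustration (not a real weapons architecture): a strictly causal,
# human-programmed artefact applying human-defined policy norms to
# objective sensor data. Every name and number here is hypothetical.

from dataclasses import dataclass

@dataclass
class Track:
    """A hypothetical sensor track."""
    classification: str   # e.g. "anti-ship-missile", "airliner"
    speed_m_s: float      # measured speed in metres per second
    closing: bool         # is the track closing on the defended asset?

# Human-defined policy norms, fixed before deployment.
HUMAN_AUTHORIZED_CLASSES = {"anti-ship-missile"}
MIN_ENGAGE_SPEED_M_S = 300.0

def engage_decision(track: Track) -> bool:
    """The machine 'decides' only in the sense of evaluating
    human-authored rules against sensor data."""
    return (
        track.classification in HUMAN_AUTHORIZED_CLASSES
        and track.speed_m_s >= MIN_ENGAGE_SPEED_M_S
        and track.closing
    )

missile = Track("anti-ship-missile", 800.0, closing=True)
airliner = Track("airliner", 250.0, closing=True)
print(engage_decision(missile))   # True: human-authored rules authorize engagement
print(engage_decision(airliner))  # False: not a human-authorized class
```

Every branch of the decision traces back to a human-defined constant or rule, which is precisely why calling such an artefact free of “meaningful human control” begs the question.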

Much confusion is added by careless “personification” of machines. Consider this line:

On the one hand, while traditional weapons are tools in the hands of human beings, fully autonomous weapons, once deployed, would make their own determinations about the use of lethal force.

This language “their own determinations” suggests there is some cognitive element in the programmed machine that is not a human-defined instruction. There is no “I” in the robot. It has no values on the basis of which it can make choices.

Line 2:

Many people question whether the decision to kill a human being should be left to a machine.

People in real wars have been leaving the decision to kill human beings to machines since 1864 and probably earlier. The Union lost several men to Confederate “torpedoes” (landmines) on Dec 13th, 1864 in the storming of Fort McAllister at the end of Sherman’s infamous March to the Sea. Militaries continue to delegate lethal decisions to machines by fielding anti-tank and anti-ship mines which remain lawful “off the loop” weapons.

Line 2 is actually a very fair question and worthy of deeper analysis which, alas, you will not find in Mind the Gap. How exactly a “decision” differs from say a “reaction” and a “choice” (as defined in the Summa Theologica) is a deep and interesting philosophical question.

Moving on.

Fully autonomous weapons are weapons systems that would select and engage targets without meaningful human control. They are also known as killer robots or lethal autonomous weapons systems. Because of their full autonomy, they would have no “human in the loop” to direct their use of force and thus would represent the step beyond current remote-controlled drones.

The tacit assumption here is that the human “in the loop” will guarantee better human rights outcomes. “Meaningful human control” gave us the Somme, the Holocaust and the Rwandan Genocide. Frankly, I am not automatically signed on to this assumed Nirvana of “meaningful human control.”

Meaningful legal control is far more reassuring. And if a programmed robot can be engineered to do this better than the amygdalas of 18-25 year old males with testosterone and cortisol pulsing through their blood-brain interfaces, then I do not (as yet) see compelling reasons as to why such R & D possibilities should be “comprehensively and pre-emptively” banned, especially on the basis of a conceptually muddled scare campaign expressed in tabloid language.


Heyns, C. (2013). Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns. Retrieved 16th Feb 2015, from

Lokhorst, G.-J., & van den Hoven, J. (2011). Responsibility for Military Robots. In P. Lin, K. Abney, & G. Bekey (Eds.), Robot ethics: the ethical and social implications of robotics (pp. 145-156). Cambridge MA: MIT Press.

The Professor’s visit got quite a bit of media coverage. Some links here.

Mike Grimshaw Newstalk ZB (Radio)


Sydney Morning Herald

NZ Herald

3 News (NZ)

Yahoo! NZ News


Scoop NZ

It’s been quite a while since I dealt with media in my capacity as advisor to Warren Entsch … but it’s a bit like riding a bike.

Once you learn, you don’t forget…

Briefing Note for Policy Makers

Lethal Autonomous Weapons Systems (LAWS)


ISSUE: Regulation or ban of Lethal Autonomous Weapons Systems (LAWS).

LANGUAGE: Some regard “killer robots” as “emotive” or “pejorative” language e.g. Heyns (2013). Lethal Autonomous Weapons Systems (LAWS) is the current diplomatic term.

CONTEXT: Debate on LAWS is on the agenda of a meeting of High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) at the UN in Geneva April 13-17. The issue was discussed at an Expert Meeting in May 2014. Australia and NZ are High Contracting Parties to the CCW. Existing International Humanitarian Law (IHL) already regulates LAWS.

NGO: The Campaign to Stop Killer Robots is an umbrella group of various human rights organizations. They are lobbying for a comprehensive and pre-emptive ban on killer robots (LAWS). There is an NZ branch.

POLICY: The CCW is generally regarded as the appropriate forum for LAWS.

There are five broad policy options.

  1. Ban LAWS.
  2. Regulate LAWS (by a Protocol VI to be added to the CCW or some other treaty).
  3. Status quo (i.e. rely on existing IHL and take no action). IHL as is regulates LAWS already.
  4. Impose a moratorium pending a later decision to ban/regulate/rely on existing IHL (effectively a temporary version of policy 1 above).
  5. Defer any decision pending further discussion (effectively policy 3 above).

BAN: A ban on LAWS could be modelled on Protocol IV of the CCW (which banned blinding lasers) or the Ottawa Convention (which banned anti-personnel landmines).

The ban argument is based on several claims.

  1. Robots cannot technically comply with core principles of IHL: they cannot discriminate between combatant and non-combatant, cannot make proportionality calculations, and cannot be held responsible.
  2. Appeals to moral intuition: robots should not make the decision to kill humans, and robots should not have the power of life and death over humans.
  3. Proliferation and cultural concerns: lethal robots will make bad governments worse, and robots will exacerbate the decline and extinction of martial valour already started by drone warfare.

REGULATION: LAWS regulation may be modelled upon Protocol II of the CCW. This regulated anti-personnel mines and defined the conditions of military necessity under which they could be used and provided explicit regulation to protect civilians.

Regulation would explicitly affirm the applicability of IHL to LAWS. It would require that norms be encoded in robots to constrain their behaviour so that they act in strict accordance with IHL. The main argument against a ban is that lethal autonomy (e.g. Aegis, Patriot, C-RAM, Iron Dome) already exists and such systems will evolve further to make faster decisions. Human cognition will not be able to compete with the speed of machine decision making (e.g. in a future air war between peers). The defence of service personnel in the conduct of their military duties (and of the nation more broadly) will therefore require increasing use of autonomous weapons. Thus they should be regulated, not banned.

STATUS QUO: IHL is a broad framework designed to deal with the evolution of weapons and warfare. The key principles of necessity, discrimination, proportionality and responsibility are almost universally accepted and of broad scope. Thus if a robot cannot discriminate, calculate proportionality and if responsibility cannot be assigned for its use and the military acts it performs are not necessary, its use would already be illegal and thus there is no need to ban what is already banned. This is the UK position as stated by Under-Secretary Alistair Burt in 2013. (NB. When following the above link, search for the section entitled Lethal Autonomous Robotics some way down the page.)

Given the breadth of scope of LAWS, it may be unworkable to have a treaty instrument that enters into the detail of Protocol II to protect civilians for every conceivable system capable of lethal autonomy. (See in particular the Technical Annex to Protocol II of the CCW which goes into very specific detail defining the requirements for lawful anti-personnel landmines. Anti-personnel landmines as regulated by Protocol II were lawful from 1980 to 1999 when the Ottawa Convention became binding IHL.)

DEFER: There is always the option to have more discussion or to defer a decision. In the meantime, there might be a temporary moratorium or reliance on existing IHL pending an eventual choice of ban, regulation or reliance on IHL.

DEFINITIONS: LAWS are commonly divided into three types. Some refer to a fourth type.

Type I, “in the loop”: A human must approve the kill decision and must act to confirm the kill.

Type II, “on the loop”: A human can disapprove the kill decision, but the robot will kill in the case of human inaction.

Type III, “off the loop”: A human neither approves the kill decision nor can intervene to disapprove it. Humans authorize the kill rules; while autonomous in its targeting decisions, the robot cannot disobey the human-authorized rules.

Type IV, “beyond the loop”: The robot is “free” to overwrite, reject, vary or supplement the rules put into it. This overwriting would be done on the basis of human-level “autonomous” features such as “machine learning” and “genetic algorithms.” The robot has “adaptive” features that allow it to go “beyond” its programming in some sense.
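The difference between the first three types above can be sketched as simple control flow (a hypothetical, simplified illustration of the taxonomy, not the logic of any fielded system; Type IV is omitted because, by definition, its behaviour is not fixed by a human-authored program):

```python
# Simplified, hypothetical illustration of Types I-III above.
from enum import Enum

class LoopType(Enum):
    IN_THE_LOOP = 1    # Type I: human must actively confirm the kill
    ON_THE_LOOP = 2    # Type II: human may veto; inaction means engagement proceeds
    OFF_THE_LOOP = 3   # Type III: no human approval or veto at decision time

def engage(loop_type: LoopType, rules_authorize: bool,
           human_confirmed: bool = False, human_vetoed: bool = False) -> bool:
    if not rules_authorize:          # in all three types the robot cannot
        return False                 # disobey the human-authorized rules
    if loop_type is LoopType.IN_THE_LOOP:
        return human_confirmed       # Type I: engagement requires human action
    if loop_type is LoopType.ON_THE_LOOP:
        return not human_vetoed      # Type II: engagement unless human acts
    return True                      # Type III: no human in the decision cycle

print(engage(LoopType.IN_THE_LOOP, rules_authorize=True))    # False: no confirmation
print(engage(LoopType.ON_THE_LOOP, rules_authorize=True))    # True: no veto given
print(engage(LoopType.OFF_THE_LOOP, rules_authorize=True))   # True
print(engage(LoopType.OFF_THE_LOOP, rules_authorize=False))  # False: rules constrain
```

Note that even in the Type III branch the `rules_authorize` check runs first: the sketch makes visible the claim in the table that an “off the loop” robot remains bound by human-authorized rules.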


AUTONOMOUS: Definitions of “autonomous” vary. Some roboticists define “autonomous” simply as “no human finger on the trigger”; others consider “autonomous” to imply some “machine learning” capability such that the machine could “create its own moral reality” (Boella & van der Torre, 2008). Robots that are “autonomous” in this sense do not yet exist (connected to weapons), though they are being researched. Above they are characterized as Type IV “beyond the loop” LAWS. Responsibility for the acts of such robots is a major issue. The Campaign to Stop Killer Robots would like to see such machines comprehensively and pre-emptively banned.

BAN/REGULATE: There is obviously much grey between the ban and regulate positions. Some nations (e.g. Pakistan) are calling for a ban on remotely piloted drones, which are Type I human “in the loop” weapons that are partly “autonomous.” Most nations are cautious and are seeking better definitions, in order to clarify what exactly should be banned and/or regulated.

Prepared By

This briefing note was prepared by Sean Welsh, a PhD student in the Department of Philosophy at the University of Canterbury.

The working title of his doctoral dissertation is Moral Code: Programming the Ethical Robot. Prior to embarking on his PhD, Sean worked in software development for 17 years.

References / Further Reading

Anderson, K., & Waxman, M. C. (2013). Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can. Retrieved 12th Feb 2015, from

Arkin, R. C. (2009). Governing Lethal Behavior in Autonomous Robots. Boca Raton: CRC Press.

Asaro, P. (2009). Modelling the Moral User. IEEE Technology and Society Magazine, 28(1), 20-24. doi: 10.1109/MTS.2009.931863

Boella, G., van der Torre, L., & Verhagen, H. (2008). Introduction to the special issue on normative multi-agent systems. Autonomous Agents and Multi-Agent Systems, 17(1), 1-10. doi: 10.1007/s10458-008-9047-8

Department of Defense. (2012). Directive 3000.09: Autonomy in Weapons Systems. Retrieved 12th Feb 2015, from

Hansard. (2013). Adjournment Debate on Lethal Autonomous Robots. Retrieved 12th Feb 2015, from

Heyns, C. (2013). Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns. Retrieved 16th Feb 2015, from

Holy See. (2014). Statement by H.E. Archbishop Silvano M. Tomasi.   Retrieved Nov 6th, 2014, from$file/Holy+See+MX+LAWS.pdf

Schmitt, M. N., & Thurnher, J. S. (2012). Out of the Loop: Autonomous Weapon Systems and the Law of Armed Conflict. Harv. Nat’l Sec. J., 4, 231.

Sharkey, N. (2009). Death strikes from the sky: the calculus of proportionality. Technology and Society Magazine, IEEE, 28(1), 16-19.

Sharkey, N. (2010). Saying ‘no!’ to lethal autonomous targeting. Journal of Military Ethics, 9(4), 369-383.

Sparrow, R. (2007). Killer Robots. Journal of Applied Philosophy, 24(1), 62-77.

The Campaign to Stop Killer Robots. (2015). Retrieved from

United Nations. (2014). CCW Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS).   Retrieved 23rd Oct, 2014, from


Dr Ronald C Arkin of Georgia Tech, a well-known contributor to current debates on Lethal Autonomous Weapons Systems (LAWS) and to the ongoing debate in the UN on their banning/regulation, will give an IEEE SSIT Distinguished Lecture on Tuesday 31st March from 2–4 pm at the University of Canterbury James Hight Undercroft (in the seminar room closest to the stairs of the main library entrance).

TITLE: Lethal Autonomous Robots and the Plight of the Civilian.

SPEAKER: Dr Ronald C Arkin

ABSTRACT: A recent meeting (May 2014) of the United Nations in Geneva regarding the Convention on Certain Conventional Weapons considered the many issues surrounding the use of lethal autonomous weapons systems from a variety of legal, ethical, operational, and technical perspectives. Over 80 nations were represented and engaged in the discussion. This talk reprises the issues the author broached regarding the role of lethal autonomous robotic systems and warfare, and how if they are developed appropriately they may have the ability to significantly reduce civilian casualties in the battlespace. This can lead to a moral imperative for their use due to the enhanced likelihood of reduced noncombatant deaths. Nonetheless, if the usage of this technology is not properly addressed or is hastily deployed, it can lead to possible dystopian futures. This talk will encourage others to think of ways to approach the issues of restraining lethal autonomous systems from illegal or immoral actions in the context of both International Humanitarian and Human Rights Law, whether through technology or legislation.

BIOGRAPHY: Ronald C. Arkin is Regents’ Professor and Associate Dean for Research in the College of Computing at Georgia Tech. He served as STINT visiting Professor at KTH in Stockholm, Sabbatical Chair at the Sony IDL in Tokyo, and in the Robotics and AI Group at LAAS/CNRS in Toulouse. Dr. Arkin’s research interests include behavior-based control and action-oriented perception for mobile robots and UAVs, deliberative/reactive architectures, robot survivability, multiagent robotics, biorobotics, human-robot interaction, robot ethics, and learning in autonomous systems. Prof. Arkin served on the Board of Governors of the IEEE Society on Social Implications of Technology, the IEEE Robotics and Automation Society (RAS) AdCom, and is a founding co-chair of the IEEE RAS Technical Committee on Robot Ethics. He is a Distinguished Lecturer for the IEEE Society on Social Implications of Technology and a Fellow of the IEEE.

The talk will last for the first hour.

The second hour will be available for questions and answers.