
There is nothing “new” in this “new report” apart from yet another synonym for “killer robot” to add to an already over-long list that includes lethal autonomous robot, lethal autonomous weapons system, unmanned weapons system, autonomous weapons system and autonomous weapon. There are myriad others. We now have “fully autonomous weapon” to add as well.

I’ll stick to the term lethal autonomous weapons system (LAWS) mainly because that is what the diplomats attending the Expert Meeting on the Convention on Certain Conventional Weapons used last year. And that is the term they are using this year.

LAWS is a sensible term that is neither “emotive” (Heyns, 2013) nor an “insidious rhetorical trick” (Lokhorst & van den Hoven, 2011). It covers complex distributed weapons systems that are actually fielded, that have multiple integrated components, and that are likely to evolve into “off the loop” LAWS and, in the absence of regulation, from that point into “beyond the loop” weapons systems that might have “machine learning” and “genetic algorithms” that “evolve” and “adapt” and might indeed turn into Skynet in due course.

Walking, talking, human-scale, titanium-skulled killer robots with beady red eyes are not actually fielded by anybody yet except by James Cameron in his Terminator flicks. But they are scarier, and the hope of the Scare Campaign is that fright will make right.

Indeed, this kind of tabloid trash “argument” might get a headline, but to persuade an audience of diplomats, who are very bright and very sharp, the calibre of the argument needs to be far better than the vague and recycled confusions of Mind the Gap.

The report makes various points about “the lack of accountability for killer robots,” all of which have been made before. The two-word solution to the “problem” of “killer robot accountability” would be “strict liability,” as suggested by the Swedish delegation (among others) last year.

Scare campaigners, please put that in your draft Protocol VI of the CCW.

Better still, how about you actually draft a Protocol VI and put it out for discussion?

Clarify what exactly it is that you want.

Mind the Gap does have some mildly original confusion about the meaning of “autonomous” and some spectacular question begging to accompany the well-worn rhetorical tricks.

Line 1:

Fully autonomous weapons, also known as “killer robots,” raise serious moral and legal concerns because they would possess the ability to select and engage their targets without meaningful human control.

Whoa!

So we open with the customary “emotive” and “insidious” tabloid language of “killer robots,” we use the recycled and as-yet-undefined term “meaningful human control,” and we blithely assert that fully autonomous weapons (whatever that means) do not have meaningful human control (whatever that means). We beg and blur the decisive question right from the start.

Later in the paper, “fully autonomous weapons” are defined as human “off the loop” weapons, as distinct from “in the loop” and “on the loop” weapons. This assumes that a strictly causal, human-programmed artefact making delegated decisions on the basis of objective sensor data, according to human-defined policy norms, is not in any sense under “meaningful human control.”

Much confusion is added by careless “personification” of machines. Consider this line:

On the one hand, while traditional weapons are tools in the hands of human beings, fully autonomous weapons, once deployed, would make their own determinations about the use of lethal force.

The language of “their own determinations” suggests there is some cognitive element in the programmed machine that is not a human-defined instruction. There is no “I” in the robot. It has no values on the basis of which it can make choices.

Line 2:

Many people question whether the decision to kill a human being should be left to a machine.

People in real wars have been leaving the decision to kill human beings to machines since 1864 and probably earlier. The Union lost several men to Confederate “torpedoes” (landmines) on Dec 13th, 1864, in the storming of Fort McAllister at the end of Sherman’s infamous March to the Sea. Militaries continue to delegate lethal decisions to machines by fielding anti-tank and anti-ship mines, which remain lawful “off the loop” weapons.

Line 2 is actually a very fair question, and one worthy of deeper analysis which, alas, you will not find in Mind the Gap. How exactly a “decision” differs from, say, a “reaction” and a “choice” (as defined in the Summa Theologica) is a deep and interesting philosophical question.

Moving on.

Fully autonomous weapons are weapons systems that would select and engage targets without meaningful human control. They are also known as killer robots or lethal autonomous weapons systems. Because of their full autonomy, they would have no “human in the loop” to direct their use of force and thus would represent the step beyond current remote-controlled drones.

The tacit assumption here is that the human “in the loop” will guarantee better human rights outcomes. “Meaningful human control” gave us the Somme, the Holocaust and the Rwandan Genocide. Frankly, I am not automatically signed on to this assumed Nirvana of “meaningful human control.”

Meaningful legal control is far more reassuring. And if a programmed robot can be engineered to do this better than the amygdalas of 18- to 25-year-old males with testosterone and cortisol pulsing through their blood-brain interfaces, then I do not (as yet) see compelling reasons why such R & D possibilities should be “comprehensively and pre-emptively” banned, especially on the basis of a conceptually muddled scare campaign expressed in tabloid language.

References

Heyns, C. (2013). Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns. Retrieved 16 February 2015, from http://www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A-HRC-23-47_en.pdf

Lokhorst, G.-J., & van den Hoven, J. (2011). Responsibility for military robots. In P. Lin, K. Abney, & G. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics (pp. 145-156). Cambridge, MA: MIT Press.

The Professor’s visit got quite a bit of media coverage. Some links here.

Mike Grimshaw Newstalk ZB (Radio)

Idealog

Sydney Morning Herald

NZ Herald

3 News (NZ)

Yahoo! NZ News

Voxy

Scoop NZ

It’s been quite a while since I dealt with media in my capacity as advisor to Warren Entsch … but it’s a bit like riding a bike.

Once you learn, you don’t forget…