Archives for category: killer robots

In Nature last year, Stuart Russell said the “meaning of meaningful” in the phrase “meaningful human control” was “still to be determined”.

I see five key points for “meaningful” human control of LAWS: the policy loop, Article 36 review, activation, the firing loop and deactivation.


FIGURE 1: Opportunities for “meaningful” human control.

  1. Policy loop. What rules does the LAWS follow when it targets? Who or what (human or machine) initiates, reviews and approves the rules of targeting and engagement? Control the policy the LAWS executes and you control the LAWS. If the LAWS is a Turing machine, it cannot disobey its rule book (see the sketch after this list).
  2. Article 36 review. Having people test that the policy control works, and that it is reliable and predictable, is a form of control.
  3. Activation. Turn the LAWS on. If a human decides to activate, knowing what policy the LAWS follows and being able to foresee the consequences, then this is a form of control.
  4. Firing loop. Having a human “in” or “on” the firing loop to confirm or supervise the LAWS firing decisions in real time is a form of control.
  5. Deactivation. Being able to turn the LAWS off or recall it, if it mistargets (the consequences are not as expected at activation) is a form of control.
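To make the rule-book point concrete, here is a minimal sketch in Python. Everything in it (Contact, ENGAGEMENT_RULES, decide) is a hypothetical illustration, not any fielded system's logic: the machine's engagement behaviour is exhausted by a lookup table, so whoever writes the table controls the weapon.

```python
# A minimal sketch, assuming a toy rule book. Every name here
# (Contact, ENGAGEMENT_RULES, decide) is hypothetical, not any
# fielded system's logic.
from dataclasses import dataclass

@dataclass
class Contact:
    kind: str      # e.g. "incoming_missile", "vehicle"
    hostile: bool  # classifier verdict, taken as given here

# Written in the policy loop; merely executed in the firing loop.
ENGAGEMENT_RULES = {
    "incoming_missile": "engage",
    "vehicle": "hold_for_human_confirmation",
}

def decide(contact: Contact) -> str:
    """Pure table lookup: the LAWS cannot disobey its rule book."""
    if not contact.hostile:
        return "never_engage"
    return ENGAGEMENT_RULES.get(contact.kind, "never_engage")

print(decide(Contact("incoming_missile", True)))  # engage
print(decide(Contact("person", True)))            # never_engage (no rule)
```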

How much of the above, and exactly which variants, make up “meaningful” as distinct from “meaningless” control I leave to the CCW delegates to figure out.

Firing Loop

Most existing debate centres on the “firing loop” that covers the select and engage functions of a LAWS.

| Label | Select | Confirm / Abort | Engage | Example |
|---|---|---|---|---|
| Remote Control | Human | Human confirms | Human | Telepiloted Predator B-1 |
| Human “in the loop” | Robot | Human must confirm | Robot | Patriot |
| Human “on the loop” | Robot | Robot confirms; human can abort | Robot | Phalanx once activated |
| Human “off the loop” | Robot | Robot confirms; human cannot abort | Robot | Anti-tank and naval mines |

TABLE 1: Firing loop – Standard “in, on and off the loop” distinctions
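As a hedged illustration of Table 1 (the LoopMode labels and the function are my own, not doctrinal terms), the real-time differences between the rows come down to who can gate or veto the shot:

```python
# A sketch of Table 1's distinctions. LoopMode and the argument
# names are illustrative labels of mine, not doctrine.
from enum import Enum

class LoopMode(Enum):
    IN_THE_LOOP = "in"    # robot selects; human must confirm
    ON_THE_LOOP = "on"    # robot selects and confirms; human may abort
    OFF_THE_LOOP = "off"  # robot selects and confirms; no human abort

def may_fire(mode: LoopMode, human_confirmed: bool, human_aborted: bool) -> bool:
    if mode is LoopMode.IN_THE_LOOP:
        return human_confirmed    # no confirmation, no shot
    if mode is LoopMode.ON_THE_LOOP:
        return not human_aborted  # fires unless vetoed in time
    return True                   # off the loop: no real-time control

assert may_fire(LoopMode.IN_THE_LOOP, human_confirmed=False, human_aborted=False) is False
assert may_fire(LoopMode.ON_THE_LOOP, human_confirmed=False, human_aborted=False) is True
```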

Policy Loop

There is relatively little definitional effort going into the “policy loop” that determines the rules the LAWS uses when it identifies and selects targets to engage. Control the rule book, control the Turing machine.

While the “firing loop” is about the execution of policy, the “policy loop” is about the definition of policy.

| Label | Targeting Rule Initiation | Targeting Rule Review | Targeting Rule Authorization | Example |
|---|---|---|---|---|
| Human Policy Control | Human | Human | Human | Mines; Arkin (2009); Sea Hunter? |
| Human “in the policy loop” | AI | Human | Human | ? |
| Human “on the policy loop” | AI | AI | AI authorizes; human can reject | ? |
| Human “off the policy loop” | AI | AI | AI authorizes; human cannot reject | Skynet, VIKI, ARIIA; NorMAS blueprint |

TABLE 2: Policy Loop – “in, on and off the loop” distinctions
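A sketch of Table 2’s rows in the same vein; the field names and the control test below are one possible reading of the table, not settled CCW terminology:

```python
# A sketch of Table 2's policy-loop modes. Field names and the
# "human controls policy" test are one possible reading, not doctrine.
from dataclasses import dataclass

@dataclass
class PolicyLoop:
    initiates: str    # "human" or "ai"
    reviews: str
    authorizes: str
    human_veto: bool  # can a human reject the authorized rule?

MODES = {
    "human_policy_control": PolicyLoop("human", "human", "human", True),
    "in_the_policy_loop":   PolicyLoop("ai", "human", "human", True),
    "on_the_policy_loop":   PolicyLoop("ai", "ai", "ai", True),
    "off_the_policy_loop":  PolicyLoop("ai", "ai", "ai", False),
}

def human_controls_policy(m: PolicyLoop) -> bool:
    # One reading: control survives as long as a human authorizes the
    # rule, or can at least reject it before it is keyed into the LAWS.
    return m.authorizes == "human" or m.human_veto

for name, mode in MODES.items():
    print(name, human_controls_policy(mode))  # only "off..." prints False
```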

There is a clear case for insisting either that humans have direct policy control (i.e. humans initiate, review and approve the lethal policy keyed into the LAWS) or that humans review and approve any lethal policy devised by AIs.

A ban proposal couched at this level might actually get up.

Engineers following best practice today cannot even circulate a requirements specification without submitting it to ISO 9001 versioning, review and approval processes. We don’t let humans initiate and execute policy without review and approval. There is no case for letting AIs skip review and approval of their policy inventions either.
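Here is a sketch of that review-and-approval gate, with hypothetical names throughout; it mirrors ISO 9001-style document control rather than implementing the standard. An AI-proposed targeting rule is blocked from deployment until named humans have reviewed and approved it.

```python
# A sketch of a human review/approval gate for AI-proposed targeting
# rules. All identifiers are hypothetical; this mirrors ISO 9001-style
# document control, it does not implement the standard.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TargetingRule:
    version: int
    text: str
    reviewed_by: Optional[str] = None
    approved_by: Optional[str] = None

def deploy(rule: TargetingRule) -> None:
    if rule.reviewed_by is None or rule.approved_by is None:
        raise PermissionError(
            f"targeting rule v{rule.version} lacks human review/approval")
    print(f"targeting rule v{rule.version} deployed")

rule = TargetingRule(version=1, text="engage incoming_missile")  # AI-proposed
# deploy(rule)  # would raise: no human has signed off yet
rule.reviewed_by = "reviewer_a"
rule.approved_by = "approver_b"
deploy(rule)     # now permitted
```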

Trying to get Arkin-type LAWS banned strikes me as a lost cause. Opposition among the NATO powers is firming up: the US, UK and France are firmly in the “no ban” camp, whether through words (UK, France) or deeds (the US just launched Sea Hunter and is pressing ahead with development of LRASM).

The distinction between “offensive” and “defensive” weapons made in the AI and Robotics open letter is hopeless. Lagertha and Ragnar Lothbrok routinely belt Saxons with their “defensive” shields in Vikings… Aggressively sail an Aegis ship into enemy waters and it will “defend” itself against manned (or unmanned) air and sea attack by firing explosive projectiles at the incoming enemy objects (if a sailor hits the activate button).

However, there is still a chance that a ban on something like AlphaGo (a “deep reinforcement learning” war-fighting AI) having direct control of lethal actuators, developing lethal policy on the fly in real time with no human review or approval, might get up.


The Professor’s visit got quite a bit of media coverage. Some links here.

Mike Grimshaw Newstalk ZB (Radio)

Idealog

Sydney Morning Herald

NZ Herald

3 News (NZ)

Yahoo! NZ News

Voxy

Scoop NZ

It’s been quite a while since I dealt with media in my capacity as advisor to Warren Entsch … but it’s a bit like riding a bike.

Once you learn, you don’t forget…

Dr Ronald C Arkin of Georgia Tech, a well-known contributor to current debates on Lethal Autonomous Weapons Systems (LAWS) and to the ongoing UN debate on their banning / regulation, will give an IEEE SSIT Distinguished Lecture on Tuesday 31st March from 2–4 pm at the University of Canterbury James Hight Undercroft (in the seminar room closest to the stairs of the main library entrance).

TITLE: Lethal Autonomous Robots and the Plight of the Civilian.

SPEAKER: Dr Ronald C Arkin

ABSTRACT: A recent meeting (May 2014) of the United Nations in Geneva regarding the Convention on Certain Conventional Weapons considered the many issues surrounding the use of lethal autonomous weapons systems from a variety of legal, ethical, operational, and technical perspectives. Over 80 nations were represented and engaged in the discussion. This talk reprises the issues the author broached regarding the role of lethal autonomous robotic systems and warfare, and how if they are developed appropriately they may have the ability to significantly reduce civilian casualties in the battlespace. This can lead to a moral imperative for their use due to the enhanced likelihood of reduced noncombatant deaths. Nonetheless, if the usage of this technology is not properly addressed or is hastily deployed, it can lead to possible dystopian futures. This talk will encourage others to think of ways to approach the issues of restraining lethal autonomous systems from illegal or immoral actions in the context of both International Humanitarian and Human Rights Law, whether through technology or legislation.

BIOGRAPHY: Ronald C. Arkin is Regents’ Professor and Associate Dean for Research in the College of Computing at Georgia Tech. He served as STINT visiting Professor at KTH in Stockholm, Sabbatical Chair at the Sony IDL in Tokyo, and in the Robotics and AI Group at LAAS/CNRS in Toulouse. Dr. Arkin’s research interests include behavior-based control and action-oriented perception for mobile robots and UAVs, deliberative / reactive architectures, robot survivability, multiagent robotics, biorobotics, human-robot interaction, robot ethics, and learning in autonomous systems. Prof. Arkin served on the Board of Governors of the IEEE Society on Social Implications of Technology, the IEEE Robotics and Automation Society (RAS) AdCom, and is a founding co-chair of the IEEE RAS Technical Committee on Robot Ethics. He is a Distinguished Lecturer for the IEEE Society on Social Implications of Technology and a Fellow of the IEEE.

The talk will last for the first hour; the second hour will be available for questions and answers.