Archives for the month of: April, 2016

In Nature last year, Stuart Russell said the “meaning of meaningful” in the phrase “meaningful human control” was “still to be determined”.

I see five key points for “meaningful” human control of LAWS: the policy loop, Article 36 review, activation, the firing loop and deactivation.


FIGURE 1: Opportunities for “meaningful” human control.

  1. Policy loop. What rules does the LAWS follow when it targets? Who or what (human or machine) initiates, reviews and approves the rules of targeting and engagement? Control the policy the LAWS executes and you control the LAWS. If the LAWS is a Turing machine, it cannot disobey its rule book (see the sketch after this list).
  2. Article 36 review. Having people test that the policy control works, and that it is reliable and predictable, is a form of control.
  3. Activation. Turn the LAWS on. If a human decides to activate, knowing what policy the LAWS follows and being able to foresee the consequences, then this is a form of control.
  4. Firing loop. Having a human “in” or “on” the firing loop to confirm or supervise the LAWS firing decisions in real time is a form of control.
  5. Deactivation. Being able to turn the LAWS off or recall it if it mistargets (that is, if the consequences are not as expected at activation) is a form of control.
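
To make the policy-loop point concrete, here is a minimal sketch in Python of a weapon whose engage decision is a pure lookup in a human-approved rule table. Everything here (the Rule and Laws classes, the rule names) is hypothetical illustration, not any real system’s API; the point is simply that a machine executing a fixed rule book cannot act outside it.

```python
# A toy "rule book" LAWS: whoever controls the policy table controls the
# weapon. All names here are illustrative assumptions, not a real API.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    target_class: str   # e.g. "tank", "fast-attack-craft"
    action: str         # e.g. "engage", "track-only"
    approved_by: str    # the human authority who signed off on the rule

class Laws:
    def __init__(self, policy_book: dict[str, Rule]):
        # The rule book is fixed at activation; the machine cannot amend it.
        self._policy = dict(policy_book)

    def decide(self, detected_class: str) -> str:
        rule = self._policy.get(detected_class)
        # Like a Turing machine with no matching transition, the LAWS
        # halts (holds fire) on anything outside its rule book.
        return rule.action if rule else "hold-fire"

book = {"tank": Rule("tank", "engage", approved_by="targeting board")}
laws = Laws(book)
print(laws.decide("tank"))       # engage
print(laws.decide("ambulance"))  # hold-fire: no rule, no action
```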

How much of the above, and exactly which variants, make up “meaningful” as distinct from “meaningless” control I leave to CCW delegates to figure out.

Firing Loop

Most existing debate centres on the “firing loop” that covers the select and engage functions of a LAWS.

| Label | Select | Confirm/Abort | Engage | Example |
|---|---|---|---|---|
| Remote control | Human | Human confirms | Human | Telepiloted Predator B-1 |
| Human “in the loop” | Robot | Human must confirm | Robot | Patriot |
| Human “on the loop” | Robot | Robot confirms; human can abort | Robot | Phalanx once activated |
| Human “off the loop” | Robot | Robot confirms; human cannot abort | Robot | Anti-tank and naval mines |

TABLE 1: Firing loop – Standard “in, on and off the loop” distinctions
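
The table’s distinctions can be read as a simple decision protocol. The following sketch encodes who gets a veto in each mode; the enum and function names are illustrative choices of my own, not drawn from any real system.

```python
# A toy simulation of Table 1: who selects, who can confirm or abort,
# and whether anything fires, under each firing-loop mode.

from enum import Enum

class LoopMode(Enum):
    REMOTE_CONTROL = "remote control"  # human selects, confirms and engages
    IN_THE_LOOP = "in the loop"        # robot selects, human must confirm
    ON_THE_LOOP = "on the loop"        # robot confirms, human can abort
    OFF_THE_LOOP = "off the loop"      # robot confirms, human cannot abort

def fire_decision(mode: LoopMode, human_confirms: bool, human_aborts: bool) -> bool:
    if mode is LoopMode.REMOTE_CONTROL or mode is LoopMode.IN_THE_LOOP:
        return human_confirms      # nothing fires without a human "yes"
    if mode is LoopMode.ON_THE_LOOP:
        return not human_aborts    # fires unless a human says "no" in time
    return True                    # off the loop: the human has no veto

# A Phalanx-style system, once activated, fires unless the crew aborts:
print(fire_decision(LoopMode.ON_THE_LOOP, human_confirms=False, human_aborts=False))  # True
```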

Policy Loop

There is relatively little definitional effort going into the “policy loop” that determines the rules the LAWS uses when it identifies and selects targets to engage. Control the rule book, control the Turing machine.

While the “firing loop” is about the execution of policy, the “policy loop” is about the definition of policy.

| Label | Targeting rule initiation | Targeting rule review | Targeting rule authorization | Example |
|---|---|---|---|---|
| Human policy control | Human | Human | Human | Mines; Arkin (2009); Sea Hunter? |
| Human “in the policy loop” | AI | Human | Human | ? |
| Human “on the policy loop” | AI | AI | AI authorizes; human can reject | ? |
| Human “off the policy loop” | AI | AI | AI authorizes; human cannot reject | Skynet, VIKI, ARIIA; NorMAS blueprint |

TABLE 2: Policy Loop – “in, on and off the loop” distinctions
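
Read as a workflow, Table 2 says a targeting rule should only enter force once the initiation, review and authorization slots are filled by the right party, with a human veto in every mode except the last. A toy sketch, with the mode names taken from the table and everything else assumed for illustration:

```python
# Table 2 as a workflow: does an AI-initiated targeting rule become live
# policy? Mode entries are (initiates, reviews, authorizes, human_veto).

POLICY_MODES = {
    "human policy control": ("human", "human", "human", True),
    "in the policy loop":   ("ai",    "human", "human", True),
    "on the policy loop":   ("ai",    "ai",    "ai",    True),
    "off the policy loop":  ("ai",    "ai",    "ai",    False),
}

def rule_enters_force(mode: str, human_rejected: bool) -> bool:
    initiates, reviews, authorizes, human_veto = POLICY_MODES[mode]
    if human_veto and human_rejected:
        return False   # a human can still strike the rule down
    return True        # otherwise the rule becomes live policy

# "Off the policy loop": the AI's rule stands even over a human objection.
print(rule_enters_force("off the policy loop", human_rejected=True))  # True
print(rule_enters_force("on the policy loop", human_rejected=True))   # False
```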

There is a clear case for insisting that humans either have direct policy control (i.e. humans initiate, review and approve the lethal policy keyed into the LAWS) or review and approve lethal policy devised by AIs.

A ban proposal couched at this level might actually get up.

Engineers following best practice today cannot even circulate a requirements specification without submitting it to ISO 9001 versioning, review and approval processes. We don’t let humans initiate and execute policy without review and approval. There is no case for letting AIs skip review and approval of their policy inventions either.

Trying to get Arkin-type LAWS banned strikes me as a lost cause. NATO opposition to a ban is firming up: the US, UK and France are firmly in the “no ban” camp, whether through words (UK, France) or deeds (the US has just launched Sea Hunter and is pushing ahead with development of LRASM).

The distinction between “offensive” and “defensive” weapons made in the AI and Robotics open letter is hopeless. Lagertha and Ragnar Lothbrok routinely belt Saxons with their “defensive” shields in Vikings. Aggressively sail an Aegis ship into enemy waters and it will “defend” itself against manned (or unmanned) air and sea attack by firing explosive projectiles at the incoming enemy objects (if a sailor has hit the activate button).

However, there is still a chance that a ban might get up on something like AlphaGo (a “deep reinforcement learning” war-fighting AI) having direct control of lethal actuators and developing lethal policy on the fly in real time, with no human review or approval.

Could robot submarines replace the ageing Collins class?

Sean Welsh, University of Canterbury

The decision to replace Australia’s submarines has been stalled for too long by politicians afraid of the bad media about “dud subs” the Collins class got last century.

Collins class subs deserved criticism in the 1990s. They did not meet Royal Australian Navy (RAN) specifications. But in this century, after much effort, they came good. Though they are expensive, Collins class boats have “sunk” US Navy attack submarines, destroyers and aircraft carriers in exercises.

Now that the Collins class is up for replacement, we have an opportunity to reevaluate our requirements and see what technology might meet them. And just as drones are replacing crewed aircraft in many roles, some military thinkers assume the future of naval war will be increasingly autonomous.

The advantages of autonomy in submarines are similar to those of autonomy in aircraft. Taking the pilot out of the plane means you don’t have to provide oxygen, worry about g-forces or provide bathrooms and meals for long trips.

Taking 40 sailors and 20 torpedoes out of a submarine will do wonders for its range and stealth. Autonomous submarines could be a far cheaper option to meet the RAN’s intelligence, surveillance and reconnaissance (ISR) requirements than crewed submarines.

Submarines do more than sink ships. Naval war is rare but ISR never stops. Before sinking the enemy you must find them and know what they look like. ISR was the original role of drones and remains their primary role today.

Last month, Boeing unveiled a prototype autonomous submarine with long range and high endurance. It has a modular design and could perhaps be adapted to meet RAN ISR requirements.

Boeing is developing a long range autonomous submarine that could have military applications.

Thus, rather than buying 12 crewed submarines to replace the Collins class, perhaps the project could be split: autonomous submarines to meet the ISR requirement, interoperating with a smaller number of crewed submarines to sink the enemy.

Future submarines might even be “carriers” for autonomous and semi-autonomous UAVs (unmanned aerial vehicles) and UUVs (unmanned undersea vehicles).

Keeping people on deck

However, while there may be a role for autonomous submarines in the future of naval warfare, there are some significant limitations to what they can achieve today and in the foreseeable future.

Most of today’s autonomous submarines have short ranges and are designed for very specific missions, such as mine sweeping. They are not designed to sail from Perth to Singapore or Hong Kong, sneak up on enemy ships and submarines and sink them with torpedoes.

Also, while drone aircraft can be controlled from a remote location, telepiloting is not an option for a long range sub at depth.

The very low frequency (VLF) radio transceivers in Western Australia used by the Pentagon to signal “boomers” (nuclear-powered, nuclear-armed submarines) in the Indian Ocean have very low transmission rates: only a few hundred bits per second.

You cannot telepilot a submarine lying below a thermocline in Asian waters from Canberra like you can telepilot a drone flying in Afghanistan with high-bandwidth satellite links from Nevada.
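
Some rough arithmetic makes the bandwidth gap vivid. Assuming a ~300 bit/s VLF channel against a ~1 Mbit/s satellite link (both figures are illustrative assumptions, not published specifications), moving even one modest sensor frame takes the better part of an hour versus under a second:

```python
# Back-of-envelope transfer times: VLF at depth vs. a satellite link.
# Rates are rough illustrative figures, not published specifications.

def transfer_seconds(payload_bytes: int, bits_per_second: float) -> float:
    return payload_bytes * 8 / bits_per_second

frame = 100 * 1024  # one modest 100 KiB sensor frame
print(f"VLF:       {transfer_seconds(frame, 300):,.0f} s")        # ~2,731 s
print(f"Satellite: {transfer_seconds(frame, 1_000_000):,.2f} s")  # ~0.82 s
```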

Contemporary telepiloted semi-autonomous submarines are controlled by physical tethers, basically waterproof network cables, when they dive. This limits range to a few kilometres.

Who’s the captain?

To consider autonomy in the role of sinking the enemy, the RAN would likely want an “ethical governor” to skipper the submarines. This involves a machine making life and death decisions: a “Terminator” as captain so to speak.

This would present a policy challenge for government and a trust issue for the RAN. It would certainly attract protest and raise accountability questions.

On the other hand, at periscope depth, you can telepilot a submarine. To help solve the chronic recruitment problems of the Collins class, the RAN connected them to the internet. If you have a satellite “dongle on the periscope” so the crew can email their loved ones, then theoretically you can telepilot the submarine as well.

That said, if you are sneaking up on an enemy sub and are deep below the waves, you can’t.

Even if you can telepilot, radio emissions directing the sub’s actions above the waves might give away its position to the enemy. Telepiloting is just not as stealthy as radio silence. And stealth is critical to a submarine in war.

Telepiloting also exposes the sub to the operational risks of cyberwarfare and jamming.

There is great technological and political risk in the Future Submarine Project. I don’t think robot submarines can replace crewed submarines but they can augment them and, for some missions, shift risk from vital human crews to more expendable machines.

Ordering nothing but crewed submarines in 2016 might be a bad naval investment.


Sean Welsh, Doctoral Candidate in Robot Ethics, University of Canterbury

This article was originally published on The Conversation. Read the original article.