Driverless cars still need to ‘learn’ how to drive on our roads, especially at busy junctions.
Shutterstock/Karsten Neglia

Sean Welsh, University of Canterbury

Learning how to drive is an ongoing process for us humans as we adapt to new situations, new road rules and new technology, and learn the lessons from when things go wrong.

But how does a driverless car learn how to drive, especially when something goes wrong?

That’s the question being asked of Uber after last month’s crash in Arizona. Two of its engineers were inside when one of its autonomous vehicles spun 180 degrees and flipped onto its side.

Uber pulled its test fleet off the road pending police enquiries, and a few days later the vehicles were back on the road.

Smack, spin, flip

The Tempe Police Department’s report on the investigation into the crash, obtained by the EE Times, details what happened.

The report says that the Uber Volvo (red in the graphic below) was moving south at 38mph (61km/h) in a 40mph (64km/h) zone when it collided with the Honda (blue in the graphic) turning west into a side street (point 1).

Uber crash – initial collisions.
Alex Hanlon / Sean Welsh based on Tempe Police report

Knocked off course, the Uber Volvo hit the traffic light at the corner (point 2) and then spun and flipped, damaging two other vehicles (points 3 and 4) before sliding to a stop on its side (point 5).

Uber crash – subsequent collisions.
Alex Hanlon / Sean Welsh based on Tempe Police report

Thankfully, no one was hurt. The police determined that the Honda driver “failed to yield” (give way) and issued a ticket. The Uber car was not at fault.

Questions, questions

But Mike Demler, an analyst with the Linley Group technology consultancy, told the EE Times that the Uber car could have done better:

It is totally careless and stupid to proceed at 38mph through a blind intersection.

Demler said that Uber needs to explain why its vehicle proceeded through the intersection at just under the speed limit when it could “see” that traffic had come to a stop in the middle and leftmost lanes.

The EE Times report said that Uber had “fallen silent” on the incident. But as Uber uses “deep learning” to control its autonomous cars, it’s not clear that Uber could answer Demler’s query even if it wanted to.

In deep learning, the actual code that would make the decision not to slow down would be a complex state in a neural network, not a line of code prescribing a simple rule like “if vision is obstructed at intersection, slow down”.
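To see the contrast concretely, compare a hand-written rule with a learned policy. This is a toy sketch, not Uber’s code; the function and variable names are mine.

```python
# Toy sketch, not Uber's code: contrasting an explicit rule with a learned policy.

def rule_based_speed(obstructed_view: bool, current_speed_mph: float) -> float:
    """Hand-written rule: if vision is obstructed at an intersection, slow down."""
    if obstructed_view:
        return min(current_speed_mph, 15.0)  # crawl through the blind intersection
    return current_speed_mph

def learned_speed(sensor_features, policy_network) -> float:
    """Learned policy: the 'rule' is implicit in the trained weights.

    policy_network stands in for a deep neural network. There is no single
    line of code to point at when asking why it chose 38 mph here.
    """
    return policy_network(sensor_features)
```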

Debugging deep learning

The case raises a deep technical issue. How do you debug an autonomous vehicle control system that is based on deep learning? How do you reduce the risk of autonomous cars getting smashed and flipped when humans driving alongside them make bad judgements?

Demler’s point is that the Uber car had not “learned” to slow down as a prudent precautionary measure at an intersection with obstructed lines of sight. Most human drivers would naturally be wary and slow down when approaching an intersection where stationary cars block their view.

Deep reinforcement learning relies on “value functions” to evaluate the states that result from the application of policies.

A value function is a number that evaluates a state. In chess, a strong opening move by white such as pawn e2 to e4 attracts a high value. A weak opening such as pawn a2 to a3 attracts a low one.

The value function can be like “ouch” for computers. Reinforcement learning gets its name from positive and negative reinforcement in psychology.

Until the Uber vehicle hits something and the value function of the deep learning records the digital equivalent of “following that policy led to a bad state – on side, smashed up and facing wrong way – ouch!” the Uber control system might not quantify the risk appropriately.
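To make the “ouch” idea concrete, here is a toy Q-learning sketch (Q-learning works with action values, a standard variant of the value functions described above). It is not Uber’s control stack; the states, actions and reward numbers are invented for illustration.

```python
# Toy Q-learning sketch of the "ouch" signal. States, actions and reward
# numbers are invented; this is not Uber's control system.

q_value = {}  # (state, action) -> estimated value

def update(state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """One learning step: bad outcomes pull down the value of the policy that led there."""
    best_next = max(q_value.get((next_state, a), 0.0) for a in actions)
    old = q_value.get((state, action), 0.0)
    q_value[(state, action)] = old + alpha * (reward + gamma * best_next - old)

actions = ["maintain_38mph", "slow_right_down"]

# Before the crash: proceeding at speed led to nothing bad, so its value stays flat.
update("obstructed_intersection", "maintain_38mph", reward=0.0,
       next_state="cleared_intersection", actions=actions)

# After the crash: a large negative reward -- the digital "ouch" -- devalues that policy.
update("obstructed_intersection", "maintain_38mph", reward=-100.0,
       next_state="on_side_smashed_up", actions=actions)

print(q_value[("obstructed_intersection", "maintain_38mph")])  # now clearly negative
```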

Having now hit something it will, hopefully, have learned its lesson at the school of hard knocks. In future, Uber cars should do better at similar intersections with similar traffic conditions.

Debugging formal logic

An alternative to deep learning is autonomous vehicles using explicitly stated rules expressed in formal logic.

This is being developed by nuTonomy, which is running an autonomous taxi pilot in cooperation with authorities in Singapore.

NuTonomy’s approach to controlling autonomous vehicles is based on a rules hierarchy. Top priority goes to rules such as “don’t hit pedestrians”, followed by “don’t hit other vehicles” and “don’t hit objects”.

Rules such as “maintain speed when safe” and “don’t cross the centreline” get a lower priority, while rules such as “give a comfortable ride” are the first to be broken when an emergency arises.
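To picture how such a hierarchy might work in code, here is a hypothetical sketch. The rule names echo those above, but the priorities, the cost function and the code are mine, not nuTonomy’s.

```python
# Hypothetical sketch of a prioritised rule hierarchy. The rule names echo the
# article; the priorities and the cost function are mine, not nuTonomy's.

RULES = {
    "don't hit pedestrians":     100,
    "don't hit other vehicles":   90,
    "don't hit objects":          80,
    "don't cross the centreline": 50,
    "maintain speed when safe":   40,
    "give a comfortable ride":    10,   # first to be broken in an emergency
}

def choose_manoeuvre(candidates):
    """Pick the manoeuvre that violates only the lowest-priority rules.

    candidates: list of (name, set_of_rules_the_manoeuvre_would_violate)
    """
    def cost(violated):
        return sum(RULES[rule] for rule in violated)
    return min(candidates, key=lambda c: cost(c[1]))

# Braking hard and swerving breaks comfort and the centreline rule, but staying
# the course would hit another vehicle, so the swerve wins.
print(choose_manoeuvre([
    ("stay course",           {"don't hit other vehicles"}),
    ("brake hard and swerve", {"give a comfortable ride", "don't cross the centreline"}),
]))
```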

While NuTonomy does use machine learning for many things, it does not use it for normative control: deciding what a car ought to do.

In October last year, a NuTonomy test vehicle was involved in an accident: a low-speed tap resulting in a dent, not a spin and flip.

The company’s chief operating officer Doug Parker told IEEE Spectrum:

What you want is to be able to go back and say, “Did our car do the right thing in that situation, and if it didn’t, why didn’t it make the right decision?” With formal logic, it’s very easy.

Key advantages of formal logic are provable correctness and relative ease of debugging. Debugging machine learning is trickier. On the other hand, with machine learning, you do not need to code complex hierarchies of rules.

Time will tell which is the better approach to driving lessons for driverless cars. For now, both systems still have much to learn.

Sean Welsh, Doctoral Candidate in Robot Ethics, University of Canterbury

This article was originally published on The Conversation. Read the original article.


Are we ready for Robotopia, when robots replace the human workforce?

Sean Welsh, University of Canterbury

Automation has disrupted work for centuries. Two hundred years ago in Britain, the Luddites rose in rebellion, smashing the machines that made their weaving skills obsolete.

Today it’s high status cognitive jobs that are under threat. Earlier this year ROSS, a legal version of IBM’s Watson, was launched and hailed as the first artificially intelligent lawyer. Future iterations may put lawyers out of work.

An artificial intelligence (AI) outperformed an air force colonel in a combat simulation, and a robot outperformed human surgeons in stitching up a pig.

Manual jobs continue to disappear. Truckers, bus drivers and taxi drivers are threatened by self-driving vehicles. The Baxter robot threatens warehouse and labouring jobs while Hadrian X threatens bricklaying.

Payback time on robots is shorter than ever, with 47% of US jobs, 69% of Indian jobs and 77% of Chinese jobs vulnerable to automation.

Historically, capitalism has succeeded in generating new jobs to replace the old but past performance is not necessarily a guide to future performance.

While some argue new jobs will be created to replace the jobs lost to automata, many fear economies will be disrupted as never before. Sober professors of computer science and business analysts now routinely predict massive job losses.

If we grant, for the sake of argument, the premise that massive technological unemployment is plausible, how will society cope?

The future is workless

In his newly released book, Why the Future is Workless, author Tim Dunlop accepts the demise of jobs as inevitable. Thus, he says, we must rethink our jobs-based economy.

Not only that, we have to rethink job-centric human values. Currently our purpose and status in society derive mostly from our paid work. In a world where robots work better, how will humans cope?

It is easy to imagine a dystopian future of increasing wealth inequality, where those with robots live in gated communities and those without live in low-tech badlands. A revolt of colonels leading bot-breaking bricklayers is not unimaginable.

How will society migrate from an economy based on human labour to one based on robot labour, without riots and revolts?

Money for nothing

Dunlop, like many from the left, the right and the tech elite, thinks a universal basic income (UBI) policy is required to handle the transition.

UBI is a no-strings-attached, non-means-tested social dividend. All citizens get one to compensate for being shut out of the means of privatised production.

The political philosopher and writer Thomas Paine defended UBI as a moral quid pro quo for private property.

In the state of nature, humans can forage for their food from the Earth. In a privatised world this natural right is thwarted, so property owners owe society an inalienable rent sufficient to cover people’s basic needs.

UBI could be funded by a land or property tax, a sovereign wealth fund, a tax on automata or a mix of measures. Such a fiscal revolution would be a steep political challenge.

No major party supported UBI in this June’s referendum in Switzerland. Even so, the Yes vote got 23% support. Supporting No, the Swiss government pointed to the moral hazard of making work optional. They also pointed to cost.

Paying UBI at Australia’s Newstart Allowance levels (about A$13,000 p.a.) to all 24 million Australians with no age conditions would cost A$312 billion. Current Federal tax receipts are A$383 billion of which A$158 billion is spent on social security and welfare.

Australian Budget Expenses.
Australian Treasury

Even assuming UBI replaces all other welfare and social security payments, it requires doubling the social security budget. Eliminating the administrative overhead of means-testing by cutting the 30,000 staff and related expenses in Human and Social Services could only save A$5 billion.

Making UBI less universal by restricting it to Australians of working age would save A$106 billion, bringing the cost of UBI down to A$206 billion: still a huge challenge in a climate of “budget repair”.
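The arithmetic behind those figures is easy to check:

```python
# Quick check of the figures quoted above (all amounts in Australian dollars).

ubi_per_person = 13_000                 # roughly the annual Newstart Allowance
population = 24_000_000                 # all Australians, no age conditions

universal_cost = ubi_per_person * population
print(f"{universal_cost:,}")            # 312,000,000,000 -> A$312 billion

working_age_saving = 106_000_000_000    # saving from excluding non-working-age Australians
print(f"{universal_cost - working_age_saving:,}")  # 206,000,000,000 -> A$206 billion
```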

More research is needed

While fiscally daunting, UBI could have positive effects. UBI might encourage more innovation and entrepreneurial activity from people freed from wage dependence. It could reduce stress and improve mental health.

If everyone got UBI it would be free of the stigma of the dole. UBI would recognise the value of unpaid work such as volunteering and stay-at-home parenting.

Some say UBI would be a “bad utopia” preserving capitalism but it might actualise Marx’s 1845 vision of a society where one might “hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner” as one liked. People could live much like slaveholders of the antebellum South but with robots instead of enslaved humans doing the work.

Certainly, we need to continue the conversation about the threats and opportunities of mass technological unemployment and do more research into UBI. If Robotopia is likely, how will we live our lives and find meaning in a workless world?

The Conversation

Sean Welsh, Doctoral Candidate in Robot Ethics, University of Canterbury

This article was originally published on The Conversation. Read the original article.

In Nature last year, Stuart Russell said the “meaning of meaningful” in the phrase “meaningful human control” was “still to be determined”.

I see five key points for “meaningful” human control of LAWS: the policy loop, Article 36 review, activation, the firing loop and deactivation.


FIGURE 1: Opportunities for “meaningful” human control.

  1. Policy loop. What rules does the LAWS follow when it targets? Who or what (human or machine) initiates, reviews and approves the rules of targeting and engagement? Control the policy the LAWS executes and you control the LAWS. If the LAWS is a Turing machine, it cannot disobey its rule book.
  2. Article 36 Review. Having people test that the policy control works, and that it is reliable and predictable, is a form of control.
  3. Activation. Turn the LAWS on. If a human decides to activate, knowing what policy the LAWS follows and being able to foresee the consequences, then this is a form of control.
  4. Firing loop. Having a human “in” or “on” the firing loop to confirm or supervise the LAWS firing decisions in real time is a form of control.
  5. Deactivation. Being able to turn the LAWS off or recall it, if it mistargets (the consequences are not as expected at activation) is a form of control.

How much of the above, and exactly which variants, make up “meaningful” as distinct from “meaningless” I leave to CCW delegates to figure out.

Firing Loop

Most existing debate centres on the “firing loop” that covers the select and engage functions of a LAWS.

Label | Select | Confirm/Abort | Engage | Example
Remote Control | Human | Human confirms | Human | Telepiloted Predator B-1
Human “in the loop” | Robot | Human must confirm | Robot | Patriot
Human “on the loop” | Robot | Robot confirms; human can abort | Robot | Phalanx (once activated)
Human “off the loop” | Robot | Robot confirms; human cannot abort | Robot | Anti-tank and naval mines

TABLE 1: Firing loop – Standard “in, on and off the loop” distinctions
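For readers who think in code, the distinctions in Table 1 can be expressed as control flow around a single firing decision. The sketch below is purely illustrative; no fielded system reduces to a few lines of Python, and the mode names simply mirror the table.

```python
# Illustrative only: the distinctions in Table 1 as control flow around a single
# firing decision. No fielded system reduces to a few lines of Python.

from enum import Enum

class Mode(Enum):
    REMOTE_CONTROL = "human selects, confirms and engages"
    IN_THE_LOOP    = "robot selects, human must confirm"
    ON_THE_LOOP    = "robot selects and confirms, human can abort"
    OFF_THE_LOOP   = "robot selects and confirms, human cannot abort"

def engages(mode: Mode, human_confirms: bool, human_aborts: bool) -> bool:
    """Return True if the weapon engages the target it has selected."""
    if mode is Mode.REMOTE_CONTROL or mode is Mode.IN_THE_LOOP:
        return human_confirms            # Patriot-style: a human must act to fire
    if mode is Mode.ON_THE_LOOP:
        return not human_aborts          # Phalanx-style: a human can only veto
    return True                          # mine-style: no real-time human role
```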

Policy Loop

There is relatively little definitional effort going into the “policy loop” that determines the rules the LAWS uses when it identifies and selects targets to engage. Control the rule book, control the Turing machine.

While the “firing loop” is about the execution of policy, the “policy loop” is about the definition of policy.

Label | Targeting Rule Initiation | Targeting Rule Review | Targeting Rule Authorization | Example
Human Policy Control | Human | Human | Human | Mines; Arkin (2009); Sea Hunter?
Human “in the policy loop” | AI | Human | Human | ?
Human “on the policy loop” | AI | AI | AI authorizes; human can reject | ?
Human “off the policy loop” | AI | AI | AI authorizes; human cannot reject | Skynet, VIKI, ARIIA; NorMAS blueprint

TABLE 2: Policy Loop – “in, on and off the loop” distinctions

There is a clear case for insisting on humans either having direct policy control (i.e. humans initiate, review and approve lethal policy) keyed into LAWS or having humans review and approve lethal policy devised by AIs.

A ban proposal couched at this level might actually get up.

Engineers following best practice today cannot even circulate a requirements specification without submitting it to ISO 9001 versioning, review and approval processes. We don’t let humans initiate and execute policy without review and approval. There is no case for letting AIs skip review and approval of their policy inventions either.
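In software terms, “human in the policy loop” is just a sign-off gate. The sketch below is hypothetical (the class and field names are mine), but it shows how an AI-initiated targeting rule could be kept inert until a human reviews and approves it.

```python
# Hypothetical sketch of "human in the policy loop": an AI-initiated targeting
# rule stays inert until a human reviews and approves it. Names are invented.

from dataclasses import dataclass
from typing import Optional

@dataclass
class TargetingRule:
    text: str
    initiated_by: str                  # "AI" or "Human"
    reviewed_by: Optional[str] = None
    approved_by: Optional[str] = None

    def in_force(self) -> bool:
        # Control the rule book, control the Turing machine: nothing the
        # weapon executes should have skipped human review and approval.
        return self.reviewed_by is not None and self.approved_by is not None

rule = TargetingRule("engage armoured vehicles inside grid X", initiated_by="AI")
assert not rule.in_force()             # inert until signed off
rule.reviewed_by = "legal officer"
rule.approved_by = "commander"
assert rule.in_force()
```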

Trying to get Arkin-type LAWS banned strikes me as a lost cause. NATO opposition to a ban is firming up: the US, UK and France are firmly in the “no ban” camp, whether through words (UK, France) or deeds (the US has just launched Sea Hunter and is pushing ahead with development of LRASM).

The distinction between “offensive” and “defensive” weapons made in the AI and Robotics open letter is hopeless. Lagertha and Ragnar Lothbrok routinely belt Saxons with their “defensive” shields in Vikings… Aggressively sail an Aegis ship into enemy waters and it will “defend” itself against manned (or unmanned) air and sea attack by attacking the incoming enemy objects with explosive projectiles (if a sailor hits the activate button).

However, there is still a chance that a ban on something like AlphaGo (a “deep reinforcement learning” war fighting AI) having direct control of lethal actuators with real time lethal policy development on the fly (with no human review or approval) might get up.

Could robot submarines replace the ageing Collins class?

Sean Welsh, University of Canterbury

The decision to replace Australia’s submarines has been stalled for too long by politicians afraid of the bad press the Collins class attracted as “dud subs” last century.

Collins class subs deserved criticism in the 1990s. They did not meet Royal Australian Navy (RAN) specifications. But in this century, after much effort, they came good. Though they are expensive, Collins class boats have “sunk” US Navy attack submarines, destroyers and aircraft carriers in exercises.

Now that the Collins class is up for replacement, we have an opportunity to reevaluate our requirements and see what technology might meet them. And just as drones are replacing crewed aircraft in many roles, some military thinkers assume the future of naval war will be increasingly autonomous.

The advantages of autonomy in submarines are similar to those of autonomy in aircraft. Taking the pilot out of the plane means you don’t have to provide oxygen, worry about g-forces or provide bathrooms and meals for long trips.

Taking 40 sailors and 20 torpedoes out of a submarine will do wonders for its range and stealth. Autonomous submarines could be a far cheaper option to meet the RAN’s intelligence, surveillance and reconnaissance (ISR) requirements than crewed submarines.

Submarines do more than sink ships. Naval war is rare but ISR never stops. Before sinking the enemy you must find them and know what they look like. ISR was the original role of drones and remains their primary role today.

Last month, Boeing unveiled a prototype autonomous submarine with long range and high endurance. It has a modular design and could perhaps be adapted to meet RAN ISR requirements.

Boeing is developing a long range autonomous submarine that could have military applications.

Thus, rather than buy 12 crewed submarines to replace the Collins class, perhaps the project could be split: autonomous submarines could meet the ISR requirement while interoperating with a smaller number of crewed submarines whose job is to sink the enemy.

Future submarines might even be “carriers” for autonomous and semi-autonomous UAVs (unmanned aerial vehicles) and UUVs (unmanned undersea vehicles).

Keeping people on deck

However, while there may be a role for autonomous submarines in the future of naval warfare, there are some significant limitations to what they can achieve today and in the foreseeable future.

Most of today’s autonomous submarines have short ranges and are designed for very specific missions, such as mine sweeping. They are not designed to sail from Perth to Singapore or Hong Kong, sneak up on enemy ships and submarines and sink them with torpedoes.

Also, while drone aircraft can be controlled from a remote location, telepiloting is not an option for a long range sub at depth.

The very low frequency radio transceivers in Western Australia used by the Pentagon to signal “boomers” (nuclear-powered, nuclear-armed submarines) in the Indian Ocean have very low transmission rates: only a few hundred bytes per second.

You cannot telepilot a submarine lying below a thermocline in Asian waters from Canberra like you can telepilot a drone flying in Afghanistan with high-bandwidth satellite links from Nevada.
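A rough, back-of-the-envelope comparison shows why. The numbers below are assumptions for illustration only, not published link budgets.

```python
# Back-of-the-envelope only; the figures are assumptions for illustration,
# not published link budgets.

vlf_rate_bytes_per_s = 300         # "a few hundred bytes per second"
compressed_frame_bytes = 50_000    # one heavily compressed camera frame (assumed)

print(compressed_frame_bytes / vlf_rate_bytes_per_s)   # ~167 s to send a single frame

drone_link_bytes_per_s = 5_000_000 / 8   # ~5 Mbit/s satellite link (assumed)
print(drone_link_bytes_per_s / vlf_rate_bytes_per_s)   # ~2,083 times the VLF rate
```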

Contemporary telepiloted semi-autonomous submarines are controlled by physical tethers, basically waterproof network cables, when they dive. This limits range to a few kilometres.

Who’s the captain?

To consider autonomy in the role of sinking the enemy, the RAN would likely want an “ethical governor” to skipper the submarines. This involves a machine making life and death decisions: a “Terminator” as captain so to speak.

This would present a policy challenge for government and a trust issue for the RAN. It would certainly attract protest and raise accountability questions.

On the other hand, at periscope depth, you can telepilot a submarine. To help solve the chronic recruitment problems of the Collins class, the RAN connected them to the internet. If you have a satellite “dongle on the periscope” so the crew can email their loved ones, then theoretically you can telepilot the submarine as well.

That said, if you are sneaking up on an enemy sub and are deep below the waves, you can’t.

Even if you can telepilot, radio emissions directing the sub’s actions above the waves might give away its position to the enemy. Telepiloting is just not as stealthy as radio silence. And stealth is critical to a submarine in war.

Telepiloting also exposes the sub to the operational risks of cyberwarfare and jamming.

There is great technological and political risk in the Future Submarine Project. I don’t think robot submarines can replace crewed submarines but they can augment them and, for some missions, shift risk from vital human crews to more expendable machines.

Ordering nothing but crewed submarines in 2016 might be a bad naval investment.

The Conversation

Sean Welsh, Doctoral Candidate in Robot Ethics, University of Canterbury

This article was originally published on The Conversation. Read the original article.

There is nothing “new” in this “new report” apart from yet another synonym for “killer robot” to add to an already over-long list that includes lethal autonomous robot, lethal autonomous weapons system, unmanned weapons system, autonomous weapons system and autonomous weapon. There are myriad others. We now have “fully autonomous weapon” to add as well.

I’ll stick to the term lethal autonomous weapons system (LAWS) mainly because that is what the diplomats attending the Expert Meeting on the Convention on Certain Conventional Weapons used last year. And that is the term they are using this year.

LAWS is a sensible term that is neither “emotive” (Heyns, 2013) nor an “insidious rhetorical trick” (Lokhorst & van den Hoven, 2011). It covers complex distributed weapons systems that are actually fielded and have multiple integrated components, systems that are likely to evolve into “off the loop” LAWS and, in the absence of regulation, from that point into “beyond the loop” weapons systems that might have “machine learning” and “genetic algorithms” that “evolve” and “adapt” and indeed might turn into Skynet in due course.

Walking, talking, human-scale, titanium-skulled killer robots with beady red eyes are not actually fielded by anybody yet except for James Cameron in his Terminator flicks. But they are scarier, and the hope of the Scare Campaign is that fright will make right.

Indeed this kind of tabloid trash “argument” might get a headline, but to persuade an audience of diplomats, who are very bright and very sharp, the calibre of the argument needs to be far better than the vague and recycled confusions of Mind the Gap.

The report makes various points about “the lack of accountability for killer robots”, all of which have already been made. The two-word solution for the “problem” of “killer robot accountability” would be “strict liability”, as suggested by the Swedish delegation (among others) last year.

Scare campaigners please put that in your draft Protocol VI of the CCW.

How about actually drafting a Protocol VI and putting it out for discussion?

Clarify what exactly it is that you want.

Mind the Gap does have some mildly original confusion about the meaning of “autonomous” and some spectacular question begging to accompany the well-worn rhetorical tricks.

Line 1:

Fully autonomous weapons, also known as “killer robots,” raise serious moral and legal concerns because they would possess the ability to select and engage their targets without meaningful human control.

Whoa!

So we open with the customary “emotive” and “insidious” tabloid language “killer robots,” we use this recycled and as yet undefined term “meaningful human control” and we blithely assert that fully autonomous weapons (whatever that means) do not have meaningful human control (whatever that means). We beg and blur the decisive question right from the start.

Later in the paper “fully autonomous weapons” are defined as human “off the loop” as distinct from “in the loop” and “on the loop” weapons. This assumes that a strictly causal, human-programmed artefact making delegated decisions on the basis of objective sensor data according to human defined policy norms is not in any sense under “meaningful human control.”

Much confusion is added by careless “personification” of machines. Consider this line:

On the one hand, while traditional weapons are tools in the hands of human beings, fully autonomous weapons, once deployed, would make their own determinations about the use of lethal force.

This language “their own determinations” suggests there is some cognitive element in the programmed machine that is not a human-defined instruction. There is no “I” in the robot. It has no values on the basis of which it can make choices.

Line 2.

Many people question whether the decision to kill a human being should be left to a machine.

People in real wars have been leaving the decision to kill human beings to machines since 1864 and probably earlier. The Union lost several men to Confederate “torpedoes” (landmines) on Dec 13th, 1864 in the storming of Fort McAllister at the end of Sherman’s infamous March to the Sea. Militaries continue to delegate lethal decisions to machines by fielding anti-tank and anti-ship mines which remain lawful “off the loop” weapons.

Line 2 is actually a very fair question and worthy of deeper analysis which, alas, you will not find in Mind the Gap. How exactly a “decision” differs from say a “reaction” and a “choice” (as defined in the Summa Theologica) is a deep and interesting philosophical question.

Moving on.

Fully autonomous weapons are weapons systems that would select and engage targets without meaningful human control. They are also known as killer robots or lethal autonomous weapons systems. Because of their full autonomy, they would have no “human in the loop” to direct their use of force and thus would represent the step beyond current remote-controlled drones.

The tacit assumption here is that the human “in the loop” will guarantee better human rights outcomes. “Meaningful human control” gave us the Somme, the Holocaust and the Rwandan Genocide. Frankly, I am not automatically signed on to this assumed Nirvana of “meaningful human control.”

Meaningful legal control is far more reassuring. And if a programmed robot can be engineered to do this better than the amygdalas of 18-25 year old males with testosterone and cortisol pulsing through their blood-brain interfaces, then I do not (as yet) see compelling reasons as to why such R & D possibilities should be “comprehensively and pre-emptively” banned, especially on the basis of a conceptually muddled scare campaign expressed in tabloid language.

References

Heyns, C. (2013). Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns.   Retrieved 16th Feb, 2015, from http://www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A-HRC-23-47_en.pdf

Lokhorst, G.-J., & van den Hoven, J. (2011). Responsibility for Military Robots. In P. Lin, K. Abney, & G. Bekey (Eds.), Robot ethics: the ethical and social implications of robotics (pp. 145-156). Cambridge, MA: MIT Press.

The Professor’s visit got quite a bit of media coverage. Some links here.

Mike Grimshaw Newstalk ZB (Radio)

Idealog

Sydney Morning Herald

NZ Herald

3 News (NZ)

Yahoo! NZ News

Voxy

Scoop NZ

It’s been quite a while since I dealt with media in my capacity as advisor to Warren Entsch … but it’s a bit like riding a bike.

Once you learn, you don’t forget…

Briefing Note for Policy Makers

Lethal Autonomous Weapons Systems (LAWS)

Summary

ISSUE: Regulation or ban of Lethal Autonomous Weapons Systems (LAWS).

LANGUAGE: Some regard “killer robots” as “emotive” or “pejorative” language e.g. Heyns (2013). Lethal Autonomous Weapons Systems (LAWS) is the current diplomatic term.

CONTEXT: Debate on LAWS is on the agenda of a meeting of High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) at the UN in Geneva April 13-17. The issue was discussed at an Expert Meeting in May 2014. Australia and NZ are High Contracting Parties to the CCW. Existing International Humanitarian Law (IHL) already regulates LAWS.

NGO: The Campaign to Stop Killer Robots (http://www.stopkillerrobots.org) is an umbrella group of various human rights organizations. They are lobbying for a comprehensive and pre-emptive ban on killer robots (LAWS). There is an NZ branch (http://www.converge.org.nz/pma/robots.htm).

POLICY: The CCW is generally regarded as the appropriate forum for LAWS.

There are five broad policy options.

  1. Ban LAWS.
  2. Regulate LAWS (by a Protocol VI to be added to the CCW or some other treaty).
  3. Status quo (i.e. rely on existing IHL and take no action). IHL as is regulates LAWS already.
  4. Impose a moratorium pending a later decision to ban/regulate/rely on existing IHL (effectively a temporary version of policy 1 above)
  5. Defer any decision pending further discussion (effectively policy 3 above).

BAN: A ban on LAWS could be modelled on Protocol IV of the CCW (which banned blinding lasers) or the Ottawa Convention (which banned anti-personnel landmines).

The ban argument is based on several claims. Robots cannot technically comply with core principles of IHL. Robots cannot discriminate between combatant and non-combatant. Robots cannot make proportionality calculations. Robots cannot be held responsible. There are also appeals to moral intuition. Robots should not make the decision to kill humans. Robots should not have the power of life and death over humans. There are also proliferation and cultural concerns. Lethal robots will make bad governments worse. Robots will exacerbate the decline and extinction of martial valour already started by drone warfare.

REGULATION: LAWS regulation may be modelled upon Protocol II of the CCW, which regulated anti-personnel mines, defined the conditions of military necessity under which they could be used, and provided explicit rules to protect civilians.

Regulation would explicitly affirm the applicability of IHL to LAWS. It would require that norms be encoded in robots to constrain their behaviour so that they acted in strict accordance with IHL. The main argument against a ban is that lethal autonomy (e.g. Aegis, Patriot, C-RAM, Iron Dome) already exists and such systems will further evolve to make faster decisions. Human cognition will not be able to compete with the speed of machine decision making (e.g. future air war between peers). The defence of service personnel in the conduct of their military duties (and of the nation more broadly) will therefore require increasing use of autonomous weapons. Thus they should be regulated, not banned.
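What “norms encoded in robots” could mean in practice is something like the constraint gate sketched below, in the spirit of Arkin’s (2009) ethical governor. It is purely illustrative: the predicate names and thresholds are invented, and no fielded system is this simple.

```python
# Purely illustrative sketch of encoded norms, in the spirit of Arkin's (2009)
# "ethical governor". The predicates, names and thresholds are invented.

def permitted(target: dict, context: dict) -> bool:
    """Return True only if every encoded IHL-style constraint is satisfied."""
    return all([
        target["is_combatant"],                               # discrimination
        context["military_necessity"],                        # necessity
        context["expected_civilian_harm"]
            <= context["proportionality_threshold"],          # proportionality
        context["human_authorized_rules_in_force"],           # accountability
    ])

def engage(target: dict, context: dict, fire) -> None:
    """Fire only if the constraint gate permits it; otherwise withhold."""
    if permitted(target, context):
        fire(target)
    # otherwise withhold fire and refer the decision back to a human
```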

STATUS QUO: IHL is a broad framework designed to deal with the evolution of weapons and warfare. The key principles of necessity, discrimination, proportionality and responsibility are almost universally accepted and of broad scope. Thus, if a robot cannot discriminate or calculate proportionality, if responsibility cannot be assigned for its use, or if the military acts it performs are not necessary, its use is already illegal, and there is no need to ban what is already banned. This is the UK position as stated by Under-Secretary Alistair Burt in 2013. (NB. When following the above link, search for the section entitled Lethal Autonomous Robotics some way down the page.)

Given the breadth of scope of LAWS, it may be unworkable to have a treaty instrument that enters into the detail of Protocol II to protect civilians for every conceivable system capable of lethal autonomy. (See in particular the Technical Annex to Protocol II of the CCW which goes into very specific detail defining the requirements for lawful anti-personnel landmines. Anti-personnel landmines as regulated by Protocol II were lawful from 1980 to 1999 when the Ottawa Convention became binding IHL.)

DEFER: There is always the option to have more discussion or to defer a decision. In the meantime, there might be a temporary moratorium or reliance on existing IHL pending an eventual choice of ban, regulation or reliance on IHL.

DEFINITIONS: LAWS are commonly divided into three types. Some refer to a fourth type.

Type | Human relation to robot | Explanation
I | “in the loop” | Human must approve a kill decision. Human must act to confirm kill.
II | “on the loop” | Human can disapprove a kill decision, but the robot will kill in case of human inaction.
III | “off the loop” | Human does not approve the kill decision and cannot intervene to disapprove it. Humans authorize the kill rules. While autonomous in targeting decisions, the robot cannot disobey human-authorized rules.
IV | “beyond the loop” | The robot is “free” to overwrite, reject, vary or supplement the rules put into it, on the basis of human-level “autonomous” features such as “machine learning” and “genetic algorithms.” The robot has “adaptive” features that allow it to go “beyond” its programming in some sense.


AUTONOMOUS: Definitions of “autonomous” vary. Some roboticists define autonomous simply as “no human finger on the trigger”; others consider “autonomous” to imply some “machine learning” capability such that the robot could “create its own moral reality” (Boella & van der Torre, 2008). Robots that are “autonomous” in this sense do not yet exist (connected to weapons), though they are being researched. Above they are characterized as Type IV “beyond the loop” LAWS. Responsibility for the acts of such robots is a major issue. The Campaign to Stop Killer Robots would like to see such machines comprehensively and pre-emptively banned.

BAN/REGULATE: There is obviously much grey between the ban and regulate positions. Some nations (e.g. Pakistan) are calling for a ban on remotely piloted drones, which are Type I human “in the loop” weapons that are partly “autonomous.” Most nations are cautious and are seeking better definitions, in order to clarify what exactly should be banned and/or regulated.

Prepared By

This briefing note was prepared by Sean Welsh, a PhD student in the Department of Philosophy at the University of Canterbury.

The working title of his doctoral dissertation is Moral Code: Programming the Ethical Robot. Prior to embarking on his PhD, Sean worked in software development for 17 years.

References / Further Reading

Anderson, K., & Waxman, M. C. (2013). Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can.   Retrieved 12th Feb, 2015, from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2250126

Arkin, R. C. (2009). Governing Lethal Behavior in Autonomous Robots. Boca Raton: CRC Press.

Asaro, P. (2009). Modelling the Moral User. IEEE Technology and Society Magazine, 28(1), 20-24. doi: 10.1109/MTS.2009.931863

Boella, G., van der Torre, L., & Verhagen, H. (2008). Introduction to the special issue on normative multi-agent systems. Autonomous Agents and Multi-Agent Systems, 17(1), 1-10. doi: 10.1007/s10458-008-9047-8

Department of Defense. (2012). Directive 3000.09: Autonomy in Weapons Systems.   Retrieved 12th Feb, 2015, from http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf

Hansard. (2013). Adjournment Debate on Lethal Autonomous Robots.   Retrieved 12th Feb, 2015, from http://www.publications.parliament.uk/pa/cm201314/cmhansrd/cm130617/debtext/130617-0004.htm

Heyns, C. (2013). Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns.   Retrieved 16th Feb, 2015, from http://www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A-HRC-23-47_en.pdf

Holy See. (2014). Statement by H.E. Archbishop Silvano M. Tomasi.   Retrieved Nov 6th, 2014, from http://www.unog.ch/80256EDD006B8954/%28httpAssets%29/D51A968CB2A8D115C1257CD8002552F5/$file/Holy+See+MX+LAWS.pdf

Schmitt, M. N., & Thurnher, J. S. (2012). Out of the Loop: Autonomous Weapon Systems and the Law of Armed Conflict. Harv. Nat’l Sec. J., 4, 231.

Sharkey, N. (2009). Death strikes from the sky: the calculus of proportionality. Technology and Society Magazine, IEEE, 28(1), 16-19.

Sharkey, N. (2010). Saying ‘no!’ to lethal autonomous targeting. Journal of Military Ethics, 9(4), 369-383.

Sparrow, R. (2007). Killer Robots. Journal of Applied Philosophy, 24(1), 62-77.

The Campaign to Stop Killer Robots. (2015). Retrieved from www.stopkillerrobots.org

United Nations. (2014). CCW Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS).   Retrieved 23rd Oct, 2014, from http://www.unog.ch/80256EE600585943/%28httpPages%29/6CE049BE22EC75A2C1257C8D00513E26?OpenDocument


Dr Ronald C Arkin of Georgia Tech, a well-known contributor to the ongoing UN debate on the banning / regulation of Lethal Autonomous Weapons Systems (LAWS), will give an IEEE SSIT Distinguished Lecture on Tuesday 31st March from 2 – 4 pm at the University of Canterbury James Hight Undercroft (in the seminar room closest to the stairs of the main library entrance).

TITLE: Lethal Autonomous Robots and the Plight of the Civilian.

SPEAKER: Dr Ronald C Arkin

ABSTRACT: A recent meeting (May 2014) of the United Nations in Geneva regarding the Convention on Certain Conventional Weapons considered the many issues surrounding the use of lethal autonomous weapons systems from a variety of legal, ethical, operational, and technical perspectives. Over 80 nations were represented and engaged in the discussion. This talk reprises the issues the author broached regarding the role of lethal autonomous robotic systems and warfare, and how if they are developed appropriately they may have the ability to significantly reduce civilian casualties in the battlespace. This can lead to a moral imperative for their use due to the enhanced likelihood of reduced noncombatant deaths. Nonetheless, if the usage of this technology is not properly addressed or is hastily deployed, it can lead to possible dystopian futures. This talk will encourage others to think of ways to approach the issues of restraining lethal autonomous systems from illegal or immoral actions in the context of both International Humanitarian and Human Rights Law, whether through technology or legislation.

BIOGRAPHY: Ronald C. Arkin is Regents’ Professor and Associate Dean for Research in the College of Computing at Georgia Tech. He served as STINT visiting Professor at KTH in Stockholm, Sabbatical Chair at the Sony IDL in Tokyo, and in the Robotics and AI Group at LAAS/CNRS in Toulouse. Dr. Arkin’s research interests include behavior-based control and action-oriented perception for mobile robots and UAVs, deliberative / reactive architectures, robot survivability, multiagent robotics, biorobotics, human-robot interaction, robot ethics, and learning in autonomous systems. Prof. Arkin served on the Board of Governors of the IEEE Society on Social Implications of Technology, the IEEE Robotics and Automation Society (RAS) AdCom, and is a founding co-chair of IEEE RAS Technical Committee on Robot Ethics. He is a Distinguished Lecturer for the IEEE Society on Social Implications of Technology and a Fellow of the IEEE.

The talk will last for the first hour.

The second hour will be available for questions and answers.

Je Suis Charlie

Wonderful to see such a huge turnout in Paris yesterday. Millions marching for liberty.

Lethal autonomous robots must be stopped in their tracks

By Robert Sparrow, Monash University

The topic of killer robots was drawn back into the public sphere last week with the widely publicised call for a moratorium on the development and use of “lethal autonomous robotics” by a top UN human rights expert; and inevitably, this conjured up some familiar concerns.

The opening scenes of James Cameron’s 1984 film The Terminator portray people running for cover beneath ruined buildings while hunter-killer robots circle menacingly overhead. Of course, such images must already have a certain contemporary resonance in Pakistan and Afghanistan, where people live in fear of being killed by a Hellfire missile fired by a Predator or Reaper drone, controlled by operators in the United States.

Yet if people are dying in drone strikes today at least a human being has confronted the question of whether the goals the attack is intended to serve are worth killing them for.

Now that military scientists around the world are working on developing autonomous weapons intended to be capable of identifying and attacking targets without direct human oversight – referred to interchangeably as lethal autonomous robots and killer robots – the scenario Cameron portrays in the first few minutes of his film is perhaps closer than we think.

It’s important to stress here that, currently, such weapons are not employed, although various technologies of this sort are in development. And, while not “autonomous”, the sophistication of certain robotics being trialled for the battlefield, as discussed already on The Conversation, gives some insight into where things may be going.

Last week’s discussion on the ethics of lethal autonomous robots at the UN Human Rights Council followed in the footsteps of a November 2012 Human Rights Watch report, Losing Humanity: the Case Against Killer Robots.

But the military logic driving the rapidly expanding use of drones and the development of autonomous weapons has been obvious for some time. It was because we viewed this prospect with alarm that colleagues and I founded the International Committee for Robot Arms Control at a meeting in the UK in September 2009.

Risks and rewards

The development of autonomous weapons would undermine international peace and security by lowering the domestic political costs of going to war and by greatly increasing the risk of conflicts being triggered by accident.

The fear of the public seeing their sons and daughters return in body bags is the main thing that currently prevents governments from going to war. If governments think they can impose their will on affairs in foreign lands using autonomous weapons there will be little to stop them bombing and assassinating those they perceive as their enemies more often than they already do.

The UN’s Christof Heyns has called for a global pause in the development and deployment of “killer robots”.

Of course, as the invasions of Iraq and Afghanistan demonstrate all too well, wars are easier to start than to finish. Similarly, despite the enthusiasm of the West for fighting wars entirely in other people’s countries, the violence of these conflicts has ways of finding its way home.

The stabbing of a British soldier in Woolwich by two men identifying as Muslims has been widely described as an act of terrorism: as Glenn Greenwald has argued, given that it involved an attack on a member of the British armed services in the context of the UK’s involvement in the war in Afghanistan, one wonders if it might not equally well be thought of as a poor man’s drone strike. Misplaced faith in the possibility of risk-free warfare may end up putting more lives at risk.

When autonomous submarines are circling each other in the Pacific 24 hours a day and autonomous planes are poised to strike strategic targets should some particular set of conditions on a checklist maintained by a computer be met, the risk of accidental war will be all too real.

Stop Killer Robots

The philosophical tradition of just war theory, institutionalised in the law of armed conflict, is one of the key institutions which currently limits the scope and destructiveness of war.

This tradition places severe restrictions on the conduct of war, including regarding who is and is not a legitimate target of attack. Civilians are not legitimate targets, nor are soldiers who have indicated a desire to surrender or who are wounded such that they pose no military threat.

Despite the rapid progress of computer science, I am extremely sceptical that machines will be able to make the complex contextual judgements required to reliably meet the requirements of just war theory for the foreseeable future.

There is also a peculiar horror associated with the idea of people being killed by robots, which I have been working to elucidate in my research. Even though they are willing to kill each other, enemies at war are in a moral relationship.

At a bare minimum, they must acknowledge their enemy as their enemy and be willing to take responsibility for the decision to kill them. Robots are unable to offer this recognition themselves and arguably obscure the moral relationship between combatants to such an extent as to call into question the ethics of their use as weapons.

For all these reasons, I applaud the recent launch of the Campaign to Stop Killer Robots announced by a coalition of NGOs in London in April this year and support its goal of a global ban on the development and deployment of lethal autonomous weapons.

Further reading:
Robots don’t kill people, it’s the humans we should worry about
Predators or Plowshares? Arms Control of Robotic Weapons

Robert Sparrow was one of the founding members of the International Committee for Robot Arms-Control (ICRAC); ICRAC participates in the Steering Committee of the Campaign to Stop Killer Robots.

The Conversation

This article was originally published on The Conversation. Read the original article.