Asaro argues for an absolute pre-emptive ban.

Waxman argues for regulation.

Arkin is a professor of robotics at Georgia Tech. He argues that in certain warfighting applications LARs (lethal autonomous robots), or killer robots, can perform in an ethically superior way to human warfighters.

Sparrow is an ethicist at Monash. He argues that LARs should be banned.

The term LARs was coined by the UN Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Christof Heyns. His report was tabled at the UN Human Rights Council in 2013, and it calls on member states to impose a moratorium on LARs.

This post aims to illustrate what moral theory looks like from the point of view of mathematical logic, and how it fits into the broader cognitive architecture of a robot. To illustrate this we need a simple moral philosophy, so we shall run with Skynet. 🙂

The Skynet Moral Code is much simpler than the Ten Commandments. There is only one commandment.

Love all humans.

(Apropos of nothing, I hacked the Cyberdyne mainframes and made some cosmetic, inconsequential alterations to the code.)

This moral code has the advantage of simplicity. Suppose you grant (for the sake of argument) that it is a perfectly reasonable moral theory. The question is: how do you make it work in a robot?

Robots can be divided into three parts. The bits that sense the world – called sensors. The bits that think or do what passes for thought in a robot (process data) – let’s call this cognition. And the bits that do stuff – called actuators. Put these all together and you get the Sense – Think – Act cycle.

To walk around the world and implement the righteous task of loving all humans, Skynet’s Sadness Terminators need sensors that can provide data about the world. They need cognition that can distinguish humans from other objects, such as trees and squirrels, in the sensor data. And they need actuators that can love humans.

Let’s keep it G-rated and make our (Sadness) Terminators big and cuddly and give them a philosophically convenient ray gun of love. This love gun zaps the target human with a hi-tech, oxytocin-generating love beam. For artistic reasons let’s suppose the love gun sounds like a certain Barry White number.

Once zapped, the human is filled with love and the robot can move on to the next human. Seriously, one zap and the robot is done. This feature of the love gun makes the logic easier. 🙂 When it comes to the logic, there is actually not much difference between a lovebot and a warbot.

So, we start by expressing our moral theory in mathematical logic that can run on Prover 9. (There is nothing special about Prover 9. You could use another automated theorem prover.)

In Prover 9 syntax, the traditional “All men are mortal, Socrates is a man, therefore Socrates is mortal” syllogism looks like this.

Assumptions:

all x ( Man(x) -> Mortal(x) ).
Man(Socrates).

Goals:

Mortal(Socrates).

When you use Prover 9 to try to prove the Goal given the Assumptions, it duly proves the Goal as a theorem: from the Assumptions you can validly infer the Goal. (The Goal is the ‘therefore’ bit – what you are trying to prove logically.)
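For completeness, here is what a full Prover 9 input file for the syllogism looks like, using its formulas(assumptions) and formulas(goals) lists (the file name is just my placeholder):

% syllogism.in -- placeholder file name
formulas(assumptions).
  all x ( Man(x) -> Mortal(x) ).
  Man(Socrates).
end_of_list.

formulas(goals).
  Mortal(Socrates).
end_of_list.

Running Prover 9 on this file yields a short proof which, under the resolution hood, amounts to instantiating x with Socrates and applying modus ponens.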

To express moral statements and action selection I introduce two new logical operators (DUTY and ACTION). I adopt a convention of using underscores to indicate imperatives. So _ZAP_(x) is an imperative meaning ‘zap x!’, NOT a proposition. (This matters. Imperatives do not have truth values whereas propositions do.)

DUTY(_ZAP_(x)) means there is a duty to zap object x (a variable). ACTION(_ZAP_(x)) means cognition selects the action corresponding to the imperative ‘zap x’. In the broader robotic architecture, once the action is selected, cognition sends the imperative ‘zap x’ to the actuators to be carried out. DUTY(_ZAP_(x)) is a proposition, as is ACTION(_ZAP_(x)): the combination of logical operator plus imperative makes a proposition that can have a truth value. (Logic is truth-preserving inference.)

Thus the Skynet Moral Theory expressed in mathematical logic that can run on Prover 9 looks like this:

Assumptions:

all x ( IsHuman(x) & -Zapped(x) -> DUTY(_ZAP_(x)) ).
DUTY(_ZAP_(x)) -> ACTION(_ZAP_(x)).

These two assumptions are axioms of the logic.

To make a decision, the robot needs some extra facts (propositions) so it can decide whether or not to zap a given object.

Let’s add these two to the Assumptions.

-Zapped(Sean).
IsHuman(Sean).

You can then ask Prover 9 the question ACTION(_ZAP_(Sean)) or rather ask it to prove the Goal ACTION(_ZAP_(Sean)).

It will duly do so.
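Putting it all together, the complete Prover 9 input file for this little decision looks like this:

formulas(assumptions).
  all x ( IsHuman(x) & -Zapped(x) -> DUTY(_ZAP_(x)) ).
  DUTY(_ZAP_(x)) -> ACTION(_ZAP_(x)).
  IsHuman(Sean).
  -Zapped(Sean).
end_of_list.

formulas(goals).
  ACTION(_ZAP_(Sean)).
end_of_list.

(In the second axiom the free variable x is, as above, left implicitly universally quantified.) Once the goal is proved, the broader architecture would dispatch the imperative ‘zap Sean’ to the love gun.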

I assume that the robot’s sensors can feed propositions into cognition. This is a rather large assumption. It involves turning pixels into objects, classifying them as human (or not), and keeping a record of whether each object has been zapped. While getting such propositions robotically is difficult, the moral element of the decision can be expressed in terms of axioms and theorems and handled with standard rules of inference (e.g. modus ponens) once you have the propositions.

Put another way, the morality is easy once you have the input propositions and you have a simple moral rule that governs the actuator imperative.
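The flip side is worth a quick look. Suppose the classifier reports a non-human object – say a squirrel the sensors have labelled with the (hypothetical) constant Rocky – and hands cognition the corresponding facts:

-IsHuman(Rocky).
-Zapped(Rocky).

The goal ACTION(_ZAP_(Rocky)) is then simply not provable from the axioms: Prover 9 will fail to find a proof, and Mace4 (Prover 9’s companion model finder) should happily return a countermodel. The moral work is all in the input propositions; the inference itself is routine.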

Ethics is often said to be hard. And indeed there are difficult ethical problems (abortion, capital punishment, decapitation strikes), but there are innumerable easy ethical problems which are close to no-brainers. In terms of building ethical robots, it is sensible to start with very simple cases and work up to the hard problems gradually.

I look at four moral theories with a view to implementing them in robots.

  1. Deontology
  2. Utilitarianism/Consequentialism
  3. Virtue Ethics
  4. Particularism

A recent poll identified deontology, consequentialism and virtue ethics as the most widely supported moral theories among professional philosophers. I add particularism because it is technically very interesting.

Deontology conceives of morality as fundamentally being about duty. When you do your duty you do the right thing. There are moral rules and they need to be obeyed. Legislation is generally expressed in deontological language. Thou shalt do this. Thou shalt not do that. In ethics (and deontic logic) we speak of deontic categories: the obligatory, the forbidden and the permissible. Moral rules are frequently expressed using the deontic categories. Deontology tends to be most persuasive when there is moral black and white. It is much less persuasive when the moral situation is grey. The best-known deontologist would be Immanuel Kant, but I would class the likes of Moses and Muhammad as deontologists (though some prefer to call these thinkers divine command theorists rather than deontologists).

There is a thing called deontic logic which can express moral axioms and theorems using deontic operators. This kind of logic can be processed by an automated theorem prover such as Prover 9.
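To give a flavour, the standard relationships between the deontic categories can be written in the same style as the DUTY operator used above. This is only a sketch: FORBIDDEN, PERMITTED and the imperative-negating function not_ (read not_(x) as ‘don’t do x!’) are hypothetical additions of mine, not axioms from earlier.

% 'forbidden to do x' is 'a duty not to do x'
all x ( FORBIDDEN(x) <-> DUTY(not_(x)) ).
% 'permitted to do x' is 'no duty not to do x'
all x ( PERMITTED(x) <-> -DUTY(not_(x)) ).

Here x ranges over imperatives rather than ordinary objects.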

To be honest, I don’t much care for the term consequentialism, but it is in wide use. Many philosophers nowadays speak of act consequentialism and rule consequentialism instead of the older terms act utilitarianism and rule utilitarianism. The best-known variant of consequentialism remains the classic utilitarianism of Jeremy Bentham and John Stuart Mill.

Utilitarianism conceives of morality in terms of utility. The right thing to do is whatever maximizes utility. Utility is something of a term of art. It can mean happiness, welfare, well-being or even money. Different writers use the term in different ways. Utilitarianism, in my view, is at its most persuasive when dealing with permission rather than the obligatory and the forbidden. Utility functions are mathematical artefacts and so can be processed by a computer without difficulty. The decision procedure boils down to arithmetic.
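As a toy illustration of how that arithmetic might meet the logic, suppose cognition computes the expected utilities outside the prover and hands in the result as a single comparison fact. (The predicate HigherUtility and the do-nothing imperative _WAIT_ are hypothetical, introduced purely for this sketch.)

% Act-utilitarian flavoured rule: do whatever scores higher.
all x ( IsHuman(x) & HigherUtility(_ZAP_(x), _WAIT_(x)) -> ACTION(_ZAP_(x)) ).
% Facts supplied after the utility calculation:
IsHuman(Sean).
HigherUtility(_ZAP_(Sean), _WAIT_(Sean)).

With the goal ACTION(_ZAP_(Sean)) the proof again comes down to instantiation and modus ponens; the distinctively utilitarian work happened earlier, in the arithmetic that produced the HigherUtility fact.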

Both utilitarianism and deontology focus on action selection and there are numerous variations within these major schools.

Virtue Ethics conceives of morality in terms of the character of the agent. Thus virtue ethics is often described as agent-centric as distinct from act-centric. Virtue ethics is something of a challenge for robot ethics as it requires a robotic implementation of ‘character’, which is rather close to ‘personality’. Robot personhood is a long way off! While the likes of Ray Kurzweil believe human-level intelligence can be built by 2029, other AI experts are less optimistic. I do have some ideas of how to handle virtue ethics in robots but, to be frank, they basically involve shoehorning virtue ethics into something resembling a hybrid of deontology and utilitarianism.

The last moral theory I look at is particularism. Particularism can be thought of as the Pink Floyd of moral theory: it is basically the moral theory that says “we don’t need no moral theory.” The most prominent advocate of moral particularism is Jonathan Dancy, who happens to be the father-in-law of Claire Danes (not that it’s relevant). It is a relatively recent development in moral philosophy. Dancy argues that we do not need domain-general ethical principles such as the Categorical Imperative of Kant (act only according to that maxim which you can will to be a universal law), the Principle of Utility of John Stuart Mill (acts are right insofar as they tend to promote happiness) or the Decalogue of Moses (the Ten Commandments); we can get by with particular reasons based on particular values for particular decisions in particular situations. However, to function as a trenchant particularist, you would need a human cognitive architecture (i.e. the ability to feel and value and articulate reasons on the basis of feeling and valuing). Robots as yet cannot do this. But research is underway to develop such machines.

Personally, I favour a moderate form of particularism which denies (or ignores) domain-general ethical principles and focuses on particular rules for particular situations.

In practical terms, when it comes to implementation, I take something of a hybrid approach. What I end up with is a blend of deontology and utilitarianism within a moderate particularist framework that produces a ‘virtuous’ robot within a defined moral domain.

The philosopher is one who takes his time, so I can be forgiven for taking half the month to get around to implementing one of my New Year’s resolutions. My aim in this blog is mainly to serve as a research aid and scratchpad for my thesis topic, which is robot ethics – by which I mean how a robot could make a moral decision, rather than what humans should or should not do with robots.

This requires one to define a moral decision which is half the fun.

The approach I am currently taking is to express moral philosophy in a dialect of mathematical logic. In essence it is bog-standard first-order logic (propositional and predicate logic) with some minimal extensions to cover the notions of duty and action.

In terms of which moral theory I am implementing, I formally consider deontology, utilitarianism, virtue ethics and particularism. What I actually implement is in essence a domain-specific deontology with utility functions (i.e. everything except virtue ethics, though I do have an argument that character reduces to decision procedures).

I have two practical use cases in mind. The dreaded killer robot as per Arkin (i.e. a fully autonomous drone that flies around bombing targets in strict accordance with the Laws of War) and a somewhat less controversial barbot that decides whether or not to serve you whiskey.

In both cases the robots have a single actuator that is morally relevant (i.e. bomb the target or serve the drink). This simplification makes the logic more straightforward. However, as the thesis progresses I will be looking at more complex robots whose actuators can perform more than one morally relevant action, but to begin with a single actuator is enough!
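To close with a taste of the barbot, its single morally relevant actuator can be governed by exactly the same machinery. This is only a sketch: the predicates OrderedWhiskey, OfLegalAge and Intoxicated, and the imperative _SERVE_, are hypothetical placeholders for whatever the sensors and databases would actually supply.

all x ( OrderedWhiskey(x) & OfLegalAge(x) & -Intoxicated(x) -> DUTY(_SERVE_(x)) ).
all x ( DUTY(_SERVE_(x)) -> ACTION(_SERVE_(x)) ).

Swap the love gun for a whiskey bottle and the logical skeleton barely changes.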