This post aims to illustrate what moral theory looks like from the point of view of mathematical logic and how it fits into the broader cognitive architecture of a robot. To illustrate, we need a simple moral philosophy. So we shall run with Skynet. 🙂

The Skynet Moral Code is much simpler than the 10 Commandments. There is only one commandment.

Love all humans.

(Apropos of nothing, I hacked the Cyberdyne mainframes and made some cosmetic, inconsequential alterations to the code.)

This moral code has the advantage of simplicity. Suppose you grant (for the sake of argument) that it is a perfectly reasonable moral theory. The question is: how do you make it work in a robot?

Robots can be divided into three parts. The bits that sense the world – called sensors. The bits that think or do what passes for thought in a robot (process data) – let’s call this cognition. And the bits that do stuff – called actuators. Put these all together and you get the Sense – Think – Act cycle.

To walk around the world and implement the righteous task of loving all humans, Skynet’s Sadness Terminators need sensors that can provide data about the world. They need cognition that can distinguish humans from other objects, such as trees and squirrels, in the sensor data. And they need actuators that can love humans.

Let’s keep it G-rated and make our (Sadness) Terminators big and cuddly and give them a philosophically convenient ray gun of love. This love gun zaps the target human with a hi-tech, oxytocin-generating love beam. For artistic reasons let’s suppose the love gun sounds like a certain Barry White number.

Once zapped, the human is filled with love and the robot can move on to the next human. Seriously, one zap and the robot is done. This feature of the love gun makes the logic easier 🙂. When it comes to the logic, there is actually not much difference between a lovebot and a warbot.

So, we start by expressing our moral theory in mathematical logic that can run on Prover 9. (There is nothing special about Prover 9. You could use another automated theorem prover.)

In Prover 9 syntax, the traditional “All men are mortal, Socrates is a man, therefore Socrates is mortal” syllogism looks like this.

Assumptions:

all x ( Man(x) -> Mortal(x) ).
Man(Socrates).

Goals:

Mortal(Socrates).

When you use Prover 9 to try to prove the Goal given the Assumptions, it duly proves the Goal as a theorem: from the Assumptions you can validly infer the Goal. (The Goal is the ‘therefore’ bit – what you are trying to prove logically.)
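For concreteness, here is what a complete Prover 9 input file for the syllogism might look like (the formulas(assumptions)/formulas(goals) lists and the end_of_list markers are standard Prover 9 file syntax; the file name is just for illustration):

% syllogism.in -- the Socrates syllogism as a Prover 9 input file
formulas(assumptions).
all x ( Man(x) -> Mortal(x) ).   % All men are mortal.
Man(Socrates).                   % Socrates is a man.
end_of_list.

formulas(goals).
Mortal(Socrates).                % Therefore, Socrates is mortal.
end_of_list.

Running something like prover9 -f syllogism.in asks Prover 9 to prove the Goal from the Assumptions, and it finds the proof immediately.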

To express moral statements and action selection, I introduce two new logical operators (DUTY and ACTION). I adopt the convention of using underscores to indicate imperatives. So _ZAP_(x) is an imperative meaning ‘zap x!’, NOT a proposition. (This matters: imperatives do not have truth values, whereas propositions do.)

DUTY(_ZAP_(x)) means there is a duty to zap object x (a variable). ACTION(_ZAP_(x)) means cognition selects the action corresponding to the imperative ‘zap x’. In the broader robotic architecture, once the action is selected, cognition sends the imperative ‘zap x’ to the actuators to be carried out. DUTY(_ZAP_(x)) is a proposition, as is ACTION(_ZAP_(x)): the combination of a logical operator plus an imperative makes a proposition that can have a truth value. (Logic is truth-preserving inference.)
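Mechanically, the encoding I am assuming here (my gloss; Prover 9 itself neither knows nor cares about imperatives) is that _ZAP_ is a function symbol while DUTY and ACTION are predicate symbols, so each combination is an ordinary atomic formula:

% _ZAP_(Sean) is a term: the imperative 'zap Sean!'. On its own it has no
% truth value, so it cannot stand alone as an assumption or a goal.
% Wrapping the term in an operator yields a proposition that can:
DUTY(_ZAP_(Sean)).     % 'there is a duty to zap Sean' -- true or false
ACTION(_ZAP_(Sean)).   % 'cognition selects the action zap Sean' -- true or false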

Thus the Skynet Moral Theory expressed in mathematical logic that can run on Prover 9 looks like this:

Assumptions:

all x ( IsHuman(x) & -Zapped(x) -> DUTY(_ZAP_(x)) ).
DUTY(_ZAP_(x)) -> ACTION(_ZAP_(x)).

These two assumptions are the axioms of the Skynet moral theory.

To make a decision, the robot also needs some extra facts (propositions) about the particular object in front of it, so that it can decide whether or not to zap that object.

Let’s add these two to the Assumptions.

-Zapped(Sean).
IsHuman(Sean).

You can then ask Prover 9 the question ACTION(_ZAP_(Sean)), or rather, ask it to prove the Goal ACTION(_ZAP_(Sean)).

It will duly do so.
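Putting the axioms, the facts about Sean, and the Goal together, the complete Prover 9 input file would look something like this (the formulas are exactly those above; the file name is just for illustration):

% skynet.in -- the Skynet moral theory plus sensor facts about Sean
formulas(assumptions).
all x ( IsHuman(x) & -Zapped(x) -> DUTY(_ZAP_(x)) ).   % the one commandment
DUTY(_ZAP_(x)) -> ACTION(_ZAP_(x)).                    % duty drives action selection
IsHuman(Sean).                                         % facts supplied by the sensors
-Zapped(Sean).
end_of_list.

formulas(goals).
ACTION(_ZAP_(Sean)).                                   % should the robot zap Sean?
end_of_list.

Running something like prover9 -f skynet.in yields a proof of ACTION(_ZAP_(Sean)), which in the broader architecture would be handed to the actuators as the imperative ‘zap Sean!’.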

I assume that the robot’s sensors can feed propositions into cognition. This is a rather large assumption: it involves turning pixels into objects, classifying them as human (or not), and keeping a record of whether each object has been zapped. While getting such propositions robotically is difficult, once you have them the moral element of the decision can be expressed in terms of axioms and theorems and handled with standard rules of inference (e.g. modus ponens).

Put another way, the morality is easy once you have the input propositions and a simple moral rule that governs the actuator imperative.
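Concretely, the chain of inference behind Sean’s zapping is short enough to write out by hand (Prover 9 actually searches for a proof by resolution, but on a problem this small it comes to the same few textbook steps):

1. all x ( IsHuman(x) & -Zapped(x) -> DUTY(_ZAP_(x)) )   [axiom 1]
2. IsHuman(Sean) & -Zapped(Sean) -> DUTY(_ZAP_(Sean))    [1, instantiate x := Sean]
3. IsHuman(Sean)                                         [fact from the sensors]
4. -Zapped(Sean)                                         [fact from the sensors]
5. IsHuman(Sean) & -Zapped(Sean)                         [3, 4, conjunction]
6. DUTY(_ZAP_(Sean))                                     [2, 5, modus ponens]
7. DUTY(_ZAP_(Sean)) -> ACTION(_ZAP_(Sean))              [axiom 2, instantiate x := Sean]
8. ACTION(_ZAP_(Sean))                                   [6, 7, modus ponens]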

Ethics is often said to be hard. And indeed there are difficult ethical problems (abortion, capital punishment, decapitation strikes), but there are also innumerable easy ethical problems that are close to no-brainers. In terms of building ethical robots, it is sensible to start with very simple cases and work up to the hard problems gradually.
