I look at four moral theories with a view to implementing them in robots.

  1. Deontology
  2. Utilitarianism/Consequentialism
  3. Virtue Ethics
  4. Particularism

A recent poll identified deontology, consequentialism and virtue ethics as the most widely supported moral theories among professional philosophers. I add particularism because it is technically very interesting.

Deontology conceives of morality as fundamentally being about duty. When you do your duty, you do the right thing. There are moral rules, and they need to be obeyed. Legislation is generally expressed in deontological language. Thou shalt do this. Thou shalt not do that. In ethics (and deontic logic) we speak of the deontic categories: the obligatory, the forbidden and the permissible. Moral rules are frequently expressed using these categories. Deontology tends to be most persuasive when there is moral black and white; it is much less persuasive when the moral situation is grey. The best-known deontologist is Immanuel Kant, but I would also class the likes of Moses and Muhammad as deontologists (though some prefer to call such thinkers divine command theorists rather than deontologists).
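In standard deontic logic (the textbook system, not anything specific to this post) the three categories are interdefinable via a single obligation operator O:

```latex
% Standard interdefinitions of the deontic categories:
F(p) \equiv O(\neg p)        % p is forbidden iff not-p is obligatory
P(p) \equiv \neg O(\neg p)   % p is permissible iff not-p is not obligatory
```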

There is a formal system called deontic logic which can express moral axioms and theorems using deontic operators. This kind of logic can be processed by an automated theorem prover such as Prover9.
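To make that concrete, here is a minimal Python sketch of a deontological action filter. The rule sets and action names are hypothetical placeholders; a real system would derive them from a deontic knowledge base or a theorem prover rather than hard-code them.

```python
# A minimal sketch of a deontological action filter.
# The rule sets and action names are illustrative, not from any real system.

OBLIGATORY = {"report_collision"}
FORBIDDEN = {"harm_human", "deceive_user"}

def deontic_status(action: str) -> str:
    """Classify an action under a fixed rule set."""
    if action in FORBIDDEN:
        return "forbidden"
    if action in OBLIGATORY:
        return "obligatory"
    return "permissible"  # whatever is neither obligatory nor forbidden

def permissible_actions(candidates):
    """Filter out forbidden actions; a deontological agent never selects them."""
    return [a for a in candidates if deontic_status(a) != "forbidden"]

print(permissible_actions(["harm_human", "fetch_coffee", "report_collision"]))
# -> ['fetch_coffee', 'report_collision']
```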

To be honest, I don’t much care for the term consequentialism, but it is in wide use. Many contemporary philosophers speak of act consequentialism and rule consequentialism instead of the older terms act utilitarianism and rule utilitarianism. The best-known variant of consequentialism remains the classic utilitarianism of Jeremy Bentham and John Stuart Mill.

Utilitarianism conceives of morality in terms of utility. The right thing to do is whatever maximizes utility. Utility is something of a term of art: it can mean happiness, welfare, well-being or even money, and different writers use the term in different ways. Utilitarianism, in my view, is at its most persuasive when dealing with the permissible rather than the obligatory and the forbidden. Utility functions are mathematical artefacts and so can be processed by a computer without difficulty. The decision procedure boils down to arithmetic.
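As a sketch of that decision procedure, here is a minimal act-utilitarian action selector in Python. The utility values are invented for illustration; in a real robot they would come from a model of each action's expected consequences.

```python
# A minimal sketch of act-utilitarian action selection.
# The utility estimates are hypothetical placeholders.

def expected_utility(action: str) -> float:
    """Return an (illustrative) estimate of the utility an action produces."""
    utilities = {
        "fetch_coffee": 2.0,  # small welfare gain for the user
        "recharge": 1.0,      # keeps the robot available later
        "do_nothing": 0.0,
    }
    return utilities.get(action, 0.0)

def choose_action(candidates):
    """The act-utilitarian decision procedure: pick the utility-maximizing action."""
    return max(candidates, key=expected_utility)

print(choose_action(["fetch_coffee", "recharge", "do_nothing"]))  # -> fetch_coffee
```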

Both utilitarianism and deontology focus on action selection, and there are numerous variations within each of these major schools.

Virtue ethics conceives of morality in terms of the character of the agent. Thus virtue ethics is often described as agent-centric, as distinct from act-centric. Virtue ethics is something of a challenge for robot ethics, as it requires a robotic implementation of ‘character’, which is rather close to ‘personality’. Robot personhood is a long way off! While the likes of Ray Kurzweil believe human-level intelligence can be built by 2029, other AI experts are less optimistic. I do have some ideas about how to handle virtue ethics in robots, but to be frank they basically involve shoehorning virtue ethics into something resembling a hybrid of deontology and utilitarianism.

The last moral theory I look at is particularism. Particularism can be thought of as the Pink Floyd of moral theory: it is basically the moral theory that says “we don’t need no moral theory.” The most prominent advocate of moral particularism is Jonathan Dancy, who happens to be the father-in-law of Claire Danes (not that it’s relevant). It is a relatively recent development in moral philosophy. Dancy argues that we do not need domain-general ethical principles such as Kant’s Categorical Imperative (act only according to that maxim you can will to be a universal law), Mill’s Principle of Utility (acts are right insofar as they tend to promote happiness) or the Decalogue of Moses (the Ten Commandments); we can get by with particular reasons based on particular values for particular decisions in particular situations. However, to function as a thoroughgoing particularist, you would need a human cognitive architecture (i.e. the ability to feel, to value, and to articulate reasons on the basis of feeling and valuing). Robots as yet cannot do this, though research is underway to develop such machines.

Personally, I favour a moderate form of particularism which denies (or ignores) domain-general ethical principles and focuses on particular rules for particular situations.

In practical terms, when it comes to actual implementation, I take something of a hybrid approach. What I end up with is a blend of deontology and utilitarianism within a moderate particularist framework that produces a ‘virtuous’ robot within a defined moral domain.
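Here is a minimal Python sketch of what such a hybrid might look like, with the moderate-particularist element expressed as situation-scoped rules. All situation names, actions and utility values are invented for illustration.

```python
# A sketch of the hybrid approach: deontological rules act as a filter,
# utility selects among what remains, and both are scoped to a particular
# situation (the moderate-particularist element). All names and values
# are illustrative placeholders.

SITUATION_RULES = {
    # situation -> actions forbidden in that situation only
    "kitchen": {"use_blowtorch"},
    "nursery": {"use_blowtorch", "vacuum_loudly"},
}

def utility(action: str) -> float:
    """Illustrative utility estimates for candidate actions."""
    return {"use_blowtorch": 3.0, "vacuum_loudly": 2.0, "tidy_quietly": 1.0}.get(action, 0.0)

def decide(situation: str, candidates):
    """Deontological filter first, then utilitarian selection among survivors."""
    forbidden = SITUATION_RULES.get(situation, set())
    allowed = [a for a in candidates if a not in forbidden]
    if not allowed:
        return None  # no permissible action; defer to a human
    return max(allowed, key=utility)

print(decide("nursery", ["use_blowtorch", "vacuum_loudly", "tidy_quietly"]))
# -> tidy_quietly
```

The design point is that the rules do the heavy lifting only where the domain is morally black and white, utility handles the grey areas among permissible options, and neither is asked to generalize beyond the defined moral domain.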
