The philosopher is one who takes his time, so I can be forgiven for taking half the month to get around to implementing one of my New Year's Resolutions. My aim in this blog is mainly to serve as a research aid and scratchpad for my thesis topic, which is robot ethics: by that I mean how a robot could make a moral decision, rather than what humans should or should not do with robots.

This requires one to define what a moral decision actually is, which is half the fun.

The approach I am currently taking is to express moral philosophy in a dialect of mathematical logic. In essence it is bog-standard first-order logic (propositional logic plus predicates and quantifiers) with some minimal extensions to cover the notions of duty and action.
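To give a flavour of the notation, here is a hedged sketch using the standard deontic operators P (permitted) and F (forbidden) over ordinary predicates; the predicate names are illustrative, not drawn from the thesis itself:

    \forall x\, \big( \mathrm{Adult}(x) \land \lnot\mathrm{Intoxicated}(x) \rightarrow P\,\mathrm{serve}(x) \big)
    \forall x\, \big( \mathrm{Intoxicated}(x) \rightarrow F\,\mathrm{serve}(x) \big)

Everything below the deontic operators is plain first-order machinery; the operators are the "minimal extension" doing the moral work.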

In terms of which moral theory I am implementing, I formally consider deontology, utilitarianism, virtue ethics and particularism. What I actually implement is, in essence, a domain-specific deontology with utility functions (i.e. everything except virtue ethics, though I do have an argument that character reduces to decision procedures).
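In architectural terms (this is my own hedged reading of "deontology with utility functions", not code from the thesis), that amounts to hard deontic rules filtering out forbidden actions, with a utility function ranking whatever survives:

    # A minimal sketch, assuming rules are objects exposing a
    # forbids(action, state) predicate; names are illustrative.

    def permissible(action, state, rules):
        """An action is permissible iff no rule forbids it in this state."""
        return not any(rule.forbids(action, state) for rule in rules)

    def decide(actions, state, rules, utility):
        """Deontology first, utility second: pick the best permitted action."""
        permitted = [a for a in actions if permissible(a, state, rules)]
        if not permitted:
            return None  # no morally acceptable act; do nothing
        return max(permitted, key=lambda a: utility(a, state))

The ordering matters: utility never licenses an act the rules forbid, which is what keeps the system deontological rather than straightforwardly consequentialist.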

I have two practical use cases in mind: the dreaded killer robot as per Arkin (i.e. a fully autonomous drone that flies around bombing targets in strict accordance with the Laws of War), and a somewhat less controversial barbot that decides whether or not to serve you whiskey.

In both cases the robot has a single morally relevant actuator (i.e. bomb the target or serve the drink). This simplification keeps the logic straightforward. As the thesis progresses I will look at more complex robots whose actuators can perform more than one morally relevant action, but to begin with a single actuator is enough!
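As a concrete toy instance of the single-actuator case (the predicates and threshold are my own invention for illustration, not rules from the thesis), the barbot's whole moral life reduces to one boolean decision:

    # Toy barbot: the only morally relevant choice is serve vs. refuse.

    def barbot_decision(patron):
        """Return True to serve whiskey, False to refuse."""
        if patron["age"] < 18:      # forbidden: serving a minor
            return False
        if patron["intoxicated"]:   # forbidden: serving the intoxicated
            return False
        return True                 # otherwise serving is permitted

    print(barbot_decision({"age": 30, "intoxicated": False}))  # True
    print(barbot_decision({"age": 30, "intoxicated": True}))   # False

With only one actuator there is no need to rank alternatives, so the utility function drops out entirely; it only earns its keep once the action set grows.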
