Modelling human moral and ethical decisions for use in driverless vehicles

Researchers from the Institute of Cognitive Science at the University of Osnabrück, Germany, have conducted a study which found, for the first time, that human morality can be modelled, meaning that machine-based moral decisions are possible in principle. This suggests that autonomous vehicles could be programmed to make ethical decisions in the same way human drivers do.

Their experiments used immersive virtual reality to study human behaviour and moral assessments in simulated road-traffic scenarios. Participants drove a car through a typical suburban neighbourhood on a foggy day, where they encountered unexpected, unavoidable dilemmas involving inanimate objects, animals and humans, and had to decide which was to be spared.

Until now it had been assumed that moral decisions are strongly context-dependent and therefore cannot be modelled or described algorithmically. The researchers, however, found quite the opposite.

The results were captured by statistical models that yield explicit rules, each with an associated degree of explanatory power for the observed behaviour. The research showed that moral decisions in the scope of unavoidable traffic collisions can be explained, and modelled, by a single value-of-life assigned to every human, animal or inanimate object.
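As a rough illustration of what such a single value-of-life model could look like, consider the minimal sketch below. Everything in it, the obstacle labels, the numeric values and the logistic choice rule, is an assumption made for illustration; the study's actual fitted parameters are not reproduced here.

```python
import math

# Hypothetical value-of-life parameters; a real model would fit these to
# participants' observed choices (e.g. via logistic regression). None of
# these numbers come from the study itself.
VALUE_OF_LIFE = {
    "adult": 1.0,
    "child": 1.1,
    "dog": 0.4,
    "goat": 0.3,
    "trash_bin": 0.05,
}

def p_spare_left(left, right, temperature=0.1):
    """Probability of swerving so that the LEFT lane's obstacles are spared.

    left, right: lists of obstacle labels occupying each lane.
    temperature: softness of the choice; lower values make it more
    deterministic.
    """
    v_left = sum(VALUE_OF_LIFE[o] for o in left)
    v_right = sum(VALUE_OF_LIFE[o] for o in right)
    # Logistic choice rule: the lane with the higher total value-of-life
    # is more likely to be spared.
    return 1.0 / (1.0 + math.exp(-(v_left - v_right) / temperature))

# Example dilemma: a child in the left lane versus a dog in the right.
print(p_spare_left(["child"], ["dog"]))  # ~0.999: spare the child
```

The essential claim is only that one scalar per obstacle, combined by a simple comparison rule, predicts human choices well; the particular numbers and the logistic form here are placeholders.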

This implies that human moral behaviour can be well described by algorithms that could be used by machines, a finding with major implications for the debate around how self-driving cars, and other machines, should behave in unavoidable situations.

Professor Gordon Pipa, a senior researcher on the study, said: “We need to ask whether autonomous systems should adopt moral judgements. If yes, should they imitate moral behaviour by imitating human decisions? Should they behave along ethical theories? If so, which ones? And critically, if things go wrong who or what is at fault?”

The study's authors say that autonomous cars are only the start, as robots in hospitals and other artificial-intelligence systems become more commonplace. They warn that we now stand at the beginning of a new epoch, one that needs clear rules.

Senior researcher Professor Peter König added: “Now that we know how to implement human ethical decisions into machines we, as a society, are left with a double dilemma.”

For example, within the 20 new ethical principles drawn up by the German Federal Ministry of Transport and Digital Infrastructure, a child running onto the road would be classified as significantly involved in creating the risk, and thus less qualified to be saved than an adult standing on the footpath as an uninvolved party. The problem here is to work out whether this is a moral value held by most people, and how large the scope for interpretation is.
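To make the tension concrete, here is a hypothetical sketch of how such a principle could be bolted onto a value-of-life model. Encoding “involvement in creating the risk” as a discount multiplier, and the 0.5 factor itself, are assumptions for illustration only, not anything proposed by the ministry or the study.

```python
# Hypothetical encoding of the ministry's principle as a discount on the
# value-of-life model sketched earlier. All numbers are illustrative.
VALUE_OF_LIFE = {"adult": 1.0, "child": 1.1}

def effective_value(obstacle: str, involved_in_risk: bool, discount: float = 0.5) -> float:
    """Value-of-life adjusted for involvement in creating the risk."""
    base = VALUE_OF_LIFE[obstacle]
    return base * discount if involved_in_risk else base

# The child who ran onto the road is discounted for creating the risk
# and can rank below the uninvolved adult on the footpath, even though
# the unadjusted model values the child more highly.
print(effective_value("child", involved_in_risk=True))   # 0.55
print(effective_value("adult", involved_in_risk=False))  # 1.0
```

Under such a rule the ranking produced by the principle can invert the ranking most participants appear to express, which is precisely the interpretive gap the authors highlight.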