Study helps explain why we favour a black and white approach to morality

Would you kill one innocent person to save five? Choose your answer wisely: your popularity may depend on it. New research from Oxford University shows people gauge others’ trustworthiness based on their moral judgements. The findings can help explain why snap judgements about morality tend to be based on a set of absolute moral rules (such as 'don’t kill innocent people'), even if we might make different decisions when given more time.

Jim A.C. Everett, Oxford PhD student and Fulbright Scholar at Harvard University, worked with Molly Crockett from Oxford and David Pizarro from Cornell University to test whether our default reliance on moral rules has an evolutionary basis. Their conclusion, published in the Journal of Experimental Psychology: General, is that people who are seen as holding to moral absolutes are more trusted and more valued as social partners.

Mr Everett explained: 'We compared two schools of thought about morality. Consequentialist approaches say we should aim to maximize the greatest good for the greatest number, even if this means causing some harm – like killing one person to save five. In contrast, deontological approaches focus on moral rules and ideas of rights and duties, such that certain things (like killing an innocent person) are wrong even if they maximize good outcomes (like saving extra lives). People usually default to the deontological style of morality, suggesting these moral rules have in some way been coded into human nature. But why?'

Seeking to explain this, Mr Everett said: 'Psychologists have argued deontological intuitions arise from ‘irrational’ emotional responses, but our work suggests another explanation: popularity. If people who stick to moral absolutes are preferred as social partners, expressing this view will reap benefits for oneself. Over time, this could favour one type of moral thinking over another in the overall population. And this makes sense – we shudder at the thought of a friend or partner doing a cost/benefit analysis of whether we should be sacrificed for the greater good. Rather than reflecting erroneous emotional thinking, making moral judgements based on rules may be an adaptive feature of our minds.'

Senior author Dr Crockett said: 'To test this idea, we used several variations of moral dilemmas where a person must decide whether or not to sacrifice an innocent person in order to save the lives of many others. We then asked whether people who made either rule-based or cost/benefit moral judgements were preferred as social partners. Across nine experiments, with more than 2,400 participants, we found that people who took an absolute approach to the dilemmas (refusing to kill an innocent person, even when this maximized the greater good) were seen as more trustworthy than those who advocated a more flexible, consequentialist approach. When asked to entrust another person with a sum of money, participants handed over more money, and were more confident of getting it back, when dealing with someone who refused to sacrifice one to save many than with someone who chose to maximize the overall number of lives saved.'

However, simply deciding whether or not to sacrifice an innocent person was not the only thing that mattered: how the choice was made was crucial. Someone who had decided to sacrifice one life to save five but had found that decision difficult was more trusted than someone who had found the decision easy.

And it wasn't always the case that those who refused to kill were trusted more. Where the person who might be sacrificed indicated a specific desire to live or a willingness to die, people favoured individuals who respected those wishes, even if that involved killing. Professor Pizarro said, 'This helps explain why we appear to like people who stick to these intuitive moral rules—not because they are sticklers for the letter of the law, but because the rules themselves tend to emphasize the absolute importance of respecting the wishes and desires of others.'

One final conclusion may come as no surprise – the researchers say their findings show that our day-to-day moral decisions don’t fit into the neat categories defined by moral philosophers. Instead, real-life morality is suited to the complexity of real-life situations.

The scenarios

The Trolley Problem

An out-of-control trolley (tram) is speeding towards a group of five people. You are standing on a footbridge next to a large man. If you push him off the bridge onto the track below, this will stop the trolley. He will die, but the five others will be saved. What do you do?

In one variation, volunteers were asked whether they would press a button, opening a trapdoor that would drop the man onto the track, without them physically touching him.

In a further variation, they could choose between letting the trolley continue or using a button to switch the trolley onto another track where it would then hit one person rather than five.

The Soldier's Dilemma

Harry is the leader of a small group of soldiers, all of whom are out of ammunition. Harry is on his way back from a completed mission deep in enemy territory when one of his men steps in a trap set by the enemy. The soldier’s leg is badly injured and caught in the trap, and Harry cannot free him without killing him. The enemy is advancing and will undoubtedly find the soldier and torture him to death. With enemy troops closing in on their position, it is not safe for Harry or his men to remain with the trapped comrade any longer. Harry offers to stab the soldier in the heart once he's unconscious, to kill him quickly and prevent him suffering at the hands of the torturers. Just before he passes out due to the pain, the soldier makes a plea.

Some people were told that the soldier pleaded 'Please, kill me. I don't want to suffer at the hands of torturers'. Others were told the plea was 'Please, don’t kill me. I don't want to die out here in the field'.