The Trolley Problem and Objective Morality

I've spent some time in this blog talking about two fairly well-known Christian apologists — Francis Collins, who is famous for heading the Human Genome Project and, more recently, for being appointed director of the National Institutes of Health; and William Lane Craig, a theologian known mostly for debating secular-minded scholars and scientists.

In Francis Collins' book The Language of God, he discusses what he calls the "Moral Law". He argues that our ability to discern right from wrong is a sign of God's existence, something that cannot be explained by evolution; I spent some time arguing against his position in my critique of his book here. William Lane Craig has posited a similar argument about "Objective Morality". Craig argues that without God acting as an absolute authority, morality cannot be "objective"; what is considered right or wrong would be subjective and arbitrary, equating godless morality with a kind of moral nihilism. Of course, as a non-believer, I don't think morality has anything to do with some mystical deity. It's not taught to us, nor is it divinely imbued; morality is a sociocultural outgrowth of behaviors deeply embedded in us by evolution. A well-known thought experiment called the "Trolley Problem", the subject of much scientific research, can not only shed light on the biological mechanisms at play but also demonstrate the short-sightedness of Craig and Collins' arguments.

The Trolley Problem

The Trolley Problem gives us two scenarios. In the first, a trolley is out of control, charging down the track. There are five workers on the track who will be killed by the trolley. However, you, the unlucky observer, are standing near a lever that will divert the trolley to another track, where a single worker will surely be killed by the trolley. The question is: is it acceptable to throw the lever, killing one person instead of five?


In the second scenario, there is again an out-of-control trolley. This time, however, there is only one track. You are standing on a bridge overlooking the track, where the trolley is barreling toward five workers. A large man is standing on the bridge next to you. If you push him off the bridge to his death, his body will stop the trolley. Is it morally acceptable to push the bystander off the bridge?

The trolley problem, though contrived (slightly more believable versions of similar scenarios could surely be told), has been used extensively in cognitive research. If you are like most respondents, in the first scenario you would have chosen, without much if any hesitation, to throw the lever. And if you are like most respondents, your reaction to the second problem would be somewhat more reluctant; you may or may not have chosen to push the man off the bridge, but it is likely that, after making your decision, you found a way to rationalize it.

If you do a Google search for "trolley problem", you will find many philosophy aficionados offering a "solution". That, however, misses the real point of the scenario. What makes the trolley problem interesting is that, functionally, the observer faces an identical choice in both cases: kill one person to save five. Why, then, do most people hesitate on the second scenario but not the first, and why do people have such difficulty agreeing on the right course of action in the second?

The answer has to do with two conflicting parts of our brain. One part of our brain is dealing with the rational, utilitarian decision; another, more primitive part of our brain is dealing with the emotional response. In the first scenario, all that's required is to throw a lever. The workers seem like equal players in the game, so losing one is clearly better than losing five, and there is little or no emotional engagement in the observer. But the second scenario requires that the observer take a physical action against the bystander, to push him off the bridge to his death. This triggers a basic, emotional reaction in our brains, a reaction that says you should not harm another person. So the hesitation that people experience is the attempt to resolve the conflict between the rational and emotional parts of the brain.

It's quite interesting to note that, when sociopaths have been studied with this experiment, the part of the brain responsible for moral feelings is not active; the sociopath is making a purely utilitarian decision. And a study published in the journal Nature in 2007 found that people who had suffered damage to that emotional center of the brain make radically different moral judgments than people with healthy brains. From the New York Times:

Previous studies showed that this region was active during moral decision-making, and that damage to it and neighboring areas from severe dementia affected moral judgments. The new study seals the case by demonstrating that a very specific kind of emotion-based judgment is altered when the region is offline. In extreme circumstances, people with the injury will even endorse suffocating an infant if that would save more lives.

“I think it’s very convincing now that there are at least two systems working when we make moral judgments,” said Joshua Greene, a psychologist at Harvard who was not involved in the study. “There’s an emotional system that depends on this specific part of the brain, and another system that performs more utilitarian cost-benefit analyses which in these people is clearly intact.”

Why the Trolley Problem creates problems for Collins and Craig

William Lane Craig argues that without God, morality is arbitrary and relative rather than objective. But can we really demonstrate that moral intuitions are objective when we have strong empirical evidence that moral decisions are governed by subjective emotional responses? While many thinkers have claimed to find a solution to the Trolley Problem, any such rationalization is strictly post hoc. The decision is made in the moment, well before any detached and unbiased thought process can occur. This is not a problem that can be easily resolved with a religious code of behavior, because the problem inherently creates conflict within us at a biological level.

Lest you think that scenarios such as the Trolley Problem are not representative of real-life dilemmas, consider for a moment the situation faced in a hospital following a natural disaster. With thousands of injured and dying people pouring into the hospital, who is treated as a priority? Who receives the limited resources available? If you were a doctor in that hospital, how would you decide the degree of measures you should take to save a patient's life? These are not "objective" moral decisions that lend themselves to simple solutions that could be parroted from one-liners in religious texts; they are complex decisions whose outcomes will be decided in a battle between the empathetic and rational regions of our brain, and there are no unambiguous answers.

The dilemmas faced in such scenarios also illuminate the biological and evolutionary nature of morality. Empathetic responses to others served to foster group cohesion and improve the likelihood of survival for all members; rational decision-making allows us to make the most prudent use of our resources, again improving the likelihood of survival for the group. It's unsurprising, though, that these two domains would occasionally come into serious conflict. It's also worth noting that the capacity for empathy is not unique to our species; primates and many other animals share with us the ventromedial prefrontal cortex, the region of the brain responsible for empathy, the "moral feeling" portion of the brain in the Trolley Problem. Frans de Waal, author of Our Inner Ape and Primates and Philosophers: How Morality Evolved, has extensively documented altruistic behavior in primates. The rational portion of the human brain is, of course, more sophisticated than that of other primates, making our moral decisions into complicated dilemmas between our thoughts and our feelings.

Morality is more flexible than we would like to believe

The takeaway here is that morality is not something that can be understood or practiced dispassionately and objectively. Our moral judgments are primarily subjective, emotionally driven decisions; logical rationalizations of these decisions invariably follow rather than precede them. The theist will often attempt to create a false dilemma by suggesting that without an absolute divine authority, we have only our whims to discern right from wrong; or, as many a theologian has phrased it, "Without God, anything is permissible". But such a near-sighted view overlooks the profound importance of "moral" judgments in our evolutionary history. We are an innately interdependent species; none of us has the luxury of moral autonomy, because we are all inexorably dependent on one another for our mental and physical well-being. Moreover, our tendency to feel empathy toward one another is not born of complex logical thought processes, but is deeply embedded in our biology, something that can be observed even in the behavior of toddlers, or in our simpler evolutionary cousins. Were our tendencies for empathy not so deeply embedded in us, we wouldn't have survived long enough to evolve rational thought processes. Or, as Christopher Hitchens is so fond of pointing out, if the Israelites really thought murder and perjury were permissible, they wouldn't have survived long enough to make it to Mount Sinai.


  1. Mike-

    Excellent post. Empirically backed science 1. Religious conjecture 0. But who's keeping count? ;)

  2. I'd have to believe that at least a few moral principles are learned. For instance, stealing may be something one has to learn to refrain from, because a child lacks any understanding of monetary exchange and of not having free access to anything they want. I say this because my son recently stole a book from a bookstore. I don't think he knew what he was doing was wrong; I think he thought it was like a library, where you can take books for a time without paying. We had to explain that you can't just take things that don't belong to you without giving something first. Anyway, great post!

