Bryan Caplan  

Evolution and Moral Intuition

When backed into a corner, most hard-line utilitarians concede that the standard counter-examples seem extremely persuasive.  They know they're supposed to think that pushing one fat man in front of a trolley to save five skinny kids is morally obligatory.  But the opposite moral intuition in their heads refuses to shut up.

Why can't even utilitarians fully embrace their own theory?  The smart utilitarian answer blames evolution.  Scott Sumner:
Other "counterexamples" take advantage of illogical moral intuitions that have evolved for Darwinian reasons, like discomfort at pushing a fat man in front of a trolley car to prevent even more deaths.
I'm the first to concede that human beings haven't evolved to be perfect truth-seekers.  But what's the epistemically sound response to the specter of evolved bias?  "Be agnostic about every belief that, regardless of its truth, helps your genes," is tempting.  But it's also absurd. 

How so?  Virtually every moral philosophy - including utilitarianism - agrees that a happy life is better than (a) death, or (b) suffering.  But evolution heavily favors these value judgments!

If you aren't convinced that life is better than death, or that happiness is better than suffering, you swiftly drop out of the gene pool.  And since human beings are social animals, we're evolved to value the lives and happiness of the people around us as well as our own.  Should we therefore dismiss our anti-death, anti-suffering views as "illogical moral intuitions that have evolved for Darwinian reasons"?

The moral nihilist, who bites even more bullets than the utilitarian, can enthusiastically agree.  Everyone else, however, has to say, "Yes, it's logically possible that we're evolved to falsely believe that life and happiness are better than death and suffering.  But after calm reflection on this potential bias, I remain convinced of the merits of life and happiness."  And if you use this approach for life and happiness, why not try it for murdering innocent fat guys?

None of this means that moral intuition is infallible.  Serious intuitionists question their moral intuitions all the time.  The reasonable response to evolved biases, though, is to calmly review suspect beliefs - not dismiss the obvious. 



COMMENTS (30 to date)
Dan S writes:

In what objective sense is happiness "better" than death and sadness? It certainly matters a lot to human beings, but the universe really doesn't care one way or the other. It just keeps doing its thing according to the laws of physics.

Ghost of Christmas Past writes:

[Comment removed for policy violations. Please read our Comment Policies. Email the webmaster@econlib.org to request restoring your comment privileges. Please do not re-paste to EconLog entire comments or other extended material that has been published elsewhere previously. The EconLog comment section is reserved for new, independent contributions and ideas from our commenters.--Econlib Ed.]

Bostonian writes:

How do we know that the suffering of Korean comfort women in World War II exceeded the pleasure gained by Japanese soldiers? Instead of pretending that we can quantify their negative utility and show that it exceeded in magnitude the positive utility of the soldiers, we condemn the actions of the Japanese because it violated the natural rights of the women.

Utilitarianism cannot be the sole basis of ethics. Libertarians should be especially wary of efforts by governments to supposedly maximize total utility while trampling on natural rights.

Don Geddis writes:

Evolution favors happiness over suffering? Really? Carnivores eat other animals alive, old elephants lose their teeth and slowly starve to death, etc. Human brains mislead their owners into thinking that achieving difficult goals will result in increased happiness and contentment, despite all objective evidence that it does no such thing.

Sure, individuals attempt to take actions (that they believe will) increase their own happiness and reduce their own suffering. But evolution, as a whole, increasing happiness and reducing suffering? Hardly. Evolution's objective is merely to propagate genes. Happiness is a minor motivation for that goal, and suffering is almost orthogonal to it.

(And death is a great counterexample too: almost all living creatures eventually die of old age ... but not quite all. It's clear that death is not required by physics. Instead, it's an adaptive trait that is bad for the individual but good for the gene pool, hence selected strongly by evolution for essentially every species.)

Ghost of Christmas Past writes:

Ever notice how Utilitarian hypotheticals always involve sacrificing someone else?

Trolley problem: should you kill person P to save persons V,W,X,Y,Z?

Why don't Utilitarians ask: "you see a trolley car carrying five people headed for a crash. You can save those five people by throwing yourself in the path of the trolley, but you will suffer severe injury or death. Should you sacrifice yourself to save five others?"

The proper Utilitarian answer is "yes, I'm willing to commit suicide to save five others" but how often (soldiers and grenades notwithstanding) will anyone answer that way?

Utilitarians believe in redistributing other people's happiness.

Utilitarianism is like Marxism in that it appeals to people with high verbal IQ but not much scientific bent, and especially to those with political ambitions. Utilitarianism provides philosophical window-dressing for things like the liquidation of the kulaks. Gotta sacrifice a few to bring the golden age for the many!

Jim H writes:

I would object to the statement that virtually every moral philosophy agrees that happiness is better than death or suffering, at least if you only consider temporal happiness. Christianity, for example, places the ideal life as one that lives in the light of the gospel - meaning that submitting to the will of God is better than life and will likely entail suffering. Please note Jesus' words in John 16: "In this world you will have trouble, but take courage; I have overcome the world." In addition, note the well established willingness of Christians throughout history to suffer, following the exhortation of the writer of Hebrews (chapter 12) to follow the example of Jesus, "who for the joy set before Him endured the cross, despising the shame." In this tradition is a whole line of philosophers who reject temporal happiness in favor of suffering and death under certain circumstances. Rather than natural selection weeding this philosophy out, Christianity has grown most when the church has endured the most suffering and death - starting with the suffering and death of Jesus.

RPLong writes:

I'm confused about what Caplan is calling a "bias" here. It reads to me as though he's suggesting that propagation of the human species and the general happiness of its members are evolutionary biases that prevent us from seeing a truer, more accurate ethics.

But what is ethics, other than propagation of the human species and the happiness of its members? What is this other sort of ethics that Caplan has in mind? The ethics of the Ubermensch? The computations of the Great Singularity? The will of God?

Anthony writes:

A better response on the part of the utilitarian would be to argue that our reasoning about moral scenarios in which we have anti-utilitarian intuitions is systematically defective.

There is in fact some evidence for this thesis. I'll post a link below for a good popular discussion of some of the evidence, but I'll very briefly summarise it first. It seems that when reasoning about those scenarios I referenced above, the emotional centres of our brain are far more involved in our evaluations than in cases in which we have utilitarian intuitions. This is relevant insofar as the increased employment of emotional centres of the brain in the evaluation process tends to result in poor cognitive performance. Two examples: emotional centres are involved in the process that leads to our difficulty in reading out, when prompted, the colour a word is written in rather than the word which is written (i.e. the difficulty we have in answering "red" when reading the word "green" written in red). Secondly and more interestingly, emotional centres are heavily involved in reasoning which is significantly affected by confirmation bias.

Anyway, it's worth an examination. The link is below:

http://lesswrong.com/lw/74f/are_deontological_moral_judgments_rationalizations/

[Broken html for url fixed. Please check that your links work by Previewing your comments.--Econlib Ed.]

Mike H writes:

Dan S:

In what objective sense is happiness "better" than death and sadness? It certainly matters a lot to human beings, but the universe really doesn't care one way or the other. It just keeps doing its thing according to the laws of physics.

Well, since humans are part of the universe, how can you be sure the universe does not care?

David writes:

I often see that "cases in which we have utilitarian intuitions" are about maximizing our personal happiness, while cases in which our intuition stems from "the emotional centres of our brain" tend to be about society's best interests.

Our emotions seem to have evolved to make society stronger. For example, Vulcans would push the Fat man in front of the train - but, knowing that, no Vulcan would go anywhere near any train tracks; presumably that would be a detriment to society. Adding in an emotional directive not to push people in front of trains allows you to travel near train tracks with safety.

To generalize it slightly, if you use pure logic then there are many cases in which murder is justified. But, taken to its logical conclusion that would destroy society (as the "winner" would be the one that thought murder was justified in the widest range of circumstances). So we have emotional directives not to murder, even if we think it is the logical thing to do - and even if it really is the logical thing to do in a lot of cases. Staying far away from murder keeps the casual murderer exceptional enough to find and eliminate from society.

The best debunking arguments against deontology do not crudely try to debunk anti-utilitarian beliefs by appealing to the general fact that evolution caused them. Instead, the claim is that the specific cognitive mechanisms implicated in the formation of anti-utilitarian beliefs are much less reliable--i.e., less truth-tracking--than the mechanisms implicated in the formation of utilitarian beliefs. See Anthony's post above for some examples of these more sophisticated debunking arguments.

Dan S writes:

Mike H,

Humans are part of the universe indeed. But their feelings about something do not create objective normative truth.

I like to think of myself as a pragmatist. I don't "know" that that tiger running at me is objectively real, but if I fail to act on what my eyes are telling me, I'll quickly be dead. I really can't say the same thing about "objective" rules of morality. If I choose to disobey them, the world keeps spinning, regardless of how anybody feels about it.

Arthur B. writes:

Evolutionary psychology does to utilitarianism what evolution does to creationism.

Before evolutionary psychology, there were no good explanations of our moral sense. Why should we care for things that do not benefit us directly? Why do we prefer not to steal or not to murder, even when the odds of getting caught are low?

In this context, I see how one could be tempted to embrace some form of moral realism.

However, we now have a clear explanation why moral preferences arise. Moral preferences are entirely explained by evolutionary game theory. In fact, it can be simulated in silico.

Evolve a bunch of prisoner dilemma playing creatures on a computer and you will see concepts such as betrayal, revenge and forgiveness arise.

Given this insight, it is absolutely nuts to believe in any kind of moral realism, including utilitarianism. Just because utilitarianism sounds like it's objective doesn't change the fact that moral preferences are inherently subjective (as in, they are internal, which does not mean they are arbitrary).
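[The kind of in-silico simulation described above is easy to reproduce. Below is a minimal illustrative sketch, not Arthur B.'s actual model: memory-one strategies play an iterated prisoner's dilemma, and strategies reproduce in proportion to their payoffs. A strategy like tit-for-tat exhibits exactly the concepts he names - retaliation ("revenge") after defection and a return to cooperation ("forgiveness") once the opponent cooperates. All names and parameters here are hypothetical choices for the sketch.--Econlib Ed.]

```python
import random

# Payoff matrix for one prisoner's dilemma round: (my payoff, their payoff).
# Satisfies the standard conditions T > R > P > S and 2R > T + S.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

# A strategy is a 3-tuple: (opening move, reply to C, reply to D).
# ("C", "C", "D") is tit-for-tat: it opens nicely, retaliates against
# defection ("revenge"), and resumes cooperating when the opponent
# does ("forgiveness").
TIT_FOR_TAT = ("C", "C", "D")
ALWAYS_DEFECT = ("D", "D", "D")

def play(a, b, rounds=10):
    """Iterated prisoner's dilemma; returns total payoffs (score_a, score_b)."""
    move_a, move_b = a[0], b[0]
    score_a = score_b = 0
    for _ in range(rounds):
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        # Each player reacts to the other's previous move.
        move_a, move_b = (a[1] if move_b == "C" else a[2],
                          b[1] if move_a == "C" else b[2])
    return score_a, score_b

def evolve(pop, generations=30, mutation_rate=0.01, rng=None):
    """Round-robin fitness, then fitness-proportional reproduction
    with occasional mutation of single moves."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    for _ in range(generations):
        fitness = []
        for i, s in enumerate(pop):
            total = sum(play(s, t)[0] for j, t in enumerate(pop) if i != j)
            fitness.append(total + 1)  # +1 keeps every weight positive
        pop = rng.choices(pop, weights=fitness, k=len(pop))
        pop = [tuple(rng.choice("CD") if rng.random() < mutation_rate else m
                     for m in s) for s in pop]
    return pop
```

In repeated play, always-defect can no longer exploit retaliatory strategies (against tit-for-tat it wins only the first round), so reciprocal strategies tend to spread through the population - the game-theoretic origin of moral-looking behavior that the comment describes.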

Dan S writes:

Arthur B,

Right on. For the life of me I don't understand how otherwise science- and physics-believing people will suddenly believe in moral realism, essentially a form of voodoo.

Mark V Anderson writes:

Ghost, it is not utilitarians who come up with those goofy examples like whether to push someone in front of a train. It is from those trying to prove that utilitarianism doesn't make sense by counter-example. So maybe it is the anti-utilitarians you should be condemning.

MikeP writes:

Can we please do away with this statement of the trolley problem? I wouldn't push the fat man because it is in no way obvious that that would stop the trolley. Indeed, the most likely result is six dead people, one of whom you killed through unimaginably ludicrous negligence.

The problem should be posed as a trolley on a track will kill five people, but you can throw a switch to shift it onto a track where it will kill only one. (He can be fat if you feel that's necessary.)

Not so hard a problem anymore, is it.

JL writes:

These discussions seem to conflate short and long run outcomes. Sure, the short run utilitarian answer is to push the man in front of the train. But in the long run, a society that permits such behavior is likely to be an unhappy one. Perhaps our moral intuitions focus on long run consequences, not short run outcomes.

James writes:

Utilitarians really should avoid the subject of evolution.

If moral realism is false then there are no moral facts and all moral claims, including "People ought to act so as to seek the greatest good for the greatest number!" are either false or meaningless.

If moral realism is true then there are moral facts but utilitarianism is still probably wrong. If we act with the goal of maximizing preference fulfillment, the actions we take will be in accord with those moral facts only in the special case that evolution has given humans just the right set of preferences such that the actions one would take to maximize the satisfaction of those preferences are the very same actions one would take if one always acted in accordance with the moral facts, whatever they are.

What utilitarians need to argue (though I doubt they'll ever take up the burden) is that there are moral facts *and* that evolution or God or whatever else has endowed humans with just the right set of preferences so that acting to fulfill those preferences to the greatest extent possible for the greatest number possible will have the unintended side effect that our actions will be in accord with the moral facts.

MikeP writes:

The problem should be posed as a trolley on a track will kill five people, but you can throw a switch to shift it onto a track where it will kill only one.

And by the way, this restatement can allow for you to be the one person. Or, even harsher, your significant other or child.

As Ghost of Christmas Past notes above, this should separate the honest to goodness utilitarians -- a set that is likely of size pretty close to zero -- from the quasi-utilitarian rule consequentialists and from everyone else not afraid to take an obvious action that saves four lives.

Peter writes:

I never understand why "Darwinian" is a pejorative term. If a soldier dives on a grenade it's not because he's a utilitarian. Much more likely to be the Darwinian group attachment that is much easier to see in human behaviour than utilitarianism. The reason any significant degree of utilitarianism is wrong is that it fundamentally conflicts with who we are. This is the argument from conscience. If we've learned anything in the last 100 years or so, surely it is that the process of creating the New Man leads to some pretty dark places.

My happiness is greatly dependent on the welfare of my family and friends. I expect that same applies to nearly everyone. Therefore, my happiness is reduced when policies prejudice my ability to affect the welfare of the people I care about. I expect that is why people seem to like autonomy and low taxes for themselves and lots of control of and redistribution from others.

Steve Roth writes:

You seem to shift the ground of the argument here.

SS points to aggregate utility as his measure of worthiness, while yours (here) seems to be the individual decider's happiness. Talking past each other I think.

Also talking past the rather troubled fact: evolution doesn't care whether we're happy, but at the same time, it does reward "fit" behavior with pleasure -- which can make you happy.

AP writes:

One problem with the trolley example is that in real life that's probably not the real or only choice. Just as in real life you don't just have a simple choice between stealing from the store and letting your kids starve. There are other options out there. Our intuition rebels against a false choice that doesn't reflect reality.

AP writes:

Also, JL above is exactly right.

The trolley problem reflects a conflict between collectivism and individualism.

Thus, the real choice is not between one life and five lives, but between a society in which one can be secure in his life and property, and one in which at any time your life can be taken away arbitrarily for the greater good. We have examples of such collectivist societies, and we know what happens when people aren't secure in their lives and property.

To restate JL's objection, there's an argument that, once you account for all the consequences of the choice and not just the short-term obvious ones, the true utilitarian choice is to do nothing and let the trolley run. Though that may redefine "utilitarian" to mean something else than conventionally understood.

PFC writes:

"If you aren't convinced that life is better than death, or that happiness is better than suffering, you swiftly drop out of the gene pool."

Not exactly. If a certain gene is responsible for causing the action/behaviour of an individual that would kill the individual but at the same time benefit the population, then the gene would be a winning one in the pool.

I would like social scientists to read up on evolution before they bring that into the discourse, or don't bring it in at all.

Eelco Hoogendoorn writes:

Not so hard a problem anymore, is it.

Indeed it isn't. All else being equal (no people dear to me involved) I still would pretend not to know what is happening in front of me, and let events run their course. And that's not a bias, but rational behavior which I fully stand behind.

There exists a big asymmetry in terms of moral consequences between 'killing' and 'letting die', and there are perfect game-theoretic reasons for this, arbitrary as the distinction may seem (or completely meaningless even, from the utilitarian perspective).

Kill one person, and you have potentially many enemies. Let a million people die, and you are still part of a comfortable 7-billion people natural coalition of other innocent-bystanders-in-crime.

Unless those who see utility in exacting revenge may plausibly single you out without picking a fight with literally everybody else, letting people die isn't that big of a deal, consequence-wise.

The more explanatory power a particular moral theory has, the less pretty it is, isn't it?

Chris writes:

You can choose life and happiness without believing they are moral in the sense of anything more than your own wants. The moral nihilist does not have to be indifferent between happiness and suffering.

Mike H writes:

Arthur B.

Evolve a bunch of prisoner dilemma playing creatures on a computer and you will see concepts such as betrayal, revenge and forgiveness arise.

Given this insight, it is absolutely nuts to believe in any kind of moral realism, including utilitarianism. Just because utilitarianism sounds like it's objective doesn't change the fact that moral preferences are inherently subjective (as in, they are internal, which does not mean they are arbitrary).

So, if you create a simulation, you can predict the unique moral framework that will arise, and somehow this convinces you there is no moral realism?

Utilitarianism is wrong, but that doesn't mean there is no moral realism. Moral realism arises from just the situation you claim disproves it.

drycreekboy writes:

Late to the thread, but here goes:

"If you aren't convinced that life is better than death, or that happiness is better than suffering, you swiftly drop out of the gene pool."

No, no, and no. Being convinced, or believing that happiness and life are better than suffering or death doesn't matter a fig for whether you drop out of the gene pool. It isn't a matter of what you are convinced of, but what you do: i.e., breed. It's not belief, but behavior that matters from a peculiarly Darwinian standpoint.

In any case, the historical evidence is quite strong that artistic, morose, "oh, cruel life" types are copious in breeding behavior, and such live among us still.

Katja Grace writes:

I responded here.

I would enjoy reading Bryan's explanation of what it means to "reflect" on whether life is better than death.

Comments for this entry have been closed