Bryan Caplan  

What Is the Forced Organ Donation Hypothetical?

I often appeal to the forced organ donation hypothetical.  See for example my common-sense case for pacifism.  But what precisely is the hypothetical?  Here's an excellent explanation, courtesy of Judith Jarvis Thomson:
[I]magine yourself to be a surgeon, a truly great surgeon. Among other things you do, you transplant organs, and you are such a great surgeon that the organs you transplant always take. At the moment you have five patients who need organs. Two need one lung each, two need a kidney each, and the fifth needs a heart. If they do not get those organs today, they will all die; if you find organs for them today, you can transplant the organs and they will all live. But where to find the lungs, the kidneys, and the heart? The time is almost up when a report is brought to you that a young man who has just come into your clinic for his yearly check-up has exactly the right blood-type, and is in excellent health. Lo, you have a possible donor. All you need do is cut him up and distribute his parts among the five who need them. You ask, but he says, "Sorry. I deeply sympathize, but no."

Would it be morally permissible for you to operate anyway? Everybody to whom I have put this second hypothetical case says, No, it would not be morally permissible for you to proceed.
P.S. Thomson credits Philippa Foot, but the latter's discussion is fairly cursory:
Why then do we not feel justified in killing people in the interests of cancer research or to obtain, let us say, spare parts for grafting on to those who need them? We can suppose, similarly, that several dangerously ill people can be saved only if we kill a certain individual and make a serum from his dead body. (These examples are not over-fanciful considering present controversies about prolonging the life of mortally ill patients whose eyes or kidneys are to be used for others.) Why cannot we argue from the case of the scarce drug to that of the body needed for medical purposes?


COMMENTS (24 to date)
Bedarz Iliaci writes:

I may be missing something simple but why is the moral rule "Do not murder" insufficient or inapplicable here?

Reed Roberts writes:

You could apply it but it turns out people aren't very consistent in its application. In the trolley problem (5 people about to be hit by a runaway train but you can pull a lever to switch the train to a track where it will kill only one) the majority of people will pull the lever. It's really hard to find a good reason that that situation is different from the above.

Lars P writes:


Good example. Some ideas.

In the trolley situation, we don't choose who should die. We just pick the smallest number. In the transplant case, we can pick any person out of millions. So it seems very unfair for one surgeon to get to pick whoever they want.

If you came up with a fair and impartial system to pick the donor, and the system was known, people might be OK with it.

One pragmatic difference is that the 5 patients are probably old and mortally ill anyway. They might get 30 years extra life from the procedure, while the young man loses 60 years.

Bedarz Iliaci writes:

In the trolley problem the majority of people will pull the lever.

Is it a known fact and how is it known?
And what is the revealed preference here?
That which people show by their actions, irrespective of what they choose to say when confronting an unrealistic situation.

While the organ hypothetical is much more realistic, and thus popular intuition may be expected to be more rational.

Rob writes:

Game theory.

X pulled a lever, leading to the death of Y instead of 4 other people
is not the same Schelling point as
X pushed Y off a bridge, saving 4 other people
X killed Y without consent, removing his organs to save 4 other people
The social reaction template to "X attacked Y" is a Schelling point of considerable social importance and with considerable consequences. Since the Schelling point depends on the framing of the action, expectation of the framing of the action is the crucial distinction. Consequentialists must know that, unless they are naive.

In war, of course, many other layers of game theory apply in addition.

Chris H writes:


Actually the statement of the problem is made specifically that we cannot pick one out of millions. The only person we can pick in time is the one person who came to us and is a perfect match. So that fairness divide between the trolley problem and organ donor problem is non-existent.

@Bedarz Iliaci

The "just don't murder" rule works fine if you want to have non-consequentialist ethics. If, however, you want consequentialist ethics, then this requires explanation, especially given the divide people show between the trolley problem set-up and a set-up like this.

Of course, most people aren't following either consequentialist or non-consequentialist ethics. What the authors of the paper I linked to say is that this is a sign of emotional heuristics rather than a consistently thought-out procedure for determining the right answer. In the conclusion they write:

Human capacity for moral conduct might stem not so much from some reasoned principle, but from our biological profile. Human criteria for moral assessment might thus derive precisely from that capacity, instead of from some higher value handed down to lay people by means of moral theories. As shown in many experimental researches (including ours above), we are capable of quickly answering moral questions although we might ignore exactly why. Afterwards we can reason about the situation and try to make up a story that justifies our intuitive answer. Such justification, if successful, is likely to become some sort of rule that we keep following, reinforcing through time our conviction that we are doing the right thing. The doctrine of double effect might be interpreted like such justification. On the other hand, the general moral opinions we entertain may be inconsistent with our moral judgements and the justifications we assemble for them, perhaps because they, too, are intuitively generated.

I think this is mostly right. Coming up with a consistent reason as to why people are seemingly inconsistent can be a fun mental exercise, but it's probably not what was actually driving behavior.

Joe Teicher writes:

I think the solution would be to create a special class of people who we can morally cut up for parts. The problem with just specifying that it is a random person who walks into the clinic is that it could be me, or someone I care about. That is unacceptable! But if it is a person specifically bred for the purpose of having their organs removed then I know that can't be me or anyone I care about so my sense of moral outrage goes away.

This was the theme of the "feel good" hit film "Never Let Me Go"

Dan S writes:

File this one as "rason #784 why there's no such thing as objective morality." The reason why people are never going to agree on this question is because there is no correct or incorrect answer. The universe simply does not care what happens here, who lives and who dies, and who "deserves" it. Our responses are ultimately subjective. They may be well-thought out and logically applied, but ultimately at their core they are based on an individual's subjective feelings and values.

Bryan is a very intelligent and logical guy, but I will never understand his undying attachment to objective morality despite the fact that, ya know...the universe is physical, and there's no scientific evidence that different states of the world can be "preferred" in some cosmic sense.

Dan S writes:

Ack!! *reason!

This is why you don't comment while late for work.

Steve Z writes:

Dan S: A philosopher expressed a similar point by quipping that moral realists were searching for the fundamental particle of morality: the moron.

In my view, ethics and morality are just an extension of evolution, either cultural or biological. All else equal, the tribe/group/civilization with superior ethics will out-reproduce or otherwise dominate the culture with inferior ethics. So we get the ethics that was helpful for reproduction or cultural dominance at one point or another. Morality is the remnant of violence exercised long ago.

Despite the compelling arguments for pacifism Prof. Caplan has offered, it is unlikely that the majority of people will abandon the concept of the just war any time soon; groups of people likely to accept those arguments have been selected against. Similarly, people tend to think it is more awful to rape a woman--even if she is unaware--than it is to savagely beat and/or rape a man, even if the woman is infertile, because they associate women with fertility.

With respect to the forced organ donation hypothetical, the doctor surreptitiously taking the organs undermines the social norm of placing trust in doctors (or, more generally, people who tend to you while helpless), and so it is rejected. It is no answer to mess with the hypothetical, for example by stipulating that nobody will ever find out, that the doctor will do it once and only once, etc. People don't apply social norms based on truth-tables; they use heuristics and look for reasons why they must still apply in edge cases.

The best way to change somebody's mind about a moral point is to make it more costly for them to hold a contrary view than to hew to their current view. This can be accomplished by force, but it can also be accomplished by putting forward a narrative that is simpler to understand. Take the end of chattel slavery in America as an example.

It was surely mentally costly for slavery supporters to fraternize with slaves, see with their own eyes that they were human, but maintain that they were also less than human, all the while hewing to republican ideals from the Glorious Revolution. Abolitionists offered a more parsimonious, and thus less mentally costly, narrative. Then, slavery was abolished by force, making the slavery narrative even more costly to maintain.

One final point. As with voting, people have little incentive in most cases to think about morality carefully on a granular level. Heuristics from childhood do just fine in most cases. And if you think that morality is just the study of what works in a society, it is impossible to say what is right and wrong absent an event, like the American Civil War, that forces people to think about a moral topic on a granular level by giving them skin in the game. So typically moral topics just kind of drift around in society---which is good, because that is a sign of peaceful times.

Danyzn writes:

If all the premises of the hypothetical are in fact satisfied, and the choice will not have any other consequences (such as people suspecting that they may be murdered every time they go for a checkup), all with complete certainty, then the morally right thing to do would in fact be to commit murder and proceed with the operation. Saving five lives for the cost of one, with no other consequences - how can that be wrong?

I'm also certain that I would fail to do the morally correct thing if somehow I ended up in the situation posed by this hypothetical. My abhorrence of murder is just too strong. But my strong abhorrence of something does not make it wrong always and everywhere.

People tie themselves in knots because they want their moral theory to always prescribe an action that they would be comfortable carrying out themselves rather than one from which they recoil in horror. But there is no reason why the correct moral theory should have this property. I too would almost certainly do the intuitively less abhorrent thing and refrain from the operation, but that doesn't change the fact that it would be the wrong action.

Finch writes:

> Everybody to whom I have put this second
> hypothetical case says, No, it would not be
> morally permissible for you to proceed.

I just don't believe this for a minute. Either he asked a really odd set of people, or they wanted to signal their moral status. People make decisions like this _all_ _the_ _time_. This is what taxes are, for example. Outside of circles like this almost no one considers them immoral.

Even in the transplants and murder hypothetical, you just need to change a few details to change the moral outcome. Someone upthread mentioned the recipients were likely old. What if the recipients were children? What if one was your child? What if the man was an evil person? If little details can sway the outcome, it's not much of a basis for reasoning. You're just doing a cost-benefit analysis with a little murder taboo thrown in.

Daublin writes:


It is slightly more complicated. If you decline the forced transplant, then you are sort of murdering the five other people.

It's a murder by negligence rather than a murder by overt act, but then again, it's five to one.

It's a great hypothetical. It has an interesting parallel to military conscription....

Dan S writes:

Steve Z: I'm with you completely. I just wish more people would accept that our moral urges are physical products of biological and cultural evolution rather than edicts from the cosmos. (According to the PhilPapers survey, 56% of philosophers surveyed side with moral realism, with only 27% against. That kills me!)

I'm reminded of a scene from 3:10 to Yuma, when the one guy is electrocuting Russell Crowe, and then the guy who plays Ann's dad on Arrested Development says, "you can't do that. It's immoral." And the guy just keeps on doing it anyway. Notice how Ann's dad's appeals to objective morality fall on deaf ears, because in a physicalist, descriptive universe, they have no teeth. The "rules" of "objective" morality can be obeyed or disobeyed at will, so in what meaningful sense are they objective?

ColoComment writes:

I am reminded of the recent child lung transplant conundrum where the rules/regulations stipulated that a person under 12 years had to wait for a child donor, but the parents successfully waged a public relations war that got their child on an adult waitlist. The child obtained an adult lung transplant that ultimately failed. The child then underwent a second transplant operation and to date [I believe] survives.

However, that child's first lung transplant also deprived some adult, somewhere, of a lung transplant, the consequences of which we do not know.

How do we justify that choice? That the child had more life to live? We don't know that. That we know the child from the publicity, but not the alternate donee? Should that matter? What else comes into play? Why does the social pressure always play one way and not t'other?

Perhaps, a la Sumner, we should call that alternate adult donee "The Forgotten Man," eh?

dw writes:

Simple solution: kill one of the sick people and use their organs for the other 4. The patient was going to die anyway!

Ghislain writes:

The people who need a lung each trade one of their kidneys for it. The only person who dies is the one needing a heart.

This assumes compatibility, but if one donor can save them all, they are probably close anyway? (I am not a surgeon)

ColoComment writes:

"It is slightly more complicated. If you decline the forced transplant, then you are sort of murdering the five other people."

They are each already dying of their ailments. Until there is a "cure" for human mortality, we're all going to die of something. Is it really "murder" (by neglect if not by action) if we fail to intercede in someone's final act of natural mortality?

To me it would be immoral to take an action to kill the [presumably] healthy one person so as to intercede in the natural path of the others to their mortal end. I would never assume to myself the god-like powers over life and death that that would entail.

Bedarz Iliaci writes:

There is absolutely nothing negligent in a doctor refusing to murder a person in order to harvest his organs.

The concept of 'sort of murdering' is very loose thinking.

The moral premises 'Do not murder' etc are bedrock and require no justifications whatsoever.

The axioms that underlie a given moral theory must have some connection with reality, and the moral axioms underlying consequentialism--greatest good for the greatest number or suchlike--do not and cannot have greater weight or self-evidence than "Do not murder".

Finch writes:

> Is it really "murder" (by neglect if not by
> action) if we fail to intercede in someone's final
> act of natural mortality?

If you look at people's attitude to things like welfare and medical care, apparently the "commonsense" attitude is "Yes, failing to help when you could do so is wrong."

I don't agree with that, but it illustrates how sloppy and error prone this type of reasoning is. I thought Steve Z had the best comment on here. Though I admit I like dw's as well...

Ricardo writes:

Solve the problem with trade. Each kidney-needer trades with a lung-needer: one kidney for one lung. That solves 80% of the problem. The guy with the bad heart dies, but he probably asked for it anyway, by drinking extra-large sodas all his life.

Rob writes:

@Bedarz Iliaci

The axioms that underlie a given moral theory must have some connection with reality, and the moral axioms underlying consequentialism--greatest good for the greatest number or suchlike--do not and cannot have greater weight or self-evidence than "Do not murder".

I would turn this on its head: "Do not murder" is sorely needed for the greatest good for the greatest number or such like.

Only under extremely exceptional hypothetical circumstances does this fail to be true, especially given the legal definition of murder, which doesn't apply to all killings.

James writes:

Dan S:

That's a great argument to make if you ever meet a moral realist who uses the phrase "objective rule" to mean "rule that is physically impossible to violate." In my limited experience, I've never met a moral realist who meant anything like that. But there sure seem to be a lot of moral skeptics who argue against such a position.

Ben Mathew writes:

I think the fact that everyone would now be frightened to walk into a clinic has a lot to do with why we think the transplant should not occur.

Pesky ol' economics and its efficiency criteria butting into philosophy...
