Bryan Caplan  

How Deserving Are the Poor? Debate Wrap-Up

The resource page for last week's Caplan-Smith debate is now up, complete with full video.  Here's Karl's post-debate statement.  It's basically a more detailed version of his original statement.  But he does introduce two new points I want to answer:

1. Genetic determinism.  Here's Karl:
As it happened I was also debating Bryan Caplan, who, I thought and still think, would admit that one's actual level of conscientiousness is probably genetically determined. And, further, that this personality attribute underlies most of what the normal world would call "laziness."
Actually, I've explicitly disavowed genetic determinism for any interesting behavioral trait.  So does every behavioral geneticist.  The proof is simple: if genetic determinism were true for any trait X, identical twins would have exactly the same value of X.  They almost never do.  Conscientiousness is a case in point; heritability estimates are typically 40-60%.  None approaches 100%.

In any case, genetic determinism is a red herring.  You could just switch to a "genetic + environmental determinism" hybrid view, then reiterate Karl's fundamental position.  Which brings us to:

2. Free will.  Karl:
[I]f one is sympathetic towards those born blind does it not follow that one should be sympathetic towards those born lazy?

Now, that having been said I recognize that there will be a huge visceral aversion to this line of reasoning. And, so I want to do what I can to calm that aversion.

My point was that the reason we feel so differently about disabilities like blindness as opposed to disabilities like laziness is that it's really difficult to fake being blind. Thus there is much less concern that the blind person is taking advantage of you by lying about their blindness.

It's much more difficult to confirm laziness, so much so that people are hesitant to think of it as an innate property of the person at all. However, our psychological research strongly suggests that this hesitance is misplaced.

But laziness is totally different from blindness: laziness is a choice, and blindness isn't.  Karl ably explained my reasoning during the debate: Laziness, unlike blindness, responds to sufficiently extreme incentives, and something can only respond to incentives if you are able to do otherwise. 

Consequentialists naturally tend to misinterpret this statement as saying, "We should punish laziness in order to reduce laziness."  But my point is about philosophy of mind, not policy.  The responsiveness of laziness to incentives shows that being lazy is a choice.   

Of course, you could just bite the bullet and insist that what appear to be choices are never "really" choices.  But that goes against all mental experience, and should be dismissed as absurd.

One last point: Many people (Scott Sumner among them, I fear) would be tempted to complain that I stubbornly cling to whatever moral intuitions I deem to be "obvious," while Karl actually tries to prove his moral conclusions.  My reply: Karl rests all of his moral conclusions on a single utilitarian premise.  And what is that premise?  If you say, "Just another intuition," you're being generous.  The utilitarian intuition is a paper tiger, subject to a long-standing list of devastating counter-examples.  Utilitarians' standard replies are to (a) change the subject by denying the empirical importance of the counter-examples, and (b) dogmatically accept every absurd implication of their view while criticizing the "dogmatism" of everyone who demurs.  If this isn't ridiculous enough, utilitarians proceed to continuously violate their own ethic by failing to spend all their spare resources on desperate strangers.

I'm not saying that human happiness isn't morally important.  I'm saying that human happiness is one morally important thing on a long list of morally important things: desert, justice, honesty, achievement, truth, beauty, and liberty are merely the beginning.  The only way to weigh them against each other is with clarifying examples and reflection.  Morality would be a lot simpler if utilitarianism were true.  But it's better to be broadly right than simply wrong.


COMMENTS (17 to date)
Kevin Dick writes:

You can't just test whether something responds to incentives in isolation and say "free will".

You've read _Thinking Fast and Slow_. You know about ego depletion. So you can get people to decrease "laziness" on one dimension with incentives, but only at the expense of increasing "laziness" on some other dimension.

I'm not sure to what extent we've measured the variance in people's reserves of willpower, but I think our prior should be that it follows a normal distribution like most measures of human performance.

So some guy is poor because he was allotted low intelligence and low conscientiousness through a combination of bad genetics and bad childhood environment. And you think he's "undeserving"?

Let's turn this around by going back to the hunter-gatherer environment. Some guy has high intelligence and high conscientiousness, but is weak and slow. He can't contribute as much to the tribe in that environment. So you're saying the tribe shouldn't help him?

hacs writes:

Hi Prof. Caplan,
If laziness is a consequence of shortsighted planning, a sort of cognitive impairment, would you change your position about it? Also, if even healthy children are born shortsighted planners, and parents are responsible for planning and teaching life planning to them at the early ages, would you change your position about parenting?

Lee Kelly writes:

Suppose that you built a car with the potential to travel over 100mph, and, years later, the car is destroyed in a car wreck. Suppose, moreover, that the car never actually traveled faster than 70mph during its "life". What if I then said that the car never really had the potential to drive faster than 100mph, because all the events that came before determined that it would never actually travel over 70mph?

That would be a weird thing to say, right? Certainly, my claim would have an element of truth to it, but it also seems to miss the point. Even if we assume determinism is true, we don't normally evaluate the potential of a system in the context of all that has happened before, but rather in an abstract realm of theoretical possibility. It may be true that a poorly constructed bridge has yet to actually collapse, but it also seems right to say that it has had a great potential for collapse.

Actions that get attributed to "free will" are those where it is felt that we could learn to do otherwise. That is, we have the potential, under the right circumstances, to adapt our behaviour in accordance with moral constraints. Ethical argument, incentives, punishment, and just deserts are merely how we try to bring about those conditions. Returning to my original analogy, if we want your car to realise its potential for going 100mph, we have to try and bring about circumstances to make that happen. We give people a chance to show that they can adapt before we throw up our hands and say it's not their fault.

This brings me to the last point. How do we treat things that do not adapt, those that persistently resist all contrary incentives, punishment, and other undesirable consequences? In the extreme case, we call these things inanimate objects, like rocks, trees, water, etc. These things have neither moral agency nor rights; they are merely resources to be used for our various ends. Living things that are similarly incapable or severely limited in this respect have fewer rights and privileges, e.g. young children, animals, the insane, and people who spend most of their lives in prison. We don't much care about how happy they are with their lots, at least not when the alternative threatens the well-being of others.

What I'm saying is that framing low conscientiousness as something like blindness actually undermines the moral agency of people with low conscientiousness--they can't help it, the poor fools; they're more like animals, really. I think most people with low conscientiousness would rather live with their deserts than live with what else that might entail.

Lee Kelly writes:

Free will, in the traditional sense, is a mental mirage. As soon as you can explain why I made a choice, free will disappears, because the explanation for my choice must posit things about which I had no choice. However, if there are no possible explanations for our choices, then our actions might as well just be random spasms. Free will is a nonsense idea. Its defining characteristic is that the closer you seem to get to explaining it, the further away it suddenly appears from explanation.

To make any sense, free will has to be defined as something explicable and understandable, otherwise, saying 'free will did it' is as good an explanation as 'God did it'.

Ari T writes:

How can you say something is a choice if it is decided 60% by something totally outside "free will"? Add some environment into the mix too. How high does the percentage have to be before there is no free will at all? 90%? 99%? Why don't you at least have a discounted free will well before 100%?

Making good "choices" when you are sick is much harder than when you are not. This should be very obvious to pretty much anyone. Why assume that working hard is as easy a choice for the lazy as it is for you? If you were matched up with a musician to play an instrument, he might say it's easy because he has the talent, and think very little of your "choice" to play badly.

Also, to believe in free will is basically to believe in things completely outside current physics. And what is the average opinion of those who are experts on free will? You might call them absurd, but as a rational Bayesian, can you disagree with the experts?

You may disagree with utilitarianism, but how many moral philosophers agree with more or less deontological libertarian ethics?

How does responsiveness to incentives prove anything? Animals learn to avoid an electrified fence too. Point a gun at anyone's belly, and many people are willing to do many nasty things to save their lives. Some people break at different points (have different supply curves of morality), but what does this prove? I imagine "laziness" would go into the same category. On some abstract level this feels like religion, to pick an arbitrary scale.

I think all this is just a form of lottery. The morals follow the institutions and the (human) physics that shape them. Had the universe, earth, history, etc. been different, whose marginal utility is what, and who deserves what, would be vastly different.

Fundamentally, my guess is that as we get better at understanding brain functionality and statistics gets better at predicting human behavior, free will will get cornered out. This probably means that most moral philosophers will never agree with strong axiomatic rights to whatever the world happened to give people.

DocMac writes:

[Comment removed pending confirmation of email address. Email the to request restoring this comment. A valid email address is required to post comments on EconLog and EconTalk.--Econlib Ed.]

Joshua Macy writes:

Anti-free will arguments are self-refuting. When you write to Bryan to lay out your anti-free will argument, you are acting as if the meaning of words plus logic are sufficient to form a causal chain that results in a mental event (belief in free will)...but that's all that's necessary for free will to be true in the relevant sense.

If opponents of free will find it impossible to act as if the belief they claim to have is true, why should we either believe or act as if it were true ourselves? If anybody was consistently anti-free will, they would hold that it's not only the lazy people who can't help it, people who don't want to be taxed to give money to the lazy can't help it, people who would vote to raise or lower taxes can't help it, people who agree or disagree after hearing their argument can't help it, and they themselves can't help writing the argument out in the comment box even though if true it must be ineffective.

Lee Kelly writes:


That's all that's necessary for free will to exist in the colloquial sense, but it doesn't contradict metaphysical determinism.

Suppose you have a long drive ahead of you and head to a gas station to fill up. I might react by saying 'y'know, your decision to fill up the tank is completely ineffective since it's already determined whether you'll run out of gas.' I hope you'd look at me like I'm stupid. If a decision is ineffective because it doesn't violate physical laws, then that's just a useless definition of 'ineffective'. Whatever is determined to happen in the future is, in part, determined by your free choice (in the colloquial sense) to fill up your car in the present.

Even if the future is intrinsically undetermined, it doesn't really help the case for free will, since if "undetermined" just means stuff happens kind of randomly, then so what? That's not free will. Like I said, it's a mental mirage--it slips away the moment you think it's in your grasp.

Frankly, I don't think such debates matter nearly as much as people suppose. Everyone pretty much agrees that the past is entirely deterministic; what has happened has happened, no exceptions. However, we are still comfortable talking about how people made effective decisions in the past, even though, from our perspective in the present, they couldn't have done otherwise. Even if free will doesn't exist in the lofty philosophical sense, we've all been making some kind of useful distinction all these years--what is it?

For all this talk of metaphysics, the basic problem of how we arrange our institutions so as to promote peace and prosperity doesn't seem any less interesting or worth solving. If that means working with some conception of free will and personal responsibility, then it really doesn't matter whether "free will" really exists.

Daniel writes:

Suppose someone has a congenital condition which makes it excruciatingly painful to use his hands. Given sufficiently extreme incentives (say a situation where he can flip a switch to divert a trolley from hitting and killing him and many others), he can and will use his hands. But, being pain-averse, he chooses not to do any routine work that requires using his hands and as a result earns a very low income. Would you say he is just making a choice and deserving of no sympathy?

Lee Kelly writes:

Lots of people are deserving of sympathy. It doesn't follow that they're deserving of anything else. One can feel sorry for someone without feeling that an injustice has been committed upon them.

I feel sorry for myself on occasion, but I realise that I mostly deserve my lot. Feelings are often self-serving; one should be as critical of them as any comment read on the internet.

Julien Couvreur writes:

Interesting debate.

Bryan's argument is compelling, but I think Karl is correct in pointing out the problem of subjectivity (people do their best given their preferences, knowledge, personality, ...).

Ironically, the argument cuts both ways and puts a burden on Karl to define poverty in a universal sense. Why impose a certain standard (food, shelter) on those who have other priorities?

If that person is wired (preferences or mental constraints) to value playing Skyrim over getting a job, why would I (who got a job, even though Skyrim is more fun) feel sympathy for that person? How is that person poor? That person is poor by my standards (I value food more than Skyrim), but that person isn't poor by his or her own standards.

Am I not poor too, in the sense that I play less Skyrim than what that person would choose? Should other people feel sympathy for my being wired or having preferences of a hard worker?

There is a further aspect of the debate which was touched on by Bryan, but may have been slightly off-topic: even if we agree this poor person is deserving, how do you justify using force (taxation) to help that person?

Ari T writes:

People who say who deserves what are, in my opinion, signalling "toughness" rather than understanding the complexity of the issue, which people spend their whole lives studying. In my opinion any strong political or moral statement is just far-fetched.

To give a simple example, if we had a Malthusian catastrophe, and everyone were living on subsistence-level wages, would you think people "deserve" their wages? Whether this happens is hard to predict, and it'll depend a lot on e.g. AI, but the moral story there is much more important. This is just one concrete issue.

Matt Zwolinski writes:

I agree with the comments above that responsiveness to incentives isn't going to do the work you need it to do.

The real issue here isn't free will vs. determinism. It's luck vs. responsibility. People can have free will and still be less than completely responsible for the situations they find themselves in and the choices they make. They can be less than completely responsible in a *causal* sense, and if they are sufficiently non-responsible, or if the difficulty of them making the "right" choice is sufficiently high, it is reasonable for us to not hold them responsible in a *moral* sense.

Hold a gun to an addict's head, and you can convince him not to give in to his addiction. While you're standing there. But that doesn't mean he's not an addict, and that doesn't mean that it's not much harder for him to abstain from the behavior than it would be for you. Seems like we can say the same about some of the poor.

More thoughts here:

Michael Wiebe writes:

Suppose A is genetically/environmentally predisposed to be lazy, and B isn't. Then A faces a higher cost than B in choosing to be conscientious. If both A and B are lazy, would you say that A is less blameworthy than B?

Conservative_stupidty writes:

[Comment removed pending confirmation of email address. Email the to request restoring this comment. Dissent is welcome on EconLog, but a valid email address is required to post comments regardless of your viewpoint.--Econlib Ed.]

Evan writes:


I'm not saying that human happiness isn't morally important. I'm saying that human happiness is one morally important thing on a long list of morally important things: desert, justice, honesty, achievement, truth, beauty, and liberty are merely the beginning. The only way to weigh them against each other is with clarifying examples and reflection. Morality would be a lot simpler if utilitarianism were true. But it's better to be broadly right than simply wrong.
I think this is a straw man. Utilitarianism used to be based on pleasure and happiness as the only good. However, the limitations of this approach have been recognized for a long time and, as a consequence, preference utilitarianism was developed. Preference utilitarianism addresses your complaint quite well, as it recognizes that people have desires other than happiness, and that these desires should be taken into account. Preference utilitarianism isn't perfect, but it's a step in the right direction, and most utilitarians have recognized its superiority to pleasure utilitarianism.

@Ari T

Some people break at different points (have different supply curve of morality), but what does this prove? I imagine "laziness" would go to same category. On some abstract level this feels like religion, to pick an arbitrary scale.
Your phrase "Supply curve of morality" is amazing. I wish I'd thought of it first. I think that understanding the desire to be moral as a desire like any other is the key to cracking moral philosophy. Understanding that (most) humans already have an innate desire to be moral neatly solves the is-ought problem, allowing the discussion to move to the more relevant concern of how to be moral most effectively. I'll be sure to use that phrase the next time I discuss the issue.

@Julien Couvreur:

There is a further aspect of the debate which was touched on by Bryan, but may have been slightly off-topic: even if we agree this poor is deserving, how do you justify using force (taxation) to help that person.
This is the main reason I initially became a libertarian, but it eventually lost its power for me as I encountered people who were willing to bite the bullet and admit that sometimes it's okay to use force on others (it took a while to find them because their voices were lost among the shrill cries of indignant leftists who vehemently denied taxation was a form of force). I'm still a libertarian for consequentialist reasons, and I still think taxation is theft, but I've now realized that doesn't necessarily invalidate taxation because there are some instances where it's morally acceptable to steal. For instance, I would totally mug someone in order to save a toddler from starving to death.

The obvious bad consequences of stealing mean that you're morally obliged to prove taxation will do an awful lot of good before going through with it (most politicians neglect this obligation, of course). But there are likely some circumstances where taxation has net positive consequences and helping the deserving poor could be one of them.

Curt Doolittle writes:

The problem is the use of the term 'deserving', which is not a possible economic concept. It requires knowledge that cannot be possessed by others in an economy.

The question is one of exchange. What would the lazy person be willing to offer in exchange to the hard working person? Framed that way, it's an actionable political question.

The other question is cultural. People are culturist, racist, and everything else that's possible - regardless of what they say, people demonstrate an affinity for people who share cultural and morphological similarities. They will not tolerate the transfer of status signals. It violates the sense of reciprocity. So as long as the empire is large and heterogeneous, the people will not brook transfers.

The Nordics are small, homogeneous, Protestant monarchies with no threats to their borders. Of course they're egalitarian.

As Spengler would say, Westerners are Faustian. They are heroic for attempting the impossible, even though they know it's impossible. Some of us are smarter than that. We realize it's impossible.
