Scott Sumner  

I feel pain therefore I am (a utilitarian)


Bryan Caplan asks me for a defense of utilitarianism, and specifically a reason for rejecting other strongly held moral intuitions.

Like many poorly educated people, I know little about philosophy other than that Descartes said "I think therefore I am." When people feel pain in their own bodies they instinctively think it is bad. There is no "why?" The real question is why do utilitarians think public policy should maximize aggregate utility. Before answering that question, let's consider a more basic problem. Why should I care about anyone else's pain?

Little boys famously respond to answers to "why" questions with another "but why is that?", until the adult becomes exasperated. Richard Rorty said that philosophy cannot provide an ultimate justification for liberalism. He suggested that (paraphrasing Judith Shklar) all we can say is that liberals are people who believe "cruelty is the worst thing we do." Rorty did have views on where that belief comes from---he said it's the narrative arts. Think about the novels of Dickens, or Uncle Tom's Cabin, or a pro-gay film like Philadelphia. These put us in the minds of others---they make us sympathize with "the other." Milan Kundera calls Europeans "the children of the novel." The narrative arts (especially TV and film) are the principal reason why young people today are far more pro-gay rights than their parents. I'm 58, and I don't recall any gay characters on TV when I was growing up.

What Rorty called 'liberalism' I call 'utilitarianism,' a term that includes all the various forms of liberalism (classical, neo-, social democratic, etc.). But how do you get from the narrative arts and sympathy for others to utilitarianism? By adding math and logic. If you become a liberal through the narrative arts, and are also a social scientist, you look for a rigorous model for your moral intuition that pain is bad and happiness is good. And what better model than "maximize aggregate utility?"

Yes, we have lots of other moral intuitions, but they seem contingent. We once thought buying life insurance for a family member was disgusting---imagine gambling on the death of one's spouse! We thought wearing a bikini was immoral (still do in Arabia). We thought the idea of gay marriage was preposterous. But we gave these issues a second thought, and realized "who does it really hurt if gays get married?" And "gays are people too, with their own minds and preferences, we should also care about their happiness." And who does it really hurt if a girl wears a bikini? Our hatred for pain is eternal, but our other moral intuitions are contingent on complex social factors, level of education, etc.

In a previous post I speculated that some of our bogus moral intuitions might have evolved for Darwinian reasons. I should add that they also might be cultural adaptations to one type of living environment, and inappropriate in another. Bryan Caplan correctly notes that one can make the same argument about utilitarianism:

If you aren't convinced that life is better than death, or that happiness is better than suffering, you swiftly drop out of the gene pool. And since human beings are social animals, we're evolved to value the lives and happiness of the people around us as well as our own. Should we therefore dismiss our anti-death, anti-suffering views as "illogical moral intuitions that have evolved for Darwinian reasons"?

The moral nihilist, who bites even more bullets than the utilitarian, can enthusiastically agree. Everyone else, however, has to say, "Yes, it's logically possible that we're evolved to falsely believe that life and happiness are better than death and suffering. But after calm reflection on this potential bias, I remain convinced of the merits of life and happiness." And if you use this approach for life and happiness, why not try it for murdering innocent fat guys?


His last comment is a reference to the famous trolley problem in philosophy, where most people are reluctant to push a fat man onto the tracks to stop a trolley, even if it will definitely save 5 lives further down the track. Let me take up the challenge.

I said the narrative arts put us in the minds of other people. So does standing right next to another human being. But let's say that instead I had been spending the past hour chatting with the 5 people who were chained to the track and endangered by an oncoming trolley. They've told me all about their spouses and children, their goals in life. I look to the distance and see a platform where one guy is contemplating pushing a fat man to save the 5 people I've just been speaking with. What is my moral intuition then?

PS. I'm tall and thin, and always feel guilty discussing this example. So apologies to my pleasantly plump readers. If you like, substitute an example where only by sacrificing a tall man can 5 lives be saved.

PPS. I'm not convinced that life is better than death (I view it as a plausible hypothesis), and I didn't drop out of the gene pool.

PPPS. This paper has my views on the relationship between utilitarianism and liberalism.




CATEGORIES: moral reasoning



COMMENTS (47 to date)
Thomas writes:

Your belief in aggregate utility suggests that you would find it acceptable if a poor person were to rob you of, say, $10,000 -- as long as his gain in utility would exceed the amount by which your utility would diminish as a result of the robbery. I doubt that you (or more than a tiny fraction of the populace) would accept that proposition. Moreover, how do you measure, compare, and aggregate the utility (or marginal utility) of disparate persons?

JG writes:

The real question is whether utilitarians believe in moral objective truth, and the answer is no. By definition, utilitarianism denies moral objective truth. "If you had the opportunity to rid the world of war, hatred, inequality, and create heaven on earth, would you do it, if first, you must torture just one innocent child?" The utilitarian must answer yes.

David R. Henderson writes:

Thomas asks the question I’ve been asking as I’ve observed this debate: given that we know that we can’t compare utility across people--it’s ordinal, not cardinal--how do we maximize utility?

Scott Sumner writes:

JG, Like Richard Rorty I not only deny "moral objective truth" but also scientific objective truth. "Truth is what our peers let us get away with."

Thomas and David.

I certainly wouldn't want anyone robbing me even if I thought it would make the world a better place. I'm selfish.

But more seriously, I don't think robbery makes the world a better place. If you could costlessly transfer funds from me to a poor person it would make the world a better place more often than not. Although we can't measure interpersonal differences in utility, I believe the marginal utility of a dollar is usually higher for a poor person than a rich person. That's not "objective truth," just a hunch.
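Sumner's hunch can be sketched numerically. Under a standard diminishing-marginal-utility assumption such as log utility (my illustrative choice here, not anything specified in the post), the same dollar adds far more utility to a poor person than to a rich one:

```python
import math

def marginal_utility(wealth, transfer=1.0):
    """Extra utility from one more dollar, under log utility u(w) = ln(w).

    Log utility is just one common diminishing-marginal-utility
    assumption, used here purely for illustration.
    """
    return math.log(wealth + transfer) - math.log(wealth)

poor_gain = marginal_utility(1_000)     # person holding $1,000
rich_gain = marginal_utility(100_000)   # person holding $100,000

# The identical dollar raises the poor person's utility roughly
# 100 times as much as the rich person's under this assumption.
print(poor_gain / rich_gain)
assert poor_gain > rich_gain
```

Of course, nothing in this sketch settles the interpersonal-comparison problem Thomas and David raise; it only shows what the "hunch" looks like if one is willing to assume a common concave utility function.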

The problem is that it's impossible to costlessly transfer money from the rich to the poor, and robbery is one of the most costly techniques of all.

It's extremely difficult for public policy to maximize aggregate utility, given that we can't measure individual utility, and we don't know all the secondary and tertiary effects of public policy. All I can say is that we do our best in a very uncertain world. My own view is that most utilitarians are far too optimistic about the beneficial effects of various government programs aimed at boosting aggregate utility.

David R. Henderson writes:

@Scott Sumner,
Just so you know, the only part of Thomas’s comment I was agreeing with was what was implicit in his question. I get the point about robbery.

Thomas writes:

To Scott and David:

Defenders of big government (i.e., government that does more than defend us from predators, foreign and domestic) will take offense at what I'm about to write, namely: Taking money from those who earn it and giving money to those who don't (I include crony "capitalists" among those who don't) is tantamount to robbery. I see no functional difference between forced income redistribution and robbery.

To Scott:

Here's my "hunch" about the marginal utility of a dollar. I wouldn't begrudge a poor, starving man a $5 meal at a fast-food restaurant, even if the starving man obtained the $5 because he saw an obviously well-fed person drop it, and chose to buy a meal instead of returning the $5. But that's an easy case, and it doesn't really get to the question of utility, anyway. It's just a preference (a "moral intuition") that many would share. The real difficulty arises when the $5 becomes a-lot-more-than-$5, when the recipients of a-lot-more-than-$5 aren't starving (or anywhere close to it), and when a-lot-more-than-$5 is being redistributed not by willing donors but by the coercive state -- for purposes that have nothing to do with utility, except the utility of politicians who redistribute in order to buy votes.

"Given that we can't measure individual utility," as you say, it's more than "difficult for public policy to maximize aggregate utility," it's impossible.

I would say that everyone who supports big government -- whether from utilitarian or less exalted motives -- is far too optimistic about the beneficial effects of government programs, period. And far too willing to ignore the negative effects of most government spending. The overarching effect is a lower rate of economic growth because money that the "rich" would have made available for investment is instead given not just to the truly poor and helpless, but also in large measure to the "less than rich," both of which groups spend it mainly on consumption goods.

Even I find it hard to begrudge redistribution to the truly poor and helpless (though I suspect private charity would do the job, were it not for the heavy hand of government), but it's bad policy for everyone else, the "less than rich" and temporarily poor included.

Hmm... I'm beginning to sound like a utilitarian.

Greg G writes:

Richard Rorty liked being provocative, but I don't think it was the existence of objective moral or scientific truths that he was denying. What he wanted to deny was that we have any access to justifiable certainty in these matters. Obviously an absolute claim that such truths do not exist would be an example of the very type of certainty he was arguing against.

The most foundational scientific principle is that all knowledge is provisional. Rorty realized that, as a practical matter, we are forced to make choices all the time in choosing scientific and moral beliefs. And he encouraged us to do so. He just wanted it done without the claim of metaphysical certainty.

Scott Sumner writes:

David, I thought so.

Thomas, You said:

"I'm beginning to sound like a utilitarian."

Embrace your inner utilitarian.

Greg, He's no longer around, but I believe he was getting at something else. There's no clear distinction between objective and subjective knowledge. There is no difference between saying "X is true" and saying "I believe X is true." Statements about truth are ALWAYS statements about belief.

Rorty was presumably certain that 1 + 1 = 2.

eccentric-opinion writes:

The problem with utilitarianism is that it underestimates the importance of the agent-relativity of value. "Pain is bad and happiness is good", yes, because experiencing pain is unpleasant and experiencing happiness is pleasant for the person experiencing them. This means that an agent has good reasons to care about his own happiness and pain, but doesn't mean that the agent should care about the happiness and pain experienced by people in general. He cares about others' hedonics to the extent that they affect his own, e.g. he wants his loved ones to be happy because it makes him happy, and not for any other reason.

Sean writes:

I think you meant to put a different link in the PPPS (currently, it's the same link as the one at the top of the post).

vikingvista writes:

"And what better model than "maximize aggregate utility?""

Primum non nocere?

Garrett writes:

The whole "costless transfer" idea that keeps coming up in these utilitarianism discussions seems to be based on the assumption that doing so raises aggregate utility due to the differences in elasticities between the consumption functions of the rich person and the poor person. A better way to analyze the problem is to instead compare the impact of a dollar donated to charity to a dollar invested in a company that does business in developing countries. (Malaysian stocks are doing pretty well right now.) The profit motive driving these firms much better incentivizes them to create value, and the capital they have at their disposal allows them to leverage the people they hire to become more productive.

The recent econtalk with Chris Blattman talked about the efficiency gains from simple cash transfers over charity efforts to provide specific goods and services to the poor. But even this is outmatched by the potential of investments.

The obvious issue is that these desperately poor countries don't have the institutions to support profitable investments. Blattman points out that the UN's millennium villages didn't lead to the sorts of firms or markets that we have in the developed world. That these countries still have such poor markets and institutions seems to be a great argument for more liberal immigration policy, since leveraging well established institutions, markets, and firms seems to have much bigger aggregate utility gains than other options.

More open borders > investments in firms > cash transfers > donations of goods/services

What's interesting to me is that I can get to a similar conclusion as Caplan with regards to open borders while using the type of utilitarian thought process (at least I think that's what I'm doing) that he seems to be against.

TravisV writes:

Prof. Sumner,

I think the last link you provided in this post was the wrong one.....

Thomas Sewell writes:

I'm not a real utilitarian (although I certainly have some sympathies), but the fat man trolley problem has always bothered me as an example. I've never understood what's so hard to explain about it.

The problem with the trolley example is similar to that with all arguments in favor of elitist decision-making for others. It's phrased with an unrealistic certainty of knowledge that doesn't generally exist in the real world.

People know that intuitively, based on their life experience. It makes sense for us to have a bias against doing definite, predictable, and deliberate harm to someone in exchange for a hope that our action will result in a positive result later.

We know we don't actually function with perfect knowledge in the real world. It's imagining that part of the trolley problem that I would get hung up on. I'd never push the fat man because my speculation about how to save the other five people isn't the same as the surety that I'm going to harm the fat man. I might be willing to knowingly take some risks myself to try and save five others, but I'm responsible for myself and my own choices.

I find it morally abhorrent and the ultimate in pride to think that I would have such a perfect knowledge of future expected events that I am entitled to calculatedly harm someone else. It makes sense for humans to have a bias against taking action we know is harmful in exchange for a hope for better results. It doesn't generally work out, hence the rule "First, do no harm..." for physicians.

If we rephrase the trolley problem to take the element of unknown results truly out of it in a way people will understand, I think you'd get a much different result.

In addition, the moral intuition of the means not justifying the ends is a short-cut for understanding that (like the robbery example), a short-term utilitarian gain can be negatively outweighed by the future consequences of a particular means to that utilitarian end.

Bottom line, it's not utility maximizing to push the fat guy in front of the trolley in real life.

I suppose that's essentially the rule-based utilitarian defense against the fat man trolley problem.

Note: Cross-posted at comeletusreasontogether.com

Joe Teicher writes:

The idea of selfish utilitarianism is interesting. Suppose in the trolley problem that the fat man has the ability to hold his ground and not let you push him in front of the trolley. What is your opinion of the morality of his doing so? An ordinary utilitarian would say that he should not save himself, since that will kill 5 others. But the selfish utilitarian has to either say it's OK for the guy to save himself, since selfishness is fine, or bite the hypocrisy bullet and say selfishness is fine for himself but not for others.

To give a practical example, I'm sure there are plenty of people who benefit from the central bank keeping real interest rates high even in a severe recession. Do those people have a moral duty to suffer for the greater good, or should they do everything they can to pressure the Fed to always be hawkish?

Brian writes:

Scott,

Then you would say that 1 + 1 = 2 is not objectively true? In what way? Obviously, both you and I believe it to be true, but that doesn't accurately cover its status. I also believe that everyone else believes it to be true and, more importantly, that no one is capable of believing it not to be true. Doesn't that make the statement more than subjectively true, and indeed objectively true?

Scott Sumner writes:

Eccentric, That's a problem, but not necessarily an argument against utilitarianism.

Sean and Travis, Thanks, I fixed the link.

Garrett, Good points.

Thomas, Yes, and that relates to some other arguments I've used in other posts. There are lots of cases where people say "assume X is optimal on utilitarian grounds" where X seems horrible. Then they use that as an argument against utilitarianism. But the listener never really buys the "optimal on utilitarian grounds" assumption.

Joe, I say it's optimal not to be selfish, but we are generally wired to be selfish, so I'm not going to criticize someone for being mildly selfish. I am somewhat selfish. If they are extremely selfish, then I will criticize them. The goal is to shame them into being less selfish.

In the case of great sacrifice, you treat the extremely unselfish people as heroes, or honor their family if they die.

Scott Sumner writes:

Brian, It makes more sense to speak in terms of beliefs held with a great deal of confidence, and those of which one is less confident. There is no external referee to tell us whether our confidence is justified. Hence there is no way to discriminate between subjective belief and objective truth.

Some people want to discriminate between "we think it's true" (subjective) and "we think it's true, and it's true." (objective) But that's a meaningless distinction.

Read Rorty, he can explain it all much better than I can.

Brian writes:

Scott,

With regard to utilitarianism, you seem to claim that the idea that pain is bad and happiness is good is less contingent than other moral intuitions. Here is why that seems wrong. Our experiences of happiness and pain are tied to the behaviors that, in the past, led to genetic success. Given that in these times conditions are different, and that genetic success may no longer be of interest to us, it would appear that pain and happiness are no longer linked to desirable outcomes and therefore are not moral. For example, sexual pleasure is meant to help us procreate early and often, which leads to more genetic success, but now we have good reasons and a strong preference for delaying procreation. Also note that humans often choose to do painful things to achieve other desired outcomes, including things that really don't make us happy, even in the long run. Not only do the standards of pain and happiness appear to be very contingent, they seem to be vestiges of an evolutionary past that is ill suited to today's needs. Such things can hardly be the basis for a system of morality.

Brian writes:

Scott,

But a statement that is logically proved, like 1+1=2, is known with certainty, not just with high confidence. It is not a meaningless distinction in saying "I know it is true" versus "I think it is true."

eccentric-opinion writes:

Scott,

I think pointing out the agent-relativity of value is an argument against utilitarianism, because utilitarians claim that the good to be maximized is world utility. But that admits an agent-neutral good, which is impossible if all value is agent-relative. In practice, that means that when someone tells you that you should donate to charity because it would increase world utility, you can respond by saying you happen to assign value to other things, and would personally derive more utility from spending money on something else.

(If by "utilitarianism" you mean "consequentialism that uses utility", this argument doesn't apply, but that's not how that word is typically used.)

Greg Heslop writes:

I enjoy these posts very much, but I still think some rather serious problems with utilitarianism are inadequately addressed. Sure, as individuals we tend to find it quite sensible that what gives more utility is to be preferred to what gives less, but the leap is rather great if one wants to make this the founding principle of moral theory (am I truly immoral if I fail to maximize my own utility, all else equal? is it not up to me what I do?), especially when the same principle ignores differences between individuals.

For instance, since utilitarianism means that total utility should be maximized, it cannot matter how it is distributed over time or how many individuals are to "share" in the utility. I have raised the first issue before in relation to introspection: is the stream of utility, all else equal, of (1,000,003, -500,000, -500,000) really just as good as (1, 1, 1)? The utilitarian answer is "yes" and remains so even if the three periods are three generations.
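Greg's numerical point can be checked directly: a purely additive criterion ranks the two streams identically, since both total 3 units of utility regardless of how the utility is distributed across the three periods (or generations).

```python
# Greg Heslop's two utility streams, period by period:
stream_a = [1_000_003, -500_000, -500_000]
stream_b = [1, 1, 1]

# A purely additive criterion ("maximize total utility") sees no
# difference between them: both streams sum to 3.
print(sum(stream_a), sum(stream_b))
assert sum(stream_a) == sum(stream_b) == 3
```

This is exactly the feature his question targets: simple aggregation is indifferent to timing and distribution by construction.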

Utilitarianism also neglects a distinction between what is good and what is right, essentially saying that what is good must be right. This goes back to ignoring differences between individuals. Some may be sacrificed for others. And again, if no-one but me is impacted by my decision not to maximize utility, maybe I should be thought of as irrational or stupid, but immoral, too? If on the other hand it is up to me what I do in such a situation, my dignity as an individual is respected, which critics of utilitarianism may label right (but not good, since it fails to maximize utility).

Nathan Smith writes:

Scott uses utilitarianism as a solvent of irrational moral views, such as opposition to gay marriage and the view that it's immoral for women to wear bikinis. But it's child's play for me to put my opposition in utilitarian terms.

1. Bikinis fill men with futile lust. Their animal nature induces them to be superficially attracted, but this really coarsens their characters and distracts them from much more valuable thoughts. Modesty makes life more beautiful and edifying, the world a better place. The greater happiness is served.

2. Marriage is about self binding, traditionally in law, today in social custom. Self binding often harms people because they mistake their long run happiness. Now, mainly because straight marriages lead to children but also for sociobiological reasons involving the way our gendered instincts make us feel about sexual bonds, it is normally good for heterosexual coupling, dating aside, to be exclusive and permanent. So language and law serve the greater happiness by recognizing and reinforcing that. But homosexual couples have different opportunities and different instincts. Empirically, their relationships are less stable and less exclusive, and there is no reason we should either expect or desire that they resemble heterosexual ones. If we really forced or strongly pressured "married" gay couples to stay together, we'd create net unhappiness. If we don't, they confuse the language and dilute for everyone else social norms surrounding marriage that have great utilitarian value.

Greg G writes:

Brian

I continue to think that Rorty was urging us to think in terms of probability rather than certainty. As a practical matter, many things are so highly probable that it makes sense to act as if they were certain. Rorty doesn't dispute that.

"Logically proved" doesn't mean "can't be wrong." It means true IF the premises are true. It was once thought that Euclidean geometry and Newtonian physics were bedrock certain knowledge that described the way the world always works at all scales. Then exceptions were found. Quantum physics revealed problems with Newtonian physics that undermined the assumptions we had considered safest from challenge.

There may well be a hard boundary between the objective and the subjective, but we lack the ability to prove exactly when we have found it. We do have a quite useful ability to tell when we are getting farther away from it.

Dan S writes:

Scott,

Where would you recommend starting off on reading Rorty?

Daublin writes:

What Nathan said. I think your examples are not ideal, Scott.

I feel like marriage is important and helpful for a society, but it's not because of direct "harms" that are prevented by it. It has to do with the many and varied effects of two people committing to stay together. I think that married people are often better people, and they are better to be around.

For people who take this perspective on marriage, you have to design the institution so that it is effective for real people, with all of our biological and social instincts and daily incentives. I don't know what social institutions are right for gay people, but it's not obvious that it's best to make them act like straight people.

Pajser writes:

A few independent thoughts...

eccentric-opinion: "In practice, that means that when someone tells you that you should donate to charity because it would increase world utility, you can respond by saying you happen to assign value to other things, and would personally derive more utility from spending money on something else."

A utilitarian can admit that his problem is "what leads to the overall best possible world," and that he doesn't claim that the overall best world is de facto the highest value for anyone. But why should the "objectively best world" (whatever shape it should have) be of any value to anyone? That is not a problem specific to utilitarianism but one for all non-subjectively based ethical theories. The standard Marxist answer is that creation is the human essence, and the perfect world, or at least the best possible world, is the greatest possible creation. It is impossible that a human does not have that goal, although he typically has other, conflicting, more selfish, less human-specific needs and goals. But this goal will inevitably become more and more important. It is similar (but less directly expressed) in Maslow's hierarchy of needs.

Greg Heslop: "And again, if no-one but me is impacted by my decision not to maximize utility, maybe I should be thought of as irrational or stupid, but immoral, too? If on the other hand it is up to me what I do in such a situation, my dignity as an individual is respected, which critics of utilitarianism may label right (but not good, since it fails to maximize utility)."

Utilitarianism is not incompatible with that view; it is enough that you believe that one's independence, as you described it, has large utility---for some, or for all, people.

Brian: "Given that in these times conditions are different, and that genetic success may no longer be of interest to us, it would appear that pain and happiness are no longer linked to desirable outcomes and therefore are not moral."

I doubt happiness, but not the absence of pain. It seems a great value, no matter its physical or theological explanation. No explanation I can imagine seems satisfactory, because it seems that the described phenomenon could exist without emotions. And vice versa, maybe every physical process somehow causes emotions, but I cannot recognize it. The emotions are mystical.

Hugh writes:

Utilitarianism assumes that we can know the effect of a policy on total utility.

Utilitarians tend just to look at first order effects in their examples.

Thus taking money from Scott (and Bryan) to give to a poor person may seem to increase the world's utility - but does it? What about the decreased incentive for that poor person to find a job?

Soon the debate about a policy devolves into our familiar politics and we are back to square one.

Scott Sumner writes:

Brian, Get a list of 1,000,000 things we know. Divide them up into things we know with certainty and things we don't know with certainty. Compare the division with others. The lists will differ.

But you are still missing the main point. We have no outside referee to validate whether our list of things we know with certainty is correct. Epistemology gets nowhere if you start off trying to list things we know with certainty. The issue is: how do we know which things we know with certainty?

I don't understand your criticism of utilitarianism. The fact that sex originally had a Darwinian purpose has no bearing on the utility it provides.

eccentric, I don't follow. If you are saying that some people will not follow the implications of utilitarianism, I agree. Indeed I don't always behave in a utilitarian way in my personal life. Nor do Christians--should they abandon Christianity for that reason?

Greg, You said;

"am I truly immoral if I fail to maximize my own utility, all else equal? is it not up to me what I do?"

It's up to you to decide in a legal sense, but your mom might want to criticize you if you choose to become a heroin addict--because she loves you.

Regarding your numerical example, someone might suffer really hard to get through medical school, or to climb Mt Everest, for a great reward later.

Nathan and Daublin, I was just trying to show how utilitarian reasoning had led society to change its view. You are quite right that society might be mistaken. Being a utilitarian doesn't magically solve all disputes. I disagree with many utilitarian economists who are liberal.

Greg G, I mostly agree but you concede too much when you say there might be a clear divide between the subjective and objective. That's not possible. All we can say is there might be a divide between things that are true and not true, in the mind of God.

Dan, I only read this book:

http://www.amazon.com/Contingency-Irony-Solidarity-Richard-Rorty/dp/0521367816

And I read a smaller book where he debates another philosopher on the nature of truth. If you want to start earlier, the following book had a big impact, but I haven't read it (yet):

http://www.amazon.com/Philosophy-Mirror-Nature-Richard-Rorty/dp/0691020167

Hugh, You said:

"Utilitarianism assumes that we can know the effect of a policy on total utility.

Utilitarians tend just to look at first order effects in their examples."

The first statement is wrong. All we claim is that we can come up with useful guesses as to which of two alternative policies is better by that criterion. That's all.

The second statement identifies a very real problem in the thinking of lots of people, not just utilitarians. But no one can claim I ignore secondary effects---indeed I'm in the top 0.00001% in terms of paying attention to secondary effects.

Greg Heslop writes:

@ Pajser,

"Utilitarianism is not incompatible with that view; it is enough that you believe that one's independence, as you described has large utility - for some, or for all people."

But the premise was that utility would be slightly lower, and that the effect would be borne purely by me. Since utility is lower, such decisions must be immoral (not just stupid) according to utilitarianism.

@ Scott Sumner,

"It's up to you to decide in a legal sense"

But it should not be up to me according to utilitarianism. This is one way of illustrating how it does not respect the dignity due to the individual (though maybe there is none and utilitarianism is right).

As for the numerical examples, my point is not that it is wrong to choose one way or another (an individual gets to choose whatever he wants in my view). Rather, it may make a difference how and when utility is derived. Could it all come to people living in the year 2525 with negative utility to everyone else, could utility come from activity X rather than activity Y, etc.? If total utility is the same it should make no difference.

acarraro writes:

Can you really say we know that utility is ordinal rather than cardinal? I always thought that it was mostly an assumption. What would be an example of an experiment that proves that statement? I would say that in most real-life situations we behave more consistently with cardinal utility: surely cost-benefit analysis is pretty popular.
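A quick sketch of the distinction acarraro is raising (the utility functions here are illustrative, not drawn from any actual experiment): ordinal utility is invariant under any strictly increasing transformation, so two such functions always rank sure outcomes the same way, but cardinal comparisons like expected utility over gambles can flip.

```python
# Hypothetical illustration: ordinal rankings survive a monotone
# transformation of utility, but expected-utility comparisons need not.

def rank(options, u):
    """Order sure outcomes from best to worst under utility function u."""
    return sorted(options, key=u, reverse=True)

def expected_utility(lottery, u):
    """A lottery is a list of (probability, outcome) pairs."""
    return sum(p * u(x) for p, x in lottery)

outcomes = [0, 1, 2]
u = lambda x: x ** 0.5   # concave (risk-averse)
v = lambda x: x ** 2     # convex (risk-loving); a monotone transform of u on [0, inf)

# Ordinal content is preserved: both functions rank sure outcomes identically.
assert rank(outcomes, u) == rank(outcomes, v) == [2, 1, 0]

# Cardinal content is not: 1 for sure versus a 50/50 gamble between 0 and 2.
sure_thing = [(1.0, 1)]
gamble = [(0.5, 0), (0.5, 2)]
prefers_sure_under_u = expected_utility(sure_thing, u) > expected_utility(gamble, u)
prefers_sure_under_v = expected_utility(sure_thing, v) > expected_utility(gamble, v)
print(prefers_sure_under_u, prefers_sure_under_v)  # True False
```

The flip in the last two lines is the standard argument that choices over sure outcomes alone cannot pin down a cardinal scale; choices over gambles are needed for that.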

Nonetheless I would tend to agree that determining the moral course of action is extremely hard if you have a utilitarian framework. I would agree it's impossible to be certain that any action is the morally correct one.

On the other hand, you really need free will and moral desert to justify libertarian morality. I find it really hard to believe in either. There seems to be so little proof that you deserve what you get in life that I cannot really share the libertarian view.

The utilitarian view seems more attractive to me. I would certainly concede it's probably somewhat unnatural. My genes only care about their own survival and they are unlikely to worry about fairness.

I also note that almost all societies have some form of taxes. So redistribution cannot be that bad if it evolved more or less independently in many circumstances. That seems a good reason to believe that "taxes are robbery" is as bad as "property is theft" as a moral philosophy.

Greg Heslop writes:
"On the other hand, you really need free will and moral desert to justify libertarian morality."

Without free will, I don't see much point in moral discussions at all, but the idea that one deserves what one gets is not necessary for a libertarian morality. How about the Nozickean idea of rightful appropriation? A few thousand people voluntarily give Wilt Chamberlain two bits to watch him play and everyone is better off as a result. It's libertarian but whether deserts are just or not in the sense of being in accord with character is a non-issue.

acarraro writes:

I don't think the lack of free will invalidates moral discussion at all. I don't think a utilitarian would see free will as a requirement, for example. Pain and pleasure still exist even if we have no free will. Most people would agree you have some moral obligations (e.g. no cruelty) to animals even if they have no free will.

Surely even libertarians worry about the fairness of initial conditions. When did I ever consent to be less smart than a Nobel prize winner or less quick than Bolt? Voluntary exchange only guarantees there is no increase in unfairness. I think you still need to assume the fairness of the initial condition to be happy with the final distribution after exchange.

Greg Heslop writes:

@ acarraro,

You have a point that lack of free will does not imply absence of pain and pleasure. I was getting at the idea that, since we are not morally responsible for our actions in a world of no free will, morality loses significance. I suppose discussions and "ethics" could still serve some purpose, however. But trying to make people change their minds (e.g. by discussion) on epistemically sound grounds is not something I would consider worthwhile if there is no free will. After all, chemicals in the brain would really decide what a person thinks, not reason and argument (though maybe those chemicals respond to epistemically sound reasons?).

Initial appropriation (of unowned things) is a cause for concern for libertarians. However, I am not sure such appropriation would have had to happen according to moral desert. It would suffice if nobody's (negative) rights were violated. I have never seen a really good argument for why being born less fast than Usain Bolt would violate anyone's negative rights.

eccentric-opinion writes:

Pajser:

"It is not the problem specific for utilitarianism but for all non-subjectively based ethical theories."

Yes, and for that reason, I reject all non-subjective (or, more properly, agent-neutral) ethical theories. The utilitarian claims that there's an objectively best world, but "objectively best" in the sense that it's used here is an ill-formed concept because value is agent-relative, and one agent's best world may not be another agent's best world. The Marxist argument assumes that there is such a thing as "human essence", which is a baseless metaphysical assertion.


Scott:

Not only can people not follow the implications of utilitarianism, they have no reason to be utilitarians. Utilitarianism is a normative ethical theory, which means that it makes claims about how people ought to act. If utilitarianism is true, that means that people have reasons to act as utilitarianism requires. But if people don't have a reason to act as utilitarianism requires (which is what I argued above), then utilitarianism is false.

James writes:

All you opponents of utilitarianism keep getting suckered into the honeypot of arguing against utilitarianism on the grounds that utilitarianism has unreasonable implications. That battle is surely yours, but it's irrelevant until an actual utilitarian shows up.

They'll be easy to spot because the language they use will sound like something out of an operations research textbook (sensitivity analysis, gradient descent, model risk, additive separability, etc) because that's what matters when you treat moral questions as though they were optimization problems.

In practice, no utilitarian talks like this because no utilitarian does this, not even the most committed professional utilitarians publishing in philosophy journals. Has anyone ever come across a utilitarian studying convex sets or numerical methods just to be sure the ideas they recommend are moral? No way. Do Richard Brandt or Peter Singer claim that sound moral reasoning requires partial derivatives? Of course not. The optimization talk may be aspirational or it may just be window dressing, but it's not at all descriptive of what any self-proclaimed utilitarian actually does.
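For what it's worth, the kind of exercise James describes can be sketched in a few lines. Everything below is hypothetical: the utility functions, the population, and the budget are toy assumptions, and the greedy allocator is just one crude way to chase an aggregate-utility maximum.

```python
# A toy version of "moral question as optimization problem": allocate a
# fixed budget across people to maximize aggregate utility. All inputs
# here are hypothetical stand-ins, not a real utilitarian calculation.
import math

def aggregate_utility(allocation, utility_fns):
    """Total utility: the sum of each person's utility from their share."""
    return sum(u(x) for u, x in zip(utility_fns, allocation))

def greedy_allocate(budget, utility_fns, step=1.0):
    """Hand out the budget one step at a time to whoever gains most from it.
    With concave (diminishing-marginal-utility) functions this approximates
    the aggregate-utility-maximizing allocation."""
    allocation = [0.0] * len(utility_fns)
    for _ in range(int(budget / step)):
        gains = [u(x + step) - u(x) for u, x in zip(utility_fns, allocation)]
        allocation[gains.index(max(gains))] += step
    return allocation

# Two identical log-utility people: the maximizer splits the budget evenly.
people = [lambda x: math.log1p(x), lambda x: math.log1p(x)]
best = greedy_allocate(100, people)
print(best)  # [50.0, 50.0]
```

Note how quickly the hard part disappears into the assumptions: the code is trivial, but specifying real, comparable utility functions for real people is exactly what no utilitarian can do, which is James's point.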

Brian writes:

Scott,

Wait a minute. You said you deny scientific objective truth. That implies that no examples of scientific objective truth should exist. For me to prove otherwise, I only need to find ONE example that everyone agrees is true, not make up a list of 100,000.

Also, you say that there is no external authority to verify that something is true. But I don't need that to show objectivity. Once everyone agrees that something is true, and indeed cannot not be true, it is already clear that the truth I've subjectively identified is not limited to me or my imagination. I would say that 1+1=2 is one of many such examples.

Finally, here is what you are missing in my evolution argument. Take a step back and ask what the purpose of ethics/morality is in the first place. Why do we need such a system anyway? The purpose is to promote what is good and avoid what is bad (or evil). At its heart, utilitarianism equates happiness/pleasure with the good and pain with the bad.

Evolution implies that this equating is fundamentally in error. Why? For evolution, "good" is genetic fitness. Pain and pleasure are merely the means by which organisms achieve this good; they are not ends in themselves. But in the modern world, "good" is no longer simply a matter of genetic fitness, so the means are no longer well suited to achieving "good." On top of that, goodness is an end in itself. Conflating that end with the means of pleasure and pain, as utilitarianism does, is a fundamental categorical error. As means, pleasure and pain are highly contingent and cannot serve as the basis for morality.

Brian writes:

Greg G,

Thinking in terms of probability is important, to be sure, but that doesn't imply that nothing is certain. Some things ARE certain, like 1+1=2, and we shouldn't pretend otherwise. Statements are proved true within a given system, which is defined by various definitions and axioms. Such statements are objectively true within that system. Everyone capable of understanding the claim would agree that the angles of a triangle add up to 180 degrees in flat space. It's objectively true REGARDLESS OF WHETHER PHYSICAL SPACE IS FLAT OR NOT.

I think the distinction between subjective and objective is pretty obvious and straightforward. Subjectivity implies that it depends on me as an individual. Objectivity implies that it is external or independent of me and my cognition. Surely the observation that everyone agrees on something shows that it is objective.

Pajser writes:

eccentric-opinion: "because value is agent-relative, and one agent's best world may not be another agent's best world."

True, but it doesn't follow that the objectively best world is a meaningless concept. Let us imagine a racist who values the life of a white man much more than the life of a black man. Then he learns something and concludes that his earlier system of values was wrong. Most people will see that as an improvement in his system of values. If we accept that we are onto something when we say that his subjective system of values objectively improved, then we can search for the objectively best system of values.

"The Marxist argument assumes that there is such a thing as "human essence", which is a baseless metaphysical assertion."

The search for essences is a usual philosophical and scientific procedure. For instance, in the Encyclopedia Britannica article "Life," the author, a biologist, tries to determine what the essence of life is, i.e. how exactly life is different from non-life. It is not a meaningless question. Many books on capitalism try to identify the essence of capitalism. How exactly is capitalism different from other economic systems? Then there is little reason not to ask what the human essence is.

However, one can be uncomfortable with this kind of ambitious philosophical notion, and insist on more usual notions. That is not a problem. Instead of claiming that creation is the human essence, I can say something much weaker but still sufficient: that people have a strong passion for creation and improvement, even if such creation and improvement has no obvious pragmatic purpose. A logical consequence is that people already want, or will want, to create the perfect or best possible world. I believe that most of the people on this blog already discuss from that perspective.

eccentric-opinion writes:

Pajser:

The case of the racist is an example of people agreeing with his new values and disagreeing with his old ones. If society were full of racists, him becoming anti-racist would be seen as him adopting worse values. What seems like an objective improvement is merely movement in a direction that people agree with. If objective values are possible, they are not to be found by looking at people's opinions of different values, because people can have all kinds of opinions.

Regarding essence, it's written about well here.

If people have a passion for improvement that has no pragmatic purpose, why call it improvement? If something isn't judged to be better according to someone's values, in what sense is it better? Of course people want to create a better world, but what "better world" means varies depending on each individual's preferences. It is perhaps meaningful to talk of a shared better world when making Pareto improvements, but beyond that, one person's better world may not be another's.

Floccina writes:

Pushing the fat man onto the tracks could lead to a degradation of the entire society, so maybe in the long run it leads to more death and destruction.

Pajser writes:

eccentric-opinion: "What seems like an objective improvement is merely movement in a direction that people agree with. If objective values are possible, they are not to be found by looking at people's opinions of different values, because people can have all kinds of opinions."

You are right, knowledge about values cannot be found by looking only at the opinions. One must dig deeper. But the point is the notion of the improvement of a system of values. People do not say only that the ex-racist's system of values is now closer to theirs; they say that his system of values improved. That means they do not think that systems of values are purely subjective. If they believe that a system of values can be improved, they have good reason to search for the objectively best system of values.

"If something isn't judged to be better according to someone's values, in what sense is it better?"

It is not only that the world is judged by the system of values; it goes the other way as well. One judges and improves his system of values on the basis of the information he has. One might judge that Picasso's paintings are not valuable. In an attempt to understand why some people disagree, it may happen that he changes his system of values. People question and improve their systems of values all the time. It must be that they have some motive for that. Empathy and the passion for creation are possible motives for a move toward an objective system of values.
Nathan Smith writes:

@ Scott: Fair enough, and thanks for responding. But when I hear people argue for gay marriage, they're usually not making a utilitarian case; they usually talk about "equal rights," which is a very un-utilitarian way of thinking, if "thinking" isn't too generous a term for such an incoherent mantra as "equal rights."

eccentric-opinion writes:

Pajser:

"People do not say only that ex-racist's system of values is now closer to their; they say that his system of values improved."

People also say that some books and movies are better than others, not just that they like them more, but that doesn't mean that certain books or movies are actually objectively better than others. From the inside, from the point of view of the agent doing the valuation, it may feel like something that they like more is objectively better, and so they may express their preferences in terms of objective values, but that doesn't mean that the objective values are actually there.

"One judges and improves his system of values on the base of information he has."

One's instrumental values, yes, one judges them based on how well they achieve a terminal value. But one's terminal values are foundational and aren't subject to being disproven by information.

Pajser writes:

eccentric-opinion: "From the inside, from the point of view of the agent doing the valuation, it may feel like something that they like more is objectively better, and so they may express their preferences in terms of objective values, but that doesn't mean that the objective values are actually there."

True, but it still means that they believe that an objective system of values exists, and that a move in that direction is an improvement. It seems like an answer to the question "why would anyone move from his subjective system of values?" Maybe one can say that an "objective system of values" is a subjective value for many individuals.

Maybe an "objective system of values" is not everyone's subjective value. An architect can face the dilemma of whether he should design the house in the way most profitable for him, or design the best house he can with less profit. If he at least sometimes chooses the second, it seems like an indication, almost evidence, that he has already moved toward an objective system of values. I can imagine people who do not have any tendency toward an objective system of values. But all the people I know at least sometimes do "the right thing."

"One's instrumental values, yes, one judges them based on how well they achieve a terminal value. But one's terminal values are foundational and aren't subject to being disproven by information."
Why? Let's say that my terminal value is my own happiness, and your terminal value is moderate altruism. I can analyze your results and conclude that, surprisingly, you are happier than I am exactly because your terminal value is what it is. I can try to act like I'm a moderate altruist although I'm not. But it doesn't give such good results. So I have a rational reason to change my terminal value from my own happiness to moderate altruism.
eccentric-opinion writes:

Pajser:

"they believe that objective system of values exist, and that move in that direction is improvement. It seems as answer on question 'why would anyone move from his subjective system of values'."

We're talking about two different people here, and it's important to keep them straight. The first is someone who's changing his values (e.g. the racist), and the second is the one who's evaluating whether the first's change in values is for the better. For the first, he's not changing his values because he wants to move closer to "objective values"; he's changing an instrumental value because he finds out that racism is empirically mistaken, or he discovers that racism isn't implied by the rest of what he believes and is making himself more consistent within his own value system. The second person, who is not a racist, judges the first's change to be positive. From the second person's perspective, it feels like the first is moving towards objective values, but what's really happening is that the first person is merely agreeing more with the second.

"An architect can face the dilemma whether he should design the house on the way most profitable for him, or the best house he can design with less profit. If he at least sometimes chooses the second, it seems like indication, almost evidence that he already moved toward objective system of values."

If an architect values things other than profit, he can design houses that aren't optimal from a profit-only perspective. It doesn't mean anything for objective values, it just means that he has more than one (subjective) value.

"I can analyze your results, and conclude that, surprisingly, you are happier than I am exactly because your terminal value is such as it is. I can try to act like I'm moderate altruist although I'm not. But it doesn't give so good results. So, I have rational reason to change my terminal value from my own happiness to moderate altruism."

You could try that, but it would be a failure because when you'd finally be happy, you wouldn't care about it anymore, so you'd fail to achieve your values anyway.

Pajser writes:

eccentric-opinion: "The second person, who is not a racist, judges the first's change to be positive. From the second person's perspective, it feels like the first is moving towards objective values, but what's really happening is that the first person is merely agreeing more with the second."

Your thesis is that when the second person says "the ex-racist's system of values improved," he is wrong; he has an illusion. That is a strong thesis. It may be true or false.

But it is not essential, because illusions provide motives just as objective truths do. People run away from the illusion of a lion just as they run away from a real lion. So, if one believes that moving toward an objective system of values is possible, and that it is an improvement, he has a reason to move toward that system of values. I think this addresses your original criticism of utilitarianism.

Now, back to the thesis that the utilitarian system of values is not an objective system of values, or even a step toward it. The move to a utilitarian system of values is similar to the move to the heliocentric system. The assumption that one is at the center of the world is removed. If objective truth exists, it seems a step toward it. If our theories do not have any grain of objective truth, removal of subjective assumptions doesn't push us closer to objective truth. But I think we have that grain. That is the claim that pain exists and it is bad, i.e. a negative value. Even on the assumption that we live in the Matrix, pain still exists and it is still bad. Even on the assumption that we do not exist, as Buddhists believe, pain still exists and it is still bad. So it seems that surprisingly radical skepticism is needed to reject the existence of an objective system of values.

"You could try that, but it would be a failure because when you'd finally be happy, you wouldn't care about it anymore, so you'd fail to achieve your values anyway."
I don't know that I'll fail to achieve my new system of values.