I’ve never found critiques of utilitarianism to be persuasive. Here are a few I’ve run across:

1. The critic describes a horrible scenario, and then asks the reader to assume that it results in greater total utility. The scenario might be war, slavery, or the entire world’s wealth being held by one man. These thought experiments are supposed to persuade us that utilitarianism is flawed, because it might lead us to approve of various horrible political/economic/social systems. I react differently: all they show is that the horrible scenario being envisioned would almost certainly make the world a less happy place, and hence my support for utilitarianism becomes even stronger.

2. Other criticisms seem based on cognitive illusions like innumeracy. Imagine great harm to a single person, and a bit of extra pleasure to millions, where the total effect is a net positive to aggregate utility. We are supposed to think this is a cruel and immoral trade-off. But such examples simply take advantage of the fact that we can much more easily imagine great harm to a single person than small gains to millions. In everyday life we are (correctly) willing to make exactly that sort of trade-off. We drive to the store rather than walking, knowing there is a slight chance of dying in an auto accident. We might inoculate millions of people against an uncomfortable but nonfatal disease, knowing a few might die from the vaccine. Indeed, the entire cost/benefit approach to things like highway safety improvements relies on exactly that sort of utilitarian logic (see the arithmetic sketch after this list).

3. We are told that utilitarianism might lead to the conclusion that people would prefer to leave “real life” and be hooked up to a virtual reality “happiness machine.” Poll results supposedly show that this is not so. But the polls are faulty, and merely show a bias for the here and now over the “elsewhere.” Even worse, society seems to be rushing headlong into a virtual reality world anyway.

4. Other “counterexamples” take advantage of illogical moral intuitions that have evolved for Darwinian reasons, like discomfort at pushing a fat man in front of a trolley car to prevent even more deaths.
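The innumeracy point in item 2 is easy to check with simple arithmetic. Here is a minimal sketch in Python; every number in it is hypothetical, chosen only to illustrate how millions of tiny gains can outweigh a large harm concentrated on a few people:

```python
# A minimal sketch of the aggregate-utility arithmetic behind the vaccine
# example above. All quantities are hypothetical illustrations, not real
# epidemiological data.

population = 10_000_000          # people inoculated
utility_gain_per_person = 0.001  # tiny benefit each: an unpleasant disease avoided
vaccine_deaths = 5               # rare fatal reactions
utility_loss_per_death = 1_000.0 # very large harm, concentrated on a few people

total_gain = population * utility_gain_per_person      # 10,000 units
total_loss = vaccine_deaths * utility_loss_per_death   #  5,000 units
net_utility = total_gain - total_loss                  # +5,000 units

print(f"Aggregate gain: {total_gain:,.0f}")
print(f"Aggregate loss: {total_loss:,.0f}")
print(f"Net utility:    {net_utility:+,.0f}")
# The five deaths are vivid and the ten million small gains are not,
# which is exactly the cognitive illusion the paragraph describes.
```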

In a recent book review in the NYR of Books, Cass Sunstein offers a very thoughtful critique of the way philosophers go about constructing anti-utilitarian arguments. Here’s one excerpt, but you should read the whole thing:

Edmonds has written an entertaining, clear-headed, and fair-minded book. But his own discussion raises doubts about a widespread method of resolving ethical issues, to which he seems committed, and of which trolleyology can be counted as an extreme case. The method uses exotic moral dilemmas, usually foreign to ordinary experience, and asks people to identify their intuitions about the acceptable solution. . . . But should we really give a lot of weight to those reactions?

If we put Mill’s points about the virtues of clear rules together with a little psychology, we might approach that question in the following way. Many of our intuitive judgments reflect moral heuristics, or moral rules of thumb, which generally work well. You shouldn’t cheat or steal; you shouldn’t torture people; you certainly should not push people to their deaths. It is usually good, and even important, that people follow such rules. One reason is that if you make a case-by-case judgment about whether you ought to cheat, steal, torture, or push people to their deaths, you may well end up doing those things far too often. Perhaps your judgments will be impulsive; perhaps they will be self-serving. Clear moral proscriptions do a lot of good.

For this reason, Bernard Williams’s reference to “one thought too many” is helpful, but perhaps not for the reason he thought. It should not be taken as a convincing objection to utilitarianism, but instead as an elegant way of capturing the automatic nature of well-functioning moral heuristics (including those that lead us to give priority to the people we love). If this view is correct, then it is fortunate indeed that people are intuitively opposed to pushing the fat man. We do not want to live in a society in which people are comfortable with the idea of pushing people to their deaths–even if the morally correct answer, in the Footbridge Problem, is indeed to push the fat man.

On this view, Foot, Thomson, and Edmonds go wrong by treating our moral intuitions about exotic dilemmas not as questionable byproducts of a generally desirable moral rule, but as carrying independent authority and as worthy of independent respect. And on this view, the enterprise of doing philosophy by reference to such dilemmas is inadvertently replicating the early work of Kahneman and Tversky, by uncovering unfamiliar situations in which our intuitions, normally quite sensible, turn out to misfire. The irony is that where Kahneman and Tversky meant to devise problems that would demonstrate the misfiring, some philosophers have developed their cases with the conviction that the intuitions are entitled to a great deal of weight, and should inform our judgments about what morality requires. A legitimate question is whether an appreciation of the work of Kahneman, Tversky, and their successors might lead people to reconsider their intuitions, even in the moral domain.

Nothing I have said amounts to a demonstration that it would be acceptable to push the fat man. Kahneman and Tversky investigated heuristics in the domain of risk and uncertainty, where they could prove that people make logical errors. The same is not possible in the domain of morality. But those who would allow five people to die might consider the possibility that their intuitions reflect the views of something like Gould’s homunculus, jumping up and down and shouting, “Don’t kill an innocent person!” True, the homunculus might be right. But perhaps we should be listening to other voices.