Bryan Caplan  

The Common Sense of Bayesianism


Bayes' Rule is central to modern economics and modern psychology. According to Bayes' Rule, a rational person starts with some beliefs about probabilities (his "priors") and changes them in a particular way as new information arrives, in order to reach new beliefs (his "posteriors"). Psychologists usually emphasize that people should use Bayes' Rule; economists are more likely to assume that people do use Bayes' Rule.
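To make the rule concrete, here is a minimal numerical sketch (my own toy example, not from the post): a hypothetical diagnostic test, where the prior is the base rate of a disease and the posterior is your belief after seeing a positive result.

```python
# Bayes' Rule: P(H|E) = P(E|H) * P(H) / P(E)
# Hypothetical numbers: a disease with 1% prevalence (the prior),
# a test with 95% sensitivity and a 5% false-positive rate.

prior = 0.01            # P(disease)
sensitivity = 0.95      # P(positive | disease)
false_positive = 0.05   # P(positive | no disease)

# Total probability of a positive result, by the law of total probability
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Posterior: belief in the disease after seeing a positive test
posterior = sensitivity * prior / p_positive
print(round(posterior, 3))  # 0.161 -- far below 95%, because the prior was low
```

The point of the example is exactly the one the post emphasizes: the posterior depends heavily on the prior you started with.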

The main problem with Bayes' Rule is that it doesn't say where priors come from, or which ones you should have. It is tempting to say that every prior is just your last posterior. But where did your first prior come from? If you picked the wrong one, then everything based on it could be wrong as well.

Although he didn't use Bayesian language, this is one of the main questions that the neglected Scottish philosopher Thomas Reid grappled with. His answer is simple: your priors should be based on common sense.

Without studying Reid's works, it is easy to dismiss his claim as either (a) foolish, because common sense is so often wrong, or (b) platitudinous, because everyone claims that their views are common sense. But in the end, he prevails. It is true that common sense has been wrong before: Contrary to appearances, the earth moves around the sun. But how do we unseat a common sense claim? Only by using even stronger common sense claims - like "perception is valid" - against weaker ones. And it is not true that everyone claims that their views are common sense - many philosophers delight in doing the opposite.

The quickest way to understand Reid is with a simple example. Suppose someone shows that (A implies B). B is totally contrary to common sense. Should you accept B? Hardly. If (A implies B), then (Not-B implies not-A). That's basic logic. Therefore you have not one but two valid arguments. Which one should you accept? The one with the more obvious premise. You have to ask yourself, "Which is more likely? That A is true, or that not-B is true?"
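The logical step in that example - that (A implies B) is equivalent to (not-B implies not-A) - can be verified mechanically. A quick sketch (mine, for illustration) that checks every row of the truth table:

```python
from itertools import product

# Material implication: "p implies q" is false only when p is true and q is false
def implies(p, q):
    return (not p) or q

# Contraposition: (A -> B) and (not-B -> not-A) agree on all four truth assignments
for a, b in product([True, False], repeat=2):
    assert implies(a, b) == implies(not b, not a)

print("A -> B is equivalent to not-B -> not-A on every row")
```

So the two arguments really are logically on a par; as the post says, all that separates them is which premise is more plausible.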

This may seem far away from economics, but it's not. Reid's emphasis on common sense is very helpful when you read or conduct empirical research. For example, unlike many critics of Card and Krueger's research on the minimum wage, I never angrily dismissed their work as dishonest trickery. My reaction was simply that they were probably wrong because their conclusion went against common sense. Which is more likely? That employers do not buy less labor when wages rise? Or that econometrics is not that reliable in this area? I go with the latter. Of course, per Bayes' Rule, C-K's research made me less confident than I was before. But not too much. (See here, pp.81-3, for exact calculations).

P.S. Philosopher Michael Huemer of the University of Colorado defends the philosophy of common sense better than Reid himself did. For a sampler, check out here, here, and here. Yes, sometimes the sequel is better than the original.


COMMENTS (6 to date)
John Thacker writes:

It's worth pointing out that with enough common observation and data, and many trials, almost all posteriors converge to the same distribution, regardless of the original prior. (Exceptions made for the set of measure close to zero of "insane" priors.)
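[This convergence claim is easy to check with a small simulation - a toy sketch, with numbers chosen purely for illustration: two observers with very different Beta priors about a coin's bias see the same long run of flips, and their posterior means nearly agree.]

```python
# Two observers with very different Beta priors about a coin's bias.
# Toy data: 6,000 heads in 10,000 flips.

heads, flips = 6000, 10000
tails = flips - heads

# A Beta(a, b) prior updated on binomial data gives Beta(a + heads, b + tails),
# whose mean is (a + heads) / (a + b + flips).
def posterior_mean(a, b):
    return (a + heads) / (a + b + flips)

skeptic = posterior_mean(1, 1)     # flat prior
believer = posterior_mean(50, 5)   # strongly expects heads

print(round(skeptic, 3), round(believer, 3))  # both close to 0.6
```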

Danno writes:

While the logical demonstration is true, I think it has been pretty clearly demonstrated that most people don't intuitively apply modus tollens, or basic logic generally.

I forget the name of this demonstration, but it goes like this, "A set of cards has two sides, a letter and a number. If there's a vowel on the front of a card then there's a prime on the back. You see 4 cards in front of you with B, E, 3, and 4. How many cards do you need to turn over to verify that the rule is true?"

What this means, near as I can say, is that while common-sense evaluation of premises as true or false is probably fairly accurate, sound logical reasoning is not something you'll commonly see people exhibit without training.
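[Danno's card puzzle can be worked through mechanically - a sketch of mine, spoiler included: a card needs flipping only if its visible face could be hiding a counterexample to the rule.]

```python
# Rule to test: "if there's a vowel on the front, then there's a prime on the back"
# A card must be flipped only if its visible face could hide a counterexample:
#   - a visible vowel (the hidden number might not be prime)
#   - a visible non-prime number (the hidden letter might be a vowel)

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, n))

def must_flip(face):
    if face.isalpha():
        return face.lower() in "aeiou"     # vowel: its back must be checked
    return not is_prime(int(face))         # non-prime: its back must be checked

cards = ["B", "E", "3", "4"]
print([c for c in cards if must_flip(c)])  # ['E', '4'] -- only two cards
```

Note that "3" stays face down: the rule says vowels have primes on the back, not that primes have vowels on the front - which is exactly the modus tollens trap most people fall into.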

Timothy writes:

The first commenter is right: so long as you don't start with a dogmatic prior (probability exactly 1 or 0), the probabilities will converge to the "right" number over enough trials.

Matt McIntosh writes:

What John Thacker and Timothy said. They beat me to it.

And Danno, that's called the Wason selection task.

Stormy writes:

I would not touch a card. But then logic is not about common sense. Unfortunately, the real world does not provide us with logical problems that are nothing more than tautologies.

Logic is fine for unraveling implications of something we think is true. Then we can check for inconsistencies; then return, making a different assertion.

But what happens if we live in a world where something can be both true and false simultaneously…in short, Schrödinger's Cat. Meow.

Andrew Gelman writes:

You write, "The main problem with Bayes' Rule is that it doesn't say where priors come from..." I'd rather say, "The main problem with Bayes' Rule (along with other parametric statistical procedures) is that it doesn't say where the likelihood comes from." In the basic statistical setup, the prior comes in once, the likelihood comes in n times, once for each data point. Accepting the likelihood and worrying about the prior distribution is like swallowing a camel and straining at a gnat.

(This is not a criticism of the rest of your entry, just a clarification of this one point.)
