Bryan Caplan  

Why Aren't Academic Economists Bayesians?

Almost all economic models assume that human beings are Bayesians: They start with some prior beliefs about how the world works, and update those beliefs using Bayes' Theorem as new information arrives.  Behavioral economists often question whether people are in fact Bayesians, but they agree that we should be.  (See e.g. the epilogue to The Winner's Curse).  It is striking, then, to realize that academic economists are not Bayesians.  And they're proud of it!

This is clearest for theorists.  Their epistemology is simple: Either something has been (a) proven with certainty, or (b) no one knows - and no intellectually respectable person will say more.  If no one has proven that Comparative Advantage still holds with imperfect competition, transportation costs, and indivisibilities, only an ignoramus would jump the gun and recommend free trade in a world with these characteristics.

Empirical economists' deviation from Bayesianism is more subtle.  Their epistemology is rooted in classical statistics.  The respectable researcher comes to the data an agnostic, and leaves believing "whatever the data say."  When there's no data that meets their standards, they mimic the theorists' snobby agnosticism.  If you mention "common sense," they'll scoff.  If you remind them that even classical statistics assumes that you can trust the data - and the scholars who study it - they harrumph.

I frequently encountered this anti-Bayesian mind-set among Princeton labor economists, who scorned e.g. all common-sense doubts about Card-Krueger's research on the minimum wage.  Most of the skeptics quibbled with the quality of the research.  They couldn't bear to admit that (a) the research was high-quality, but (b) it would take vastly more research of vastly higher quality to convince them that employers buy just as much labor (or more!) when its price rises.  If the critics had been thorough Bayesians, they would have said something like what I say during the first week of my Ph.D. Micro class:

Bayes' Rule provides a natural framework for scientists to relate hypotheses to evidence. Let A be your hypothesis and B be some evidence; then calculate P(A|B).

Ex: The P(minimum wage causes unemployment|Card/Krueger study's findings).  Suppose P(CK findings|m.w. does cause unemployment)=.3, P(CK findings|m.w. does not cause unemployment)=.8, P(m.w. does cause unemployment)=.99, and P(m.w. does not cause unemployment)=.01.  Then the conditional probability comes out to .3*.99/(.3*.99+.8*.01)=97.4%.
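The arithmetic above can be checked with a short sketch. The probabilities are the hypothetical numbers from the example, not empirical estimates:

```python
# A quick check of the Bayes' Rule arithmetic above. The probabilities are
# the hypothetical numbers from the example, not empirical estimates.

def posterior(p_e_given_h, p_e_given_not_h, p_h):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * p_h
    return numerator / (numerator + p_e_given_not_h * (1 - p_h))

p = posterior(p_e_given_h=0.3,      # P(CK findings | m.w. causes unemployment)
              p_e_given_not_h=0.8,  # P(CK findings | m.w. does not)
              p_h=0.99)             # prior that m.w. causes unemployment
print(f"{p:.1%}")  # 97.4%
```

Even taking the Card/Krueger findings fully at face value, the strong prior keeps the posterior above 97%.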
My question: Why aren't academic economists Bayesians?  If even iconoclastic behavioral economists agree that rational agents should be Bayesians, what excuse have academic economists got to be anything else?


TRACKBACKS (1 to date)
The author at Fahreunblog, in a related article titled La Scienza messa su un rigo ("Science Put on a Single Line"), writes:
    Is science nothing but one big statistic? My answer is a capital-letter yes. The logicians of the twentieth century reduced Aristotle to a couple of short pages. But someone, back in the eighteenth century, had already reduced Science, even the science yet to come, to a single r [Tracked on November 18, 2009 2:49 AM]
COMMENTS (19 to date)
Doc Merlin writes:

As per the earlier Card/Krueger stuff.
Immigration doesn't just increase the supply of labor, it also increases the demand for products and goods, and thus the demand for labor. When you expand adult population, you aren't just expanding labor, you are expanding the entire economy. It also allows for increases in economies of scale and in specialization, thus there is upside without much downside, so long as the supply of housing, food, etc isn't restricted.

Mike writes:

I think that coming out and saying something like that makes the professor look rigid and unwilling to change, which is a bigger concern in academia than being wrong about something. So the obvious choice is to hold contrary results to higher standards, looking flexible while maintaining rigidity.

aretae writes:


I'd say something almost the opposite of Mike's comment, though that may be because I'm misreading him. Bayesianism requires that you change your mind (probabilistically) when new evidence arrives. This doctrine is highly corrosive to sacred cows.
Great post/question.

US writes:

Whether or not your claim is correct, I'd say as an economist you're supposed to know the answer to your own question:

It's all about the incentives people face, right?

If economists aren't honest Bayesians, it's probably because they do not have any strong incentives to be honest Bayesians.

Peter writes:

Austrian economists likely dismiss Bayesian reasoning because it looks like math, which was proven to be disjoint with economics in 1949 by Mises :) Other economists either reject the Bayes framework altogether, or they just prefer not to concede any ground because that would signal weakness.

Another reason is that if everyone used Bayesian methods and showed their priors, the arguments would amount to "I don't like your prior". That's not a very interesting argument.

Robin Hanson writes:

I respond here.

Richard A. writes:

While immigration is causing the demand curve for labor to shift to the right, the supply curve for unskilled labor has been shifting to the right at an even faster rate because immigrants nowadays are on average less skilled than the natives.

Ed Hanson writes:

I suspect the answer is much simpler. Bayes' Theorem is too difficult for most economists to understand. It would be an interesting pop quiz at some meeting such as the yearly Jackson Hole symposium.

Les writes:

I have been taught Bayesian methods, and my comments are not based upon ignorance of Bayesian methods.

My concern with Bayesian methods is that they elevate mere hypothesis or opinion to the status of observed past frequencies. While I respect hypothesis and opinion, I do not see why they rise to the level of observed past frequencies.

Peter Twieg writes:

I think Les has a good point. More generally, I'd say that although most economists would probably support Bayesianism in the abstract, when it comes to trying to convince others to change their opinions to be in line with yours... Bayesianism can become a hindrance because you can run into the "I accept your evidence for not-P, but my prior for P was 99.9999% coming into this debate and now it's 99.99%, so I'm still going to argue for P." It's very easy to hide behind priors that are difficult, if not impossible, to scrutinize.

Consequently, there's a tendency to want to say that only evidence that has been put out "on the table" is worthy of being a part of the relevant information set. If people are poor Bayesians, then appealing to Bayes' Rule might actually be suboptimal if most of their errors are hidden from correction.

Norman writes:

Peter: 'if everyone used Bayes methods and showed their priors the arguments would amount to "I don't like your prior".'

This is why Bayesian statistics typically includes prior sensitivity tests. If the prior doesn't matter much, then an opponent's dislike of it also doesn't matter much.

Les: 'it elevates mere hypothesis or opinion to the status of observed past frequencies.'

This isn't really a problem if your initial priors have very low precision. And again, indicating how sensitive results are to the prior can mostly alleviate this.
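As an illustration of the sensitivity check described above, here is a minimal sketch using a conjugate Beta-Binomial model; the data and the three priors are hypothetical, invented for this example:

```python
# A minimal sketch of a prior-sensitivity check, using a conjugate
# Beta-Binomial model. The data and the three priors are hypothetical,
# invented for this illustration.

data_successes, data_failures = 60, 40  # hypothetical observed data

priors = {
    "flat Beta(1, 1)":             (1, 1),
    "weak skeptical Beta(2, 8)":   (2, 8),
    "strong confident Beta(80, 20)": (80, 20),
}

results = {}
for name, (a, b) in priors.items():
    # Beta prior + binomial data -> Beta posterior (conjugate update)
    post_a, post_b = a + data_successes, b + data_failures
    results[name] = post_a / (post_a + post_b)
    print(f"{name}: posterior mean = {results[name]:.3f}")
```

With the two low-precision priors the posterior means stay close (0.598 vs. 0.564), so an opponent's dislike of either prior matters little; only the very strong prior moves the estimate substantially (0.700).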

I think the key problem is that Bayesianism imposes an exact degree of certainty or tentativeness on an individual's knowledge, regardless of the context. Thinking of this post on persuasion at MR, this restriction could be a rhetorical problem for economists.

If we want to be convincing we need to be able to seem tentative around audiences who think of us as experts (the general public), but more confident around those who think of us as less authoritative (other economists). Bayesianism doesn't allow this level of manipulation.

Walt French writes:

Per the logic of @Les: suppose that I hypothesize that the core of the moon is made of blue cheese; I take the recent finding of water on the surface ("having oozed out of the cheese") as weak but supportive evidence for my hypothesis. p(cheese|water) looks better than p(rock|water) - maybe even infinitely better, since you can't get water out of actual rock.

Bayesian logic is a great insight -- I use a derivative of it for monitoring over a billion dollars of investment strategies for my firm -- but hardly an end-all, be-all. You still need to account for the quality of your hypothesis.

Brandon Berg writes:

Why is anybody still talking about Card and Krueger? Given the counterintuitive nature of the findings, combined with the controversy around the study, there must have been follow-up studies.

If Card and Krueger's findings were duplicated, then it's not just about Card and Krueger; it's about a body of evidence. If not, then it's likely that they were wrong. Either way, why is anybody still talking about that one study?

MikeDC writes:

Several thoughts:
1. Economists as academic entrepreneurs hope their writings will make a substantial difference in their fields. Thus, they ignore Bayesian thinking and put too much weight on the flow of new ideas rather than the stock of existing ideas.

The Card-Krueger example is relevant here. If a well-done study that yields a result contrary to the status quo belief only changes the probability from 99% likely to 97.4% likely, why would someone undertake such a study in the first place? You wouldn't bother, and if you did, you wouldn't expect to get published by studying a question resolved to a high level of certainty.

By discounting the mountainous stock of existing evidence that should settle questions (like the effect of price floors), we allow many economists a nice make-work living doing impressive looking and politically useful stuff instead of a difficult and ambiguous living answering and dealing with unsettled questions.

2. We don't have any objective means of quantifying these probabilities any more than we have of calculating utils.

Tracy W writes:

How experienced are most economists with Bayesian reasoning?
I was taught Bayes' rule in statistics and in econometrics. But none of my professors ever went through the process of formally changing their views based on Bayesian reasoning, nor did they require us to do so.
Dan Willingham makes an important distinction between rote knowledge, "inflexible knowledge" and "deep knowledge". Rote knowledge is mere memorisation; inflexible knowledge is beyond rote, but the student will struggle to apply the knowledge to slightly different new applications, and won't think of it naturally. Deep knowledge, to quote Willingham directly, "can be accessed out of the context in which it was learned and applied in new contexts".
Now while everyone wants deep knowledge, or flexible knowledge, inflexible knowledge appears to be an essential stage for most people in learning something new. People have to spend time practising something and seeing it in a variety of examples before they really have a deep understanding that can be applied to new problems.

So if anyone wants to increase the amount of Bayesian analysis amongst economists, or any other group, they should be modelling it a lot more and running workshops, and generally expecting that it will take a lot of time to change people's thinking habits.

Daublin writes:

A simple explanation would be that no one really trusts all the studies going around. Perhaps paper publishing is largely a credentialling process, not really part of scientific inquiry.

On a related note, like Tracy asks, how much training in research methods do economics grad students really get? They might not even know in theory how to search for truth.

axg writes:

A lot of scientific research is not really - and should not be - inference or decision making. Instead, it is best for all if the researcher tries to focus on presenting the results or evidence that she _herself_ has gathered (or proven, or ...).

Ultimately, there are end-uses of research, and in a Bayesian context the end-user can bring in their own prior, evidence from wherever they can find it (including the published research), their own purposes, and their own utility functions. Often they would be ill-served if some researcher took it upon herself to muddy the data with the researcher's own priors and utilities.

(For a Bayesian, this distinction is actually fairly easy to see and we have a great tool for the task of _presenting evidence_ - at least where statistical - : likelihood functions.)
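A minimal sketch of this division of labor - the researcher publishes a likelihood function, and each end-user combines it with his or her own prior. The binomial data, the grid approximation, and the two priors are all hypothetical, invented for this example:

```python
# Sketch of evidence-presentation via a likelihood function: the researcher
# publishes the likelihood of her (hypothetical) data, and each end-user
# combines it with his or her own prior. Data and priors are invented.

def likelihood(theta, successes=7, failures=3):
    """Binomial likelihood (up to a constant) of the hypothetical data."""
    return theta ** successes * (1 - theta) ** failures

grid = [i / 100 for i in range(1, 100)]  # coarse grid over theta in (0, 1)

priors = {
    "uniform prior":   lambda t: 1.0,
    "skeptical prior": lambda t: (1 - t) ** 4,  # weights low theta heavily
}

posterior_means = {}
for name, prior in priors.items():
    weights = [prior(t) * likelihood(t) for t in grid]
    total = sum(weights)
    posterior_means[name] = sum(t * w for t, w in zip(grid, weights)) / total
    print(f"{name}: posterior mean of theta = {posterior_means[name]:.3f}")
```

The same published likelihood yields different posteriors for different end-users, without the researcher ever having to commit to a prior herself.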

Classical statistics, on the other hand, is such an incoherent muddle that it's hard to honor the inference-vs-evidence dichotomy even if you have a clear idea of what you are actually doing. But if you look at the actual methods that have become conventional, you'll see that in practice many of them seem to have evolved so that published research retains some value from the evidence-presentation perspective. "I'll believe just what my data say (and not let any prior knowledge or common sense intrude)" is silly and pompous, and looks sillier still if the analysis talks of "hypothesis tests" and such nonsense.
But the _effect_ is published research that is a fairly pure distillation of the new evidence added by the researcher herself, and often this is in fact what is most appropriate.

Nelk writes:

To be Bayesian or not is a broader issue than whether to be a Bayesian economist. There is a long-term epistemic saga beyond the specific problem of the use of Bayesian methods in econometrics. Although I agree that computational problems, and the hysteresis effects they produce, explain some part of it, I think the old philosophical question of the proper weight between a priori ideas (reason) and empirical observation (data) is the underlying big issue. This is not just a little methodological dispute among academic economists, but one of the central problems of philosophy.
