I've been pondering a post that development economist Chris Blattman wrote earlier this month. It's disturbing, partly on its own and partly because it meshes with other things that are clearly observable in the economics profession.
Here's part of his post:
- I predict that, to get published in top journals, experimental papers are going to be expected to confront the multiple treatments and multiple outcomes problem head on.
- This means that experiments starting today that do not tackle this issue will find it harder to get into major journals in five years.
- I think this could mean that researchers are going to start to reduce the number of outcomes and treatments they plan to test, or at least prioritize some tests over others in pre-analysis plans.
- I think it could also push experimenters to increase sample sizes, to be able to meet these more strenuous standards. If so, I'd expect this to reduce the quantity of field experiments that get done.
- Experiments are probably the field's most expensive kind of research, so any increase in demands for statistical power or technical improvements could have a disproportionately large effect on the number of experiments that get done.
- This will probably put field experiments even further out of the reach of younger scholars or sole authors, pushing the field to larger and more team-based work.
- I also expect that higher standards will be disproportionately applied to experiments. So in some sense it will raise the bar for some work over others. Younger and junior scholars will have stronger incentives to do observational work.
- On some level, this will make everyone more careful about what is and is not statistically significant. More precision is a good thing. But at what cost?
- Well for one, I expect it to make experiments a little more rote and boring.
- I can tell you from experience it is excruciating to polish these papers to the point that a top journal and its exacting referees will accept them. I appreciate the importance of this polish, but I have a hard time believing the current state is the optimal allocation of scholarly effort. The opportunity cost of time is huge.
- Also, all of this is fighting over fairly ad hoc thresholds of statistical significance. Rather than think of this as "we're applying a common standard to all our work more correctly", you could instead think of this as "we're elevating the bar for believing certain types of results over others".
- Finally, and most importantly to me, if you think that the generalizability of any one field experiment is low, then a large number of smaller but less precise experiments in different places is probably better than a smaller number of large, very precise studies.
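Blattman's two statistical points, the multiple-outcomes problem and the rising sample-size demands, can be made concrete with a back-of-the-envelope calculation. The sketch below uses illustrative numbers of my own choosing, not figures from his post:

```python
from statistics import NormalDist

# 1) Family-wise error: if k independent outcomes are each tested at level
#    alpha, P(at least one false positive) = 1 - (1 - alpha)^k.
alpha = 0.05
for k in (1, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:2d} outcomes -> P(spurious significant result) = {fwer:.2f}")

# 2) The cost of correcting for it: approximate sample size per arm for a
#    two-sample z-test of a standardized effect `delta` at given alpha/power.
def n_per_arm(delta, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf
    return 2 * ((z(1 - alpha / 2) + z(power)) / delta) ** 2

# A Bonferroni correction across 10 outcomes (alpha 0.05 -> 0.005) inflates
# the sample needed to detect a small effect (delta = 0.2) by roughly 70%.
print(round(n_per_arm(0.2)))                 # ~392 per arm
print(round(n_per_arm(0.2, alpha=0.005)))   # ~666 per arm
```

With ten outcomes, the chance of at least one spurious "significant" finding is about 40 percent, and the correction that fixes it pushes the required sample, and hence the budget, up sharply, which is exactly the cost pressure Blattman predicts.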
My concern has to do with more than experimental economics. It has to do with virtually all empirical economics in the top journals.
At an economists' luncheon at Stanford last month, I was talking to Jonathan Meer of Texas A&M University. He told me that the top journals publishing empirical work are increasingly selecting studies done on proprietary data sets. He mentioned a graph that Raj Chetty, now at Stanford, presented showing this.
I said the following:
Here's how I see the empirical work of George Stigler. He had a new idea that made sense and was often surprisingly simple and/or obvious. But it had not been presented before. Or it had been presented before, but no one had gone out to get data to test the idea. George would find data, typically available in government reports or in other data sets that were not proprietary, and test his idea. Usually the data confirmed his hypothesis. Then, over the next 5 to 10 years, other economists would come along with better data and typically find results confirming Stigler's original idea. Stigler didn't publish these articles in so-so journals, but in the top journals at the time: the Journal of Political Economy, the American Economic Review, and the Journal of Law and Economics, to name three. Here's my question, Jonathan. Could we have a Stigler today who would come up with simple ideas, test them with so-so data, and get his/her articles published in a top journal?