David R. Henderson  

No More Stiglers


I've been pondering a post that development economist Chris Blattman wrote earlier this month. It's disturbing, partly on its own and partly because it meshes with other things that are clearly observable in the economics profession.

Here's part of his post:

- I predict that, to get published in top journals, experimental papers are going to be expected to confront the multiple treatments and multiple outcomes problem head on.
- This means that experiments starting today that do not tackle this issue will find it harder to get into major journals in five years.
- I think this could mean that researchers are going to start to reduce the number of outcomes and treatments they plan to test, or at least prioritize some tests over others in pre-analysis plans.
- I think it is also going to push experimenters to increase sample sizes, to be able to meet these more strenuous standards. If so, I'd expect this to reduce the quantity of field experiments that get done.
- Experiments are probably the field's most expensive kind of research, so any increase in demands for statistical power or technical improvements could have a disproportionately large effect on the number of experiments that get done.
- This will probably put field experiments even further out of the reach of younger scholars or sole authors, pushing the field to larger and more team-based work.
- I also expect that higher standards will be disproportionately applied to experiments. So in some sense it will raise the bar for some work over others. Younger and junior scholars will have stronger incentives to do observational work.
- On some level, this will make everyone more careful about what is and is not statistically significant. More precision is a good thing. But at what cost?
- Well, for one, I expect it to make experiments a little more rote and boring.
- I can tell you from experience it is excruciating to polish these papers to the point that a top journal and its exacting referees will accept them. I appreciate the importance of this polish, but I have a hard time believing the current state is the optimal allocation of scholarly effort. The opportunity cost of time is huge.
- Also, all of this is fighting over fairly ad hoc thresholds of statistical significance. Rather than think of this as "we're applying a common standard to all our work more correctly," you could instead think of this as "we're elevating the bar for believing certain types of results over others."
- Finally, and most importantly to me, if you think that the generalizability of any one field experiment is low, then a large number of smaller but less precise experiments in different places is probably better than a smaller number of large, very precise studies.
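The "multiple treatments and multiple outcomes problem" Blattman describes can be made concrete with a short simulation (an illustrative sketch of my own; the function name and numbers are not from his post): if an experiment with no true effects tests many outcomes, each at the 5% level, the chance that at least one comes out spuriously "significant" grows quickly.

```python
import random

def familywise_error_rate(n_outcomes, alpha=0.05, n_sims=20000, seed=0):
    """Monte Carlo estimate of the chance that at least one of
    n_outcomes independent tests looks 'significant' when every
    null hypothesis is actually true."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        # Under a true null, each test's p-value is uniform on [0, 1],
        # so it falls below alpha with probability alpha.
        if any(rng.random() < alpha for _ in range(n_outcomes)):
            hits += 1
    return hits / n_sims

# With 10 outcomes and no real effects, roughly 40% of experiments
# will report at least one 'significant' result at the 5% level
# (analytically, 1 - 0.95**10 is about 0.40).
for k in (1, 5, 10, 20):
    print(k, familywise_error_rate(k), 1 - 0.95 ** k)
```

This is the arithmetic behind referees' growing insistence on pre-analysis plans and multiple-testing corrections.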

My concern has to do with more than experimental economics. It has to do with virtually all empirical economics in the top journals.

At an economist's luncheon at Stanford last month, I was talking to Jonathan Meer of Texas A&M University. He told me that increasingly the top journals that publish empirical work are selecting studies done on proprietary data sets. He mentioned a graph that Raj Chetty, now at Stanford, presented showing this.

I said the following:

Here's how I see the empirical work of George Stigler. He had a new idea that made sense and was often surprisingly simple and/or obvious. But it had not been presented before. Or it had been presented before, but no one had gone out to get data to test the idea. George would find data, typically available in government reports or in other data sets that were not proprietary, and test his idea. Usually the data confirmed his hypothesis. Then, over the next 5 to 10 years, other economists would come along with better data and typically find results confirming Stigler's original idea. Stigler didn't publish these articles in so-so journals, but in the top journals at the time: the Journal of Political Economy, the American Economic Review, and the Journal of Law and Economics, to name three. Here's my question, Jonathan. Could we have a Stigler today who would come up with simple ideas, test them with so-so data, and get his/her articles published in a top journal?

Jonathan answered "No."




CATEGORIES: Economic Methods




COMMENTS (18 to date)
Jonathan Meer writes:

I've put the two charts to which I was referring at

http://econweb.tamu.edu/jmeer/chetty1.png
and
http://econweb.tamu.edu/jmeer/chetty2.png

They're taken from Raj's (excellent) PhD public econ lecture notes available at http://www.rajchetty.com/index.php/lecture-videos

Jon Murphy writes:

Question: is being published in a top journal really that important? Forgive me for this seemingly simple question but I am not as heavily into academia as you are.

I guess the better way to rephrase my question is: if an idea gets published in a so-so journal, will it die there? Does an idea have to be published in a top journal to be recognized?

David R. Henderson writes:

@Jon Murphy,
Question: is being published in a top journal really that important?
It’s important for careers. It would be hard to get tenure at any of the top 20, or maybe even the top 40, graduate econ schools without at least one hit, or maybe more, in a top journal.
if an idea gets published in a so-so journal, will it die there?
Not necessarily. It has a better shot if it’s in a top journal but it’s a matter of probabilities.
Does an idea have to be published in a top journal to be recognized?
No. Fortunately, some of the ideas come out in books. But economists get very little credit for books, even when they’re refereed. This was a huge issue in my becoming a full professor earlier this year. Even the statement that recognized my accomplishments, written by someone (I think) who was pro-me, mentioned none of my books.
It’s a weird world, Jon, so no “Forgive me” necessary. :-)

jc writes:

I'm reminded of that saying that innovation comes from the periphery. Big breakthroughs and prominent new theories have come from reputable but far from elite journals, where authors who get favorable reviewers may be given a bit more freedom to think outside the box or offer a really interesting idea with a test whose execution isn't as airtight as we'd like.

But, yeah, from a career standpoint it often seems like the only currency is publications in a handful of journals.

This measure of quality - and, admittedly, one can legitimately argue that it (getting in) is as good a measure as we have - may be more important than actual quality (e.g., writing a book that changes the way everyone thinks about an important topic may get you less credit in some circles than an extremely well done paper that carefully isolates the cause of a relatively minor effect).

Charley Hooper writes:

As hurdles for advancement become more standardized, centralized, and rigid, they become less applicable to relevant issues and end up promoting candidates who are more methodical and mainstream but less creative.

I'm reminded of the civil service examination system in imperial China that spanned perhaps 2,000 years. By focusing on the classics and religion, as opposed to more practical subjects like actually administering an office, the exams ended up selecting for civil service those who had little knowledge of their actual job functions.

On the other side we have Jesus and Socrates--two of the greatest teachers ever. Neither published anything and neither would have received tenure at a modern university.

Don Boudreaux writes:

To Jon Murphy's question about whether or not a good economics idea will die if it isn't published in a top journal, I second all that David says above. But I also add that economics ideas, as such, seem to me to be published less and less in top journals. Top journals are increasingly becoming the province of very elaborate econometric studies (many of which are excellent and useful, but some of which - despite their econometric sophistication - are neither) and of ideas about how to better conduct and refine such studies.

A straightforward economics idea - one that gives readers good reason to think differently about how to understand reality - today stands little chance of being published in a top econ journal unless it is (1) accompanied by an elaborate econometric test of its validity (even if such a test is unnecessary to support the usefulness of the idea), or (2) translated into impressive-looking mathematics (even if such a translation is unnecessary, or even counterproductive, to a clear conveyance of the idea).

Think of some of the classic 'ideas' papers of the past. How many of them would stand a chance of being published in the AER or, today, even the Journal of Law & Economics? I'll bet (if I could) that Coase's 1960 JLE paper would today receive a desk rejection if it were submitted to the JLE. Ditto for Alchian's 1950 paper. With the possible exception of his 1962 paper on externalities with Craig Stubblebine, the same holds for Jim Buchanan's papers: not one contains elaborate econometrics, and very few contain much mathematics.

Hayek's "Use of Knowledge in Society"? Forget about it. Harold Demsetz's "Barriers to Entry" or "Why Regulate Utilities?" Uh-uh. Gordon Tullock's 1967 paper on rent-seeking? Nope. George Stigler's 1964 article on the theory of oligopoly or his 1971 piece on the economics of regulation? No way. Henry Manne's classic 1965 JPE article on the market for corporate control? Impossible. Oliver Williamson's 1968 AER paper on economies as an antitrust defense? Highly unlikely.

Were a modern-day Schumpeter to offer the section on creative destruction (that appears in Capitalism, Socialism, and Democracy) to a top journal, it's inconceivable that it would be accepted for publication. Indeed, it's probably true, I think, that even Ben Klein and Keith Leffler's famous 1981 JPE piece on the role of market forces in assuring contractual performances would be unable today to be published in a top-ranked economics journal.

One of the most important articles in economics in the past quarter century - Bob Higgs's 1997 article on regime uncertainty - is not, and (sadly) could not have been, published in a top economics journal.

Books have become, for economists with ideas, the chief means of conveying those ideas. For example, Tyler Cowen's theory of how commerce promotes cultural diversity and flourishing was presented in no major article; rather, it was presented in a book. Ditto for Bryan Caplan's idea of rational irrationality. (I list here two examples of ideas that I applaud, but the same is true even for ideas that I don't applaud. The point is that top econ journals no longer feature economic ideas.)

One of the distinguishing characteristics of George Mason University Economics is that we are, as Tyler often and correctly says, "a book culture." That's unusual. Books matter here at GMU; books count - and they count big time.

Nathan W writes:

I don't understand the obsession with the 5% threshold. Determine your methods in advance, describe the process of refining the models/results, and report the outcomes.

Should we throw out research because p=0.12, even though there are lots of interesting things going on?

In a world of p-hacking, we should be more interested to know that p-hacking didn't occur, and accept that some interesting results might not meet the magical standard of p < 0.05.

Not only might the fact of NOT having high statistical significance be an important finding itself, it may also drive others to search for other model specifications or hypotheses to explain the unexplained.

I also strongly feel that public access to data should be a very top consideration in the decision to publish research. Otherwise, it is far less likely that anyone else will spend the time/money to check the results.
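Nathan W's point about the arbitrariness of the 5% threshold connects to Blattman's cost worry: tightening the threshold makes experiments much more expensive. A textbook back-of-the-envelope power calculation (an illustrative sketch; the function name and parameter values are my assumptions, not from any commenter) shows how fast the required sample grows.

```python
import math
from statistics import NormalDist

def n_per_arm(effect_sd, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-sided, two-sample
    z-test to detect a difference of `effect_sd` standard deviations
    with the given significance level and power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_sd ** 2)

# Bonferroni-correcting a 5% threshold for ten outcomes (alpha = 0.005)
# raises the required sample per arm by roughly 70% in this example.
for a in (0.05, 0.01, 0.005):
    print(a, n_per_arm(0.2, alpha=a))
```

For a small effect of 0.2 standard deviations, the stricter threshold pushes the per-arm sample from the high 300s toward the high 600s, which is exactly the kind of cost increase Blattman predicts will shrink the number of field experiments.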

David R. Henderson writes:

@Jonathan Meer,
Thanks.

David R. Henderson writes:

@Don Boudreaux,
Outstanding comment! Thanks.

Gene Laber writes:

Don Boudreaux refers to GMU's "book culture" at the end of comments that can reasonably be described as skeptical of the direction of contemporary economics and economics journals. I agree with much of what he says, but I wonder if the GMU emphasis and its book culture has any effect on the market for its PhD graduates. I recently skimmed the most recent JEL listing of doctoral dissertations. The first category is History of Economic Thought. GMU accounted for 4 of the 5 listings in that category. It accounted for 2 out of several pages of listings for Micro, zero for Macro and Monetary Economics, zero for International, zero for Financial Economics, 1 for Public Economics, zero in Labor, and 3 in Law and Economics. I could have missed some GMU dissertations, since I did a quick scan. But it would seem, judging from my quick review, that the PhD grads are not spread broadly across the discipline. Is that true? And aside from that, what percentage of your PhD grads are ending up in top 20 or top 30 econ depts?

Richard writes:

What is an economic explanation for this phenomenon? Mainstream academic economics looks to me like a bubble. Or maybe an eddy current swirling far away from real-world economic issues but somehow maintained by the odd, local economics of universities.

How much could a tulip bulb fetch? How much econometrics appears in "top economics journals"? They both look unreal to me, so I wonder if there may be a common thread in their cause.

The usual suspect writes:

Look--this is wrong. Two counterexamples are Nathaniel Hendren and Neale Mahoney's job market papers, which use the HRS and MEPS surveys, respectively, and both ended up in top-5 journals (Econometrica and AER). The reason they can 'get away with' using lower-quality survey datasets and make it into top journals is that they make sufficiently important theoretical/conceptual contributions. Stigler's empirics weren't especially great, but his theoretical contributions were enough to carry the day at the time. Lately, the field has been moving toward more empirically focused training, so if you're doing this kind of work, especially since the data is out there, you don't have an excuse for not doing more rigorous empirical testing.

As for Don's comment, he's probably slightly right, but completely wrong to think this is a bad thing. Modern economists now, rightly, expect people with 'big ideas' to appeal to evidence. Or to formalize their theories (apparently 'unnecessary' to clear conveyance--perhaps to Don, but not to, well, actual theorists). And frankly, the idea that journals don't convey ideas is the kind of sentiment you can expect from someone who hasn't done any serious academic reading in decades. An example of 'big ideas' in journals, whether you like the idea or not, is Acemoglu and Robinson's work on institutional quality, starting with Acemoglu et al.'s "The Colonial Origins of Comparative Development." If you're looking for a well-published, well-regarded, high-concept paper with really crappy data, there you go--without all the tumescent prose.

Gene Laber: I would imagine nearly zero GMU grads are making it to top 30 departments. A small part of that is the unfortunate pedigree bias in economics, but most of it is the fact that the faculty are so deeply disconnected from the way modern economics is done that there's not really any way they can train their students to fit in other departments. You can see their recent placements if you google "gmu economics job placements" (I believe the comment system here flags direct links). The ones who do OK are the ones who are students of faculty actually engaged in publishing.

Richard: If that were really true, economists wouldn't be in demand at the CEA, at central and private banks, at litigation consulting, etc. etc. I would urge you and other readers not to take David and Don's complaints at face value. They're hardly experts on the modern trajectory of the field.

David R. Henderson writes:

@The usual suspect,
Thank you for that counterpoint.
Would you agree--I think you said you would--that Hayek’s Use of Knowledge would not be accepted in any top journal today? And, if so, do you see that as a bad thing or a good thing?

Scott Sumner writes:

This is really sad. At times like these I'm glad I'm old.

The usual suspect writes:

David, I find the question rather pointless. Would the article, in its 1945 form, be published? Perhaps not. Would it be written in 2015 in the same way it was written in 1945? Even less likely. Hayek was a smart guy, who wrote a paper in 1945 the way you'd expect to write a paper in 1945. He'd probably still be a smart guy in 2015 and write his papers the way we do now.

I frequently see these sorts of "kids these days"-type rants claiming that the greats of yesteryear would be completely unsuccessful in modern economics. But it's not like their ideas haven't been translated into the 'contemporary language.' For example, the modern work on the EMH, both theoretical and empirical, is certainly in the spirit of and of the same flavor as Hayek 1945. Similar concepts from yesteryear that Don mentions have not been forgotten--transaction costs, externalities, oligopoly, etc. Don (and similar others) claims that the formalization of these concepts muddles them, but the mechanism by which this occurs has never been made clear to me. (To be exceedingly uncharitable, I frequently read this claim as communicating the statement "I can't understand modern papers anymore," which is unfortunate for the claimant but not necessarily unfortunate for the field.)

Why would Hayek 1945 probably not be published today? Frequently you see people summarize it as making the point that prices communicate information, which is a valuable conceptual point. But reading the body of the paper itself, there are plenty of statements in there that are woefully undersupported by modern standards, things treated as self-evident. Statements like "we cannot expect that this problem will be solved by first communicating all this knowledge to a central board which, after integrating all knowledge, issues its orders" are given with little basis. And, in the paraphrased words of a referee report I just received, do you really need 12 pages of text to make such a simple conceptual point?

The usual suspect writes:

By the way, on the original point, even as someone whose field is nearly completely reliant on proprietary datasets, I do find the shift towards them unsettling. Particularly as Chetty seems to be the sole proprietor of many of these, it forces even more concentration of gains toward the very top of the profession, and engenders quite a bit of inequality in outcomes.

To push back on Meer's point, though, part of the decline in using survey datasets is the increase in RCTs (not included in those graphs' denominator, but probably a substitute for survey data), whose data is often even more public-use than the CPS or SIPP, and the increase in empirical macro (which I assume ends up under 'administrative data,' but who knows?).

Richard writes:

@The usual suspect,
In my gesture to seek explanation for why mainstream economics has, in my view, veered far from economic reality, I mentioned the effect of economic incentives in universities. But to move toward completeness I should also mention the effect of governments, or simply of the state. There often seems to be a symbiotic relationship between educational institutions and state agencies; they can support each other as they both feed upon an underlying productive economy.

Almost all people, not being particularly educated in real-world economics, tend to believe that government should regulate the economy, or at least intervene where it is "really necessary." Some yet-to-be-decided extent of intervention sounds right to most people. What sounds right typically wins in democratic elections.

So there is this question: Should (or how much should) government guide the economy? This question, which hangs above democratic government, feeds the profession of mainstream economics.

Some deniers of government beneficence may argue that government cannot plan an economy and, further, that government's gestures of regulation are immoral. But the deniers have little influence. The big question of how government should regulate the economy continues to hang over the democratic process.

So the government staffs up with fancy-educated economists, whose primary purpose, in my view, is to justify more feeding by government and its cronies, feeding which is always labeled with a positive-sounding name. This finally gets back to your objection that fancy-educated economists are in demand. This explains why economists are hired by the CEA and central banks.

The private banks which survive in such a mixed economy must also staff up with fancy-educated economists, because such economists know the language which enables them to face off against the government team, the government team saying "yes we should," the private-bank team saying "no you shouldn't."

Fancy-educated economists, in this ecology I am sketching, do not necessarily need to know anything about how goods and services are produced and distributed. But they must be good at arguments before democratically empowered commissions. It is not unlike being a candidate for elective office. The winner needs to assemble an appearance of knowledge accepted by 51% of the rationally-ignorant decision makers.

That, I propose, is the heat under the cooking of econometrics. But clearly my argument is full of oversimplification. I need a government grant so I can hire an office full of economists to work on this :)

The usual suspect writes:

[Comment removed. Please consult our comment policies and check your email for explanation.--Econlib Ed.]

Comments for this entry have been closed